The Status of the LHC Machine

T.M. Taylor
CERN, 1211 Geneva 23, Switzerland
Tom.Taylor@cern.ch
Abstract
The construction of the LHC is proceeding well. The problems encountered with civil engineering work last year are now resolved, and the letting of contracts for machine components is on schedule. Although the initial deliveries of equipment are up to a few months behind those planned, suppliers are confident that they will be able to make good on this once full-scale production is running. The machine layout is stable, as is the expected performance, and we can look forward with confidence to the first physics run in 2006.
I. INTRODUCTION<br />
The LHC (Large Hadron Collider)[1], under construction at<br />
CERN, is designed to provide proton-proton collisions at a<br />
center-of-mass energy of 14 TeV and luminosity of<br />
10^34 cm^-2 s^-1. This machine is a major advanced engineering
venture. It consists of two synchrotron rings, interleaved<br />
horizontally, the main elements of which are the two-in-one<br />
superconducting dipole and quadrupole magnets operating in<br />
superfluid helium at 1.9 K. The collider will be installed in the<br />
existing tunnel of 26.7 km in circumference which until<br />
recently housed the LEP collider. This has constrained the<br />
layout to resemble closely that of LEP, with eight identical<br />
2.8 km long arcs, separated by eight 540 m long straight<br />
sections, the centres of which are referred to as “Points”. At<br />
four of these points the beams are brought into collision.<br />
Points 1 and 5 will house the high luminosity multipurpose<br />
experiments ATLAS and CMS, for which considerable civil<br />
engineering is required. The more specialized experiments<br />
ALICE and LHCb will be installed in the existing caverns,<br />
which previously housed LEP experiments, at Points 2 and 8.<br />
Major systems of the collider itself will be installed in the<br />
remaining four straight sections. Points 3 and 7 are dedicated<br />
to beam cleaning, Point 4 to beam acceleration, and Point 6 to<br />
beam extraction. The general layout of the LHC is shown in<br />
Figure 1, and a simulated view of the installed machine is<br />
shown in Figure 2.<br />
Following a decade of R&D and technical validation of<br />
the major systems of the collider at CERN, at collaborating<br />
institutes and with industry, construction of the LHC is now<br />
underway. Contracts have been awarded to industry for the<br />
supply of superconducting magnets, cryogenic refrigeration<br />
plants and other machine equipment, and manufacture has<br />
begun. The components of some systems, such as those of the<br />
injection lines and the superconducting RF are nearing<br />
completion. The upgrading of the injector complex of existing<br />
CERN accelerators is practically finished. Civil engineering is<br />
now advancing well after some setbacks associated with the terrain, but not before leading to a review of the schedule. It is
now planned to start installation in autumn 2003, to<br />
commission the first sector (Point 8 to Point 7) in summer<br />
2004, and to complete installation by the end of 2005. After<br />
an initial set-up test with beam in spring 2006, it is foreseen to<br />
start a seven-month physics programme in autumn 2006.<br />
Figure 1: General layout of the LHC

Figure 2: Simulated view of the LHC in its tunnel
II. THE LATTICE<br />
The main parameters of the LHC as a proton collider are<br />
listed in Table 1.<br />
The design of the lattice has matured over the past years<br />
both in terms of robustness and flexibility, and critical<br />
technologies and engineering solutions have been validated,<br />
while nevertheless maintaining the initially declared<br />
performance of the machine. The FODO lattice is composed<br />
of 46 half-cells per arc; each half-cell is 53.45 m long and<br />
consists of three twin-aperture dipoles having a magnetic<br />
length of 14.3 m, and one twin aperture quadrupole, 3.1 m in<br />
length.<br />
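As a quick consistency check (a back-of-envelope sketch, not from the paper, assuming all 1232 main dipoles sit at the quoted field and magnetic length), the total dipole bending should close 2π for a 7 TeV proton beam:

```python
import math

# Cross-check the lattice numbers: total bending of the 1232 main dipoles
# vs. the 7 TeV beam, using p[GeV/c] = 0.299792458 * B[T] * rho[m]
# for a singly charged particle.
N_DIPOLES = 1232   # main dipoles in the ring
L_MAG = 14.3       # magnetic length per dipole [m]
B_FIELD = 8.33     # dipole field at collision energy [T]

rho = N_DIPOLES * L_MAG / (2 * math.pi)  # effective bending radius [m]
p = 0.299792458 * B_FIELD * rho          # beam momentum [GeV/c]
print(f"bending radius ~{rho:.0f} m, momentum ~{p / 1000:.2f} TeV/c")
```

The result lands within a fraction of a percent of 7 TeV, confirming that the dipole count, length and field quoted in the text are mutually consistent.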
Table 1: Main parameters of the LHC<br />
Energy at collision 7 TeV<br />
Energy at injection 450 GeV<br />
Dipole field at 7 TeV 8.33 T<br />
Coil inner diameter 56 mm<br />
Distance between aperture axes (1.9 K) 194 mm<br />
Luminosity 10^34 cm^-2 s^-1
Beam current 0.56 A<br />
Bunch spacing 7.48 m<br />
Bunch separation 24.95 ns<br />
Number of particles per bunch 1.1 × 10^11
Normalized transverse emittance (r.m.s.) 3.75 μm<br />
Total crossing angle 300 μrad<br />
Luminosity lifetime 10 h<br />
Energy loss per turn 6.7 keV<br />
Critical photon energy 44.1 eV<br />
Total radiated power per beam 3.8 kW<br />
Stored energy per beam 350 MJ<br />
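Several Table 1 entries can be cross-checked from the others. The sketch below is not from the paper: it assumes round Gaussian beams, β* = 0.5 m (the value quoted in Section VII), and neglects the crossing-angle luminosity reduction factor; the bunch count is derived from the beam current rather than taken as a given:

```python
import math

# Rough cross-checks of Table 1, assuming round Gaussian beams, beta* = 0.5 m
# (the value quoted in Section VII) and ignoring the crossing-angle factor.
C_RING = 26658.9        # ring circumference [m] (~26.7 km)
c = 2.99792458e8        # speed of light [m/s]
e = 1.602176634e-19     # elementary charge [C]

N = 1.1e11              # protons per bunch
I_beam = 0.56           # beam current [A]
eps_n = 3.75e-6         # normalized r.m.s. emittance [m]
beta_star = 0.5         # beta function at the interaction point [m]
gamma = 7000.0 / 0.938  # Lorentz factor at 7 TeV

f_rev = c / C_RING                            # revolution frequency [Hz]
n_b = I_beam / (e * N * f_rev)                # bunches per beam, from the current
sigma = math.sqrt(eps_n * beta_star / gamma)  # r.m.s. beam size at the IP [m]
L = N**2 * n_b * f_rev / (4 * math.pi * sigma**2)  # [m^-2 s^-1]
print(f"n_b ~{n_b:.0f}, sigma* ~{sigma * 1e6:.0f} um, L ~{L * 1e-4:.1e} cm^-2 s^-1")

# Synchrotron radiation: radiated power per beam = U0[eV] * I[A],
# with U0 = 6.7 keV energy loss per turn.
P_rad = 6.7e3 * I_beam  # [W]
print(f"radiated power per beam ~{P_rad / 1e3:.1f} kW")
```

The luminosity comes out near 1.2 × 10^34 cm^-2 s^-1; the ~20% excess over the nominal figure is roughly what the neglected crossing-angle factor removes. The radiated power reproduces the table's 3.8 kW.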
III. THE SUPERCONDUCTING MAGNETS<br />
A major technological challenge of the LHC is the<br />
industrial production of 1232 superconducting main dipoles<br />
[2] operating at 8.3 T, 400 superconducting main quadrupoles<br />
[3] producing gradients of 223 T/m, and several thousand
other superconducting magnets [4], for correcting multipole<br />
errors, steering and colliding the beams, and increasing<br />
luminosity in collision. All these magnets (Table 2), which<br />
must produce a controlled field with a precision of 10^-4, are
presently being manufactured by industry in Europe, India,<br />
Japan and the USA.<br />
A specific feature of the main dipoles, a cross-section of<br />
which appears in Figure 3, is their twin-aperture design. To<br />
produce the fields required for bending the counter-rotating<br />
beams, two sets of windings are combined in a common<br />
mechanical and magnetic structure to constitute twin-aperture<br />
magnets. This design is more compact and efficient than two<br />
separate strings of magnets, as the return flux of one aperture<br />
contributes to increasing the field in the other. The high<br />
quality field in the magnet apertures is produced by winding<br />
flat multi-strand cables, in a two-layer cos θ geometry. The<br />
large electromagnetic forces acting on the conductors are<br />
reacted by non-magnetic collars resting against the iron yoke, contained in a welded cylinder which also acts as a helium enclosure.
Table 2: Superconducting magnets in the LHC<br />
Type Quantity Purpose<br />
MB 1232 Main dipole<br />
MQ 400 Main quadrupole<br />
MSCB 376 Combined chromaticity and closed-orbit corrector
MCS 2464 Sextupole for correcting dipole persistent currents
MCDO 1232 Octupole/decapole for correcting dipole persistent currents
MO 336 Landau octupole for instability control
MQT 256 Trim quadrupole for lattice correction<br />
MCB 266 Orbit correction dipole<br />
MQM 100 Dispersion suppressor quadrupole<br />
MQX 32 Low-β insertion quadrupole<br />
MQY 20 Enlarged-aperture quadrupole<br />
Figure 3: Transverse cross-section of the dipole in its cryostat<br />
CERN supplies the superconducting cable to the<br />
companies assembling the magnets. In view of the quantity of<br />
cable required (~1250 tonnes), all European wire<br />
manufacturers (together with one in the USA and one in<br />
Japan) are involved in this production.<br />
The LHC magnets must preserve their field quality over a<br />
large dynamic range, in particular at low levels when<br />
persistent currents in the superconductor give rise to remanent<br />
field effects. This requires a small diameter of the Nb-Ti<br />
filaments in the cable strands. The chosen diameter of ~7μm<br />
represents a compromise that also takes into account the<br />
requirement to maximize overall current density in the strand.<br />
It is also necessary to apply a uniform resistive coating to the<br />
strands to control inter-strand currents. Together with the tight<br />
dimensional tolerances, these constraints have presented quite<br />
a challenge to the manufacturers, but after a difficult start<br />
most companies are now producing satisfactory material.<br />
However, with present delays of about 12 months, we shall require a large increase in throughput to satisfy the needs of magnet production as it accelerates in the coming months.
Following a decade of development and model work, final prototype magnets built in industry have permitted the validation of technical design choices and manufacturing techniques, thus leading the way for the adjudication of pre-series and series contracts for the dipoles, quadrupoles and correctors, the production of which has now started and will continue over the next four years (Figure 4). The first three
magnets of the dipole series have been tested and are<br />
acceptable for installation in the machine. Preparations for the<br />
production of the quadrupoles are advancing well, and<br />
deliveries of corrector magnets have already started.<br />
Figure 4: First pre-series superconducting magnets under test. a) Main dipole at CERN; b) Low-β quadrupole at Fermilab
IV. CRYOGENICS<br />
The LHC uses superfluid helium for cooling the magnets<br />
[5]. The main reason for this is the lower operating<br />
temperature, with corresponding increased working field of<br />
the superconductor. The low viscosity of superfluid helium<br />
enables it to permeate the magnet windings, thereby<br />
smoothing thermal disturbances, thanks to its very large<br />
specific heat (~2000 times that of the cable per unit volume),<br />
and conducting heat away, thanks to its exceptional thermal<br />
conductivity (1000 times that of OFHC copper, peaking at<br />
1.9 K).<br />
The LHC magnets operate in static baths of pressurized<br />
superfluid helium, cooled by continuous heat exchange with<br />
flowing saturated superfluid helium. This cooling scheme,<br />
which requires two-phase flow of superfluid helium in nearly<br />
horizontal tubes, has been intensively studied on test loops<br />
and validated on a full-scale prototype magnet string [6].<br />
Individual cryogenic loops extend over 107 m, the length of a<br />
lattice cell, and these loops are fed in parallel from each<br />
cryogenic plant over the 3.3 km sector length through a<br />
compound cryogenic distribution line [7] running along the<br />
cryo-magnets in the tunnel.<br />
The high thermodynamic cost of refrigeration at low<br />
temperature requires careful management of the system heat<br />
loads. This has been achieved by the combined use of<br />
intermediate shielding, multi-layer insulation and conduction<br />
intercepts in the design of the cryostats (see Figure 3), and by<br />
the installation of beam screens cooled at between 5 and 20 K<br />
by supercritical helium, for absorbing a large fraction of the<br />
beam-induced heat loads [8]. To cope with its heat load, the<br />
LHC will employ eight large helium cryogenic plants, each<br />
producing a mix of liquefaction and refrigeration at different<br />
temperatures, with an equivalent capacity of 18 kW @ 4.5 K<br />
and a coefficient of performance of 230 W/W [9]. The cold<br />
box of the first LHC cryogenic plant, presently undergoing<br />
reception tests at CERN, is shown in Figure 5.<br />
Figure 5: Coldbox of first 18 kW @ 4.5 K helium refrigerator<br />
In view of the low saturation pressure of helium at 1.8 K,<br />
the compression of high flow-rates of helium vapour over a<br />
pressure ratio of 80 requires multi-stage cold hydrodynamic<br />
compressors (Figure 6). This technology, together with that of<br />
low-pressure heat exchangers, was developed specifically for<br />
this purpose. Following detailed thermodynamic studies and<br />
prototyping conducted in partnership with industry, eight<br />
2400 W @ 1.8 K refrigeration units have been ordered from
two companies, and the first one has been delivered to CERN<br />
for reception tests. The overall coefficient of performance of<br />
these units, when connected to the conventional 4.5 K helium<br />
refrigerators, is about 900 W/W.<br />
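The quoted coefficients of performance can be put in perspective against the Carnot limit; a minimal sketch, assuming heat rejection at 300 K ambient:

```python
# Compare the quoted coefficients of performance (W of input power per W of
# heat removed) with the Carnot limit, assuming heat rejection at 300 K.
T_AMBIENT = 300.0

def carnot_cop(t_cold, t_warm=T_AMBIENT):
    """Minimum W/W needed to extract heat at t_cold and reject it at t_warm."""
    return (t_warm - t_cold) / t_cold

for t_cold, quoted in [(4.5, 230.0), (1.8, 900.0)]:
    ideal = carnot_cop(t_cold)
    print(f"{t_cold} K: Carnot {ideal:.0f} W/W, quoted {quoted:.0f} W/W "
          f"-> ~{100 * ideal / quoted:.0f}% of Carnot")
```

At roughly 29% and 18% of the Carnot limit respectively, these figures are close to the state of the art for large helium refrigeration plants.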
Figure 6: Impellers of cold compressors for the first 2.4 kW @ 1.8 K<br />
refrigeration unit<br />
V. HIGH TEMPERATURE SUPERCONDUCTOR<br />
CURRENT LEADS<br />
Powering the magnet circuits in the LHC will require<br />
feeding up to 3.4 MA into the cryogenic environment. Using<br />
resistive vapour-cooled current leads for this purpose would<br />
result in a heavy liquefaction load. The favourable cooling<br />
conditions provided by 20 K gaseous helium available in the<br />
LHC cryogenic system make the use of HTS-based current<br />
leads in this system particularly attractive. With a comfortable<br />
temperature difference to extract the heat from the resistive<br />
section in a compact heat exchanger, this allows operation of<br />
the upper end of the HTS section below 50 K, at which the<br />
presently available materials, e.g., BSCCO 2223 in a<br />
silver/gold matrix, exhibit much higher critical current density<br />
than at the usual 77 K provided by liquid nitrogen cooling.<br />
The thermodynamic appeal of such HTS-based current leads<br />
is presented in Table 3.<br />
Table 3: Performance of HTS-based current leads for the LHC,<br />
compared to resistive vapour-cooled leads<br />
Lead type: (a) Resistive, vapour-cooled (4 to 300 K); (b) HTS (4 to 50 K) plus resistive, gas-cooled (50 to 300 K)
Heat into LHe [W/kA]: (a) 1.1; (b) 0.1
Total exergy [W/kA]: (a) 430; (b) 150
Electrical power [W/kA]: (a) 1430; (b) 500
After conducting tests on material samples, CERN<br />
procured from industry and tested extensively prototypes of<br />
HTS-based current leads for 13 kA and 0.6 kA. This has<br />
enabled us to demonstrate feasibility and performance of this<br />
solution, to identify potential construction problems, to<br />
address transient behaviour and control issues, and to prepare<br />
the way for procurement of series units [10].<br />
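Scaled to the full 3.4 MA quoted above, the per-kA figures of Table 3 translate into system-level savings; a rough sketch (assuming all of the current is fed through HTS-based leads):

```python
# Scale the per-kA figures of Table 3 to the full 3.4 MA fed into the
# cryogenic environment (assuming all of it uses HTS-based leads).
TOTAL_CURRENT_KA = 3400.0  # 3.4 MA expressed in kA

resistive = {"heat into LHe": 1.1, "total exergy": 430.0, "electrical power": 1430.0}
hts = {"heat into LHe": 0.1, "total exergy": 150.0, "electrical power": 500.0}

savings = {}
for key in resistive:
    savings[key] = (resistive[key] - hts[key]) * TOTAL_CURRENT_KA  # [W]
    print(f"{key}: ~{savings[key] / 1e3:.1f} kW saved")
```

The ~3.4 kW spared at liquid-helium temperature is the decisive figure, given the ~230 W/W thermodynamic cost of removing it quoted in Section IV.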
VI. INSTALLATION AND TEST STRING 2<br />
The tight constraints of the LHC tunnel, the large quantity<br />
of equipment to be transported and installed, and the limited<br />
time for installation require detailed preparation, including<br />
both CAD simulation and full-scale modeling. Using the<br />
information from verifications of the tunnel geometry that<br />
were performed in 1999, 3D mock-ups have been developed<br />
for critical tunnel sections. As a result of these studies areas of<br />
interference have been identified, and transport and<br />
installation scenarios have been confirmed. The recent<br />
experience gained in the assembly of the first half of Test<br />
String 2 [11], featuring a full 107 m long cell comprising<br />
dipoles and quadrupoles, has been of great value in validating<br />
the techniques and tooling developed for the installation and<br />
interconnection of the lattice magnets. The Test String is<br />
shown in Figure 7.<br />
Figure 7: Test String 2.
VII. PERFORMANCE AND UPGRADE POTENTIAL<br />
It is confidently expected that in the first year of running a luminosity of 10^33 cm^-2 s^-1 will be achieved at the nominal centre-of-mass energy of 7+7 TeV, and that the machine will provide an integrated luminosity of 10 fb^-1 during the first six-month period of physics data-taking. It will probably then take another two to three years to ramp up to the nominal peak luminosity of 10^34 cm^-2 s^-1 in the high luminosity
experiments. As concerns upgrade potential, the accelerator is<br />
being engineered so as to allow the possibility of achieving up<br />
to about 7.5 TeV per beam, but this may require changing<br />
some of the weaker dipoles. It can also be envisaged to further<br />
increase the luminosity by up to a factor of two by reducing<br />
the β−function at the interaction point from 0.5 m to 0.25 m,<br />
but this will call for the replacement of the inner triplet<br />
quadrupoles with larger aperture magnets. This new<br />
generation of high field superconducting magnets, based on<br />
the use of Nb3Sn or Nb3Al material, is presently the subject of<br />
R&D in several laboratories; we expect that in 5-7 years it<br />
should be possible to embark on the production of a small<br />
series suitable for the low-β insertions. Studies are also<br />
underway regarding more radical upgrading of the machine,<br />
such as increasing the luminosity by another factor of five, or<br />
replacing the main lattice magnets with more powerful<br />
magnets to take the beam energy up to 10-12 TeV. But this<br />
will be for a far more distant future.<br />
VIII. CONCLUSION<br />
After a decade of comprehensive R&D, the LHC<br />
construction is now in full swing [12]. Industrial contracts<br />
have been awarded for the procurement of most of the 7000<br />
superconducting magnets and for the largest helium cryogenic<br />
system ever built, and the production of this equipment is<br />
underway. Although located at CERN and basically funded by<br />
its twenty member states, the project, which will serve the<br />
world’s high-energy physics community, is supported by a<br />
global collaboration, with special contributions from Canada,<br />
India, Japan, Russia and the USA. A full-scale test of the first<br />
sector is planned for 2004, and colliding beams for physics<br />
are expected to be available from 2006 onwards.<br />
IX. REFERENCES<br />
1. The LHC Study Group, The Large Hadron Collider,<br />
Conceptual Design, CERN/AC/95-05, 1995.<br />
2. Wyss, C., "The LHC Magnet Programme: from<br />
Accelerator Physics Requirements to Production in<br />
Industry", in Proc. EPAC2000, edited by J.L. Laclare et<br />
al., Austrian Academy of Science Press, Vienna, Austria,<br />
2000, pp. 207-211.<br />
3. Billan, J. et al., "Performance of the Prototypes and Startup<br />
of Series Fabrication of the LHC Arc Quadrupoles",<br />
paper presented at PAC2001, Chicago, USA, 2001.<br />
4. Siegel, N., "Overview of LHC Magnets other than the Main Dipoles", in Proc. EPAC2000, edited by J.L. Laclare et al., Austrian Academy of Science Press, Vienna, Austria, 2000, pp. 23-27.
5. Lebrun, Ph., "Cryogenics for the Large Hadron Collider", IEEE Trans. Appl. Superconductivity 10, pp. 1500-1506 (2000).
6. Claudet, G. & Aymar, R., "Tore Supra and He-II Cooling of Large High-field Magnets", in Adv. Cryo. Eng. 35A, edited by R.W. Fast, Plenum, New York, 1990, pp. 55-67.
7. Erdt, W. et al., "The LHC Cryogenic Distribution Line: Functional Specification and Conceptual Design", in Adv. Cryo. Eng. 45B, edited by Q.-S. Shu, Kluwer Academic/Plenum, New York, 2000, pp. 1387-1394.
8. Gröbner, O., "The LHC Vacuum System", in Proc. PAC97, edited by M. Comyn, M.K. Craddock & M. Reiser, IEEE, Piscataway, New Jersey, USA, 1998, pp. 3542-3546.
9. Claudet, S. et al., "Economics of Large Helium Cryogenic Systems: Experience from Recent Projects at CERN", in Adv. Cryo. Eng. 45B, edited by Q.-S. Shu, Kluwer Academic/Plenum, New York, 2000, pp. 1301-1308.
10. Ballarino, A., "High-temperature Superconducting<br />
Current Leads for the Large Hadron Collider", IEEE<br />
Trans. Appl. Superconductivity 9, pp. 523-526 (1999).<br />
11. Bordry, F. et al., "The Commissioning of the LHC Test String 2", paper presented at PAC2001, Chicago, USA, 2001.
12. Ostojic, R., "Status and Challenges of LHC<br />
Construction", invited paper at PAC2001, Chicago, USA,<br />
2001.
Trends and Challenges in High Speed Microprocessor Design<br />
Kerry Bernstein<br />
IBM Microelectronics<br />
Essex Junction, VT USA<br />
Phone: (802) 769-6897 Fax: (802) 769-6744 Internet: kbernste@us.ibm.com<br />
Abstract<br />
Entropy is a worthy adversary! High performance logic design<br />
in next-generation CMOS lithography must address an<br />
increasing array of challenges in order to deliver superior performance,<br />
power consumption, reliability and cost. Technology<br />
scaling is reaching fundamental quantum-mechanical<br />
boundaries! This paper reviews example mechanisms which<br />
threaten deep submicron VLSI circuit design, such as tunneling,<br />
radiation-induced logic corruption, and on-chip delay<br />
variability. Architectures, circuit topologies, and device technologies<br />
under development are explored which extend “evolutionary”<br />
concepts and introduce “revolutionary” paradigms.<br />
It will be these revolutionary technologies which will bring<br />
our industry to the threshold of human compute capability.<br />
Introduction<br />
The overwhelming success of VLSI arises from the convergence of advances in multiple disciplines: MOSFET device design, process development, innovative new circuit topologies, and powerful new state machine architectures. Each has
consistently contributed opportunities for improving transaction<br />
throughput. So successful has been this progression, that<br />
limits in the design space must now be confronted. This progression, known as scaling [1], has provided benchmarks for each discipline, generation over generation. We first examine
the scaling experience, look at example mechanisms limiting<br />
continued scaling, and then explore how designs have<br />
responded to these new capabilities and limitations. Finally,<br />
we will muse over the compute power continued scaling may<br />
enable.<br />
Scaling Experience<br />
Scaling refers to the practice of simultaneously reducing a collection<br />
of key electrical and physical design parameters by a<br />
constant value. Figure 1 shows the application of scale factor a to the physical dimensions of a MOSFET. Frank et al. describe how the retention of these relationships preserves device optimization [2]. This relationship has in fact been preserved, more or less, through multiple generations of CMOS lithography, yielding the performance trend shown in Figure 2.
Also evident is the requirement more recently for the constant<br />
infusion of innovative structures and materials to sustain this<br />
improvement. The underlying engine driving this capability is<br />
photolithography. Smaller and smaller minimum critical<br />
dimensions have given rise to the channel length reductions seen, and hence to improved speed, provided the other boundary conditions that follow are met.
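The ideal scaling relationships of Figure 1 can be sketched as follows; the parameter values below are illustrative, not from the paper:

```python
# Classical constant-field scaling: dimensions, voltage and delay all
# shrink by 1/alpha, while power density stays constant. The baseline
# parameter values here are illustrative only.
def scale(params, alpha):
    """Apply ideal scaling by factor alpha to a device parameter dict."""
    return {
        "length_um": params["length_um"] / alpha,
        "voltage_v": params["voltage_v"] / alpha,
        "delay_ps": params["delay_ps"] / alpha,
        # P ~ C V^2 f: each device's power drops as 1/alpha^2 ...
        "power_per_device": params["power_per_device"] / alpha**2,
        # ... but its area drops as 1/alpha^2 too, so power density holds.
        "power_density": params["power_density"],
    }

gen0 = {"length_um": 0.5, "voltage_v": 3.3, "delay_ps": 100.0,
        "power_per_device": 1.0, "power_density": 1.0}
gen1 = scale(gen0, alpha=1.4)  # one lithography generation, alpha ~ sqrt(2)
print(gen1)
```

The constant power density is the key promise of ideal scaling; as the next section notes, the inability to scale threshold voltage along with supply is one of the ways this promise breaks down.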
Scaling Limitations<br />
Nonetheless, even with innovation, “Moore’s Law” has been<br />
observed to be eroding. A roll-off in device performance arises<br />
from the inability to scale threshold voltage as quickly as supply.<br />
This results in a reduction of the (V_GS - V_T) overdrive voltage.
Process tolerance presents a second challenge to scaling. As<br />
critical parameter tolerance becomes harder to maintain at<br />
smaller lithography, the amount of timing margin consumed<br />
by the resulting delay variation impacts yield. Figure 3 shows<br />
the offset between the functionality window (defined by the<br />
dispersion of delay in paths of varying composition) and the<br />
full process tolerance window. It is evident that to maintain<br />
yield, performance must be sacrificed in the form of margin.<br />
Aside from process, voltage and temperature variation across<br />
die also contribute to delay variability. Typically a design may exhibit up to 3% performance change per 10 °C of temperature change, or 5% performance change per 100 mV of supply voltage variation. A fourth challenge to scaling lies in its
intrinsic response to radiation events. Alpha particles arising<br />
from semiconductor materials or high energy protons or neutron<br />
daughters of cosmic ray events both have the opportunity<br />
to deliver charge necessary to corrupt the content of a bistable,<br />
or to glitch a logic level. Figure 4 shows the steady decrease in<br />
QCRIT, the minimum charge necessary to induce an event,<br />
against scaling. As feature dimensions are reduced, the capacitance<br />
reservoir of charge balancing an event is also reduced.<br />
A fifth challenge is associated with the integrity of gate dielectrics<br />
in the MOSFET. As device dimensions reduce, it is essential
to reduce gate oxide thickness for the gate to retain control<br />
over the inversion layer formation. Thinner dielectrics have<br />
higher tunneling currents and more frequent breakdowns.<br />
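The temperature and voltage sensitivities quoted above compound into the timing margin; an illustrative sketch (the on-die gradient values chosen here are hypothetical, not from the paper):

```python
# Compound delay variation from across-die temperature and supply gradients,
# using the sensitivities quoted in the text. The example gradients are
# hypothetical worst-case values.
TEMP_SENS = 0.03 / 10.0    # fractional delay change per degC (3% per 10 degC)
VOLT_SENS = 0.05 / 0.100   # fractional delay change per volt (5% per 100 mV)

def delay_spread(delta_t_degc, delta_v_volts):
    """Worst-case fractional delay spread for given on-die gradients."""
    return TEMP_SENS * delta_t_degc + VOLT_SENS * delta_v_volts

# e.g. a 20 degC hot spot plus 50 mV of supply droop across the die:
spread = delay_spread(20.0, 0.050)
print(f"~{spread:.1%} delay spread must be absorbed as timing margin")
```

Even these modest gradients consume close to a tenth of the cycle, which is the margin-versus-yield trade-off Figure 3 depicts.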
Design Response<br />
To understand how designs have exploited this capability and<br />
address its emerging limitations, it is useful to examine the<br />
Patterson-Hennessy Formula [3] for performance contributions.
Time = Instr/pgm x Cycles/Instr x Seconds/Cycle<br />
(1) (2) (3)<br />
The first term is the responsibility of the compiler; improvement<br />
in the second term comes from architectural enhancements,<br />
which have been responsible for perhaps half of recent<br />
performance gains. Out of order execution, speculative<br />
branching, multi-threading, and superscalar functional units<br />
are examples. These features, while improving throughput, also add extra circuits and devices, increasing power consumption. Figure 5 shows this trend and its implausible extrapolation.
The third term falls squarely in the lap of the process and circuit designers. There lies, however, an even more insidious, subtle trend (a "red-hot topic"!). Because more and more circuitry has been added to boost architecture performance, wire lengths have not reduced with scaling, as die sizes have not shrunk. Worse, these additions create deeper pipelines with less intrinsic delay per pipeline stage. In short, a signal has to go farther than before, and has even less time to get there than before, both after scaling. The result is that less and less of the chip may be accessed in a given cycle, as shown in Figure 6. The design response is that "logical islands" are now defined during placement, considering which functions must be less than one cycle of latency away.
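The three-term decomposition above can be exercised numerically; the workload and CPI numbers below are illustrative only:

```python
# The Patterson-Hennessy decomposition: execution time is the product of
# instruction count, cycles per instruction (CPI), and cycle time.
# All numbers below are illustrative, not measurements.
def exec_time(instr_per_pgm, cpi, seconds_per_cycle):
    return instr_per_pgm * cpi * seconds_per_cycle

# Baseline: 1e9 instructions, CPI 1.5, 1 GHz clock.
base = exec_time(1e9, 1.5, 1e-9)
# Architectural enhancements (term 2) cut CPI; process/circuit work (term 3)
# shortens the cycle; the two improvements multiply rather than add.
improved = exec_time(1e9, 1.0, 0.7e-9)
print(f"baseline {base:.2f} s -> improved {improved:.2f} s "
      f"({base / improved:.2f}x speedup)")
```

The multiplicative structure is why each discipline can claim an independent share of the generation-over-generation gain.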
Capabilities of the Extended Paradigm<br />
To combat this trend, high speed microprocessors require constant innovation. A denser device with lower parasitic capacitance and which puts out more current is one such innovation. The Strained Silicon MOSFET [4] is an evolutionary MOSFET improvement (Figure 7); it derives its performance advantage from strain induced in the layer in which the inversion channel is formed. In the cited reference, a thin SiGe
layer is deposited on a Si substrate. With different lattice constants,<br />
the resulting strain induces improved mobility in 2 of<br />
Silicon’s 6 degenerate states. An architectural direction likely<br />
to improve throughput is increased parallelism. Figure 8<br />
shows the results of an analysis of various means of achieving<br />
equivalent performance. It supports the conclusion that an array of smaller, simpler processors run at lower voltages can meet the equivalent performance of fewer processors at high voltage, saving power and design resources. Finally, new circuit
topologies promise to help reduce cycle time. Figure 9<br />
shows Clock-Delayed Domino, an emerging circuit family<br />
used in semi-synchronous and “locally-asynchronous-globally-synchronous”<br />
microprocessors. At its heart, the circuit is<br />
a simple dynamic domino, which has traveling along with it its<br />
own clock. The clock can serve one circuit or one time-sliced<br />
column of circuits. It’s delay is tuned via the passgate beta<br />
ratio.<br />
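The parallelism argument behind Figure 8 follows from P ∝ CV²f together with the crude assumption that achievable frequency scales with supply voltage; a toy model, not the cited IBM analysis:

```python
# Why many slow processors can beat few fast ones on power: P ~ C V^2 f,
# with the crude assumption that achievable frequency f ~ V.
# This is a toy model in arbitrary units, not the IBM study of Figure 8.
def system(n_cores, v):
    """Return (aggregate throughput, total power) in arbitrary units."""
    f = v                        # crude f ~ V assumption
    throughput = n_cores * f
    power = n_cores * v**2 * f   # capacitance C absorbed into the units
    return throughput, power

fast = system(n_cores=2, v=1.5)    # few cores at high voltage
slow = system(n_cores=4, v=0.75)   # twice the cores at half the voltage
print(f"fast: perf {fast[0]:.2f}, power {fast[1]:.2f}")
print(f"slow: perf {slow[0]:.2f}, power {slow[1]:.2f}")
```

With f ∝ V, halving the voltage and doubling the core count holds aggregate throughput constant while cutting power fourfold, which is the qualitative shape of the trade-off in Figure 8.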
Conclusions<br />
Technology scaling is a paradigm that has indeed served our<br />
industry well. It is directly as well as indirectly responsible for<br />
the historic performance, density, and power trend known as<br />
“Moore’s Law.” Most recently, quantum-mechanical limitations<br />
to scaling have become evident and have required compensation<br />
by the designer at the architecture as well as circuit<br />
topology level. Innovation in novel MOSFET design, new circuit<br />
families, and logic architectures will provide a path for<br />
evolution of existing approaches, and buy time to develop revolutionary<br />
concepts. Just as scaling up to now has enabled<br />
more function to be brought onboard chip with less latency,<br />
continued scaling will before long allow our industry to<br />
deliver transaction throughput rivaling human compute capability.<br />
It is incumbent, then, to wisely invest our physical as<br />
well as intellectual resources, to most fruitfully enjoy this<br />
future capability.<br />
References<br />
[1] B. Davari, et al., "CMOS Scaling for High Performance and Low Power - The Next 10 Years," Proceedings of the IEEE, Vol. 83, No. 4, April 1995, pp. 595-606.
[2] D. Frank, et al., "Generalized Scale Length for Two-Dimensional Effects in MOSFETs", IEEE Electron Device Letters, Vol. 19, No. 10, October 1998, pp. 385-387.
[3] D. Patterson and J. Hennessy, "Computer Architecture: A Quantitative Approach", Morgan Kaufmann Publishers, 1995, ISBN 1558603298.
[4] K. Rim, et al., "Strained Si NMOSFETs for High Performance CMOS Technology", Proceedings of 2001 VLSI Technology Symposium, pp. 59-60.
[5] F. Pollack, "New Microarchitecture Challenges in the Coming Generations of CMOS Process Technologies", Micro32.
Figure 1: MOSFET Device ideal scaling relationships

Figure 2: Innovation and Scaling (relative device performance vs. year of technology capability)

Figure 3: Timing Margin Consumption by Process Variation

Figure 4: Channel hot electron degradation dependence on V_T (Q_crit in fC vs. cell area, bulk and SOI technologies)

Figure 5: Trends in Power Consumption [5]

Figure 6: Area of Control [3] (accessible die area vs. scale factor, scaled vs. fixed die area)

Figure 7: Strained Silicon MOSFET

Figure 8: Distributed Processing's Power x Delay advantages (IBM study showing uP count and voltage combinations providing 50% system performance with various device options)

Figure 9: Clock-Delayed Dynamic Domino Logic
Abstract<br />
Most modern HEP experiments use pixel detectors for<br />
vertex finding because these detectors provide clean and<br />
unambiguous position information even in a high multiplicity<br />
environment. At LHC three of the four main experiments will<br />
use pixel vertex detectors. There is also a strong development<br />
effort in the US centred around the proposed BTeV<br />
experiment. The chips being developed for these detectors<br />
will be discussed giving particular attention to the<br />
architectural choices of the various groups. Radiation tolerant<br />
deep sub-micron CMOS is used in most cases. In light of<br />
predicted developments in the semiconductor industry and<br />
bearing in mind some fundamental limits it is possible to<br />
foresee the trends in pixel detector design for future<br />
experiments.<br />
I. INTRODUCTION<br />
Pixel detectors are now important components in modern<br />
High Energy Physics (HEP) experiments. The initial<br />
development work for SSC and LHC was first applied in a<br />
heavy ion experiment [1] and in LEP [2]. This provided a<br />
basis for confidence for use in the future p-p experiments at<br />
LHC [3, 4, 5], the Alice heavy ion experiment [6] as well as<br />
to the proposed BTeV experiment at the Tevatron [7]. An<br />
extensive overview of the history of the development of pixel<br />
detectors for HEP is given in [8]. The present paper takes a<br />
closer look at the design of the electronics for the different<br />
present-day systems. Some basic concepts relevant to pixel<br />
electronics are reviewed. Then the work in progress for the<br />
new experiments is discussed with particular reference to the<br />
chosen readout architectures. Finally, an attempt is made to<br />
explore the way ahead for such detectors in future.<br />
II. DEFINITIONS AND BASIC CONCEPTS<br />
A pixel detector is a 2-dimensional matrix of microscopic<br />
(
compromise between the required timing resolution and noise.<br />
The equations for the series noise, ENCd, and the parallel noise, ENCo, [11]<br />
can be simplified to:<br />
ENCd^2 ∝ Ct^2 / (gm τs) and ENCo^2 ∝ Io τs<br />
where Ct is the total capacitance on the input node of one<br />
channel, gm is the transconductance of the input transistor, τs is<br />
the shaping time and Io is the leakage current of the sensor<br />
element. Other parallel noise sources have to be added to Io. In<br />
modern pixel detector systems the parallel noise is much<br />
lower than the series noise due to a fast shaping time and the<br />
small leakage current (fA-pA) from the tiny sensor volume.<br />
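To make the scaling concrete, the simplified expressions can be evaluated numerically. The sketch below is illustrative only: the channel noise coefficient and the example values of Ct, gm, τs and Io are assumptions for a generic pixel front end, not parameters of any of the chips discussed, and shaper form factors are omitted.<br />

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
Q_E = 1.602176634e-19 # electron charge, C

def enc_series(c_t, g_m, tau_s, temp=300.0, gamma=2.0 / 3.0):
    """Series (voltage) noise in electrons rms: ENCd ∝ Ct / sqrt(gm * tau_s).
    gamma is the assumed channel thermal-noise coefficient; shaper form
    factors of order one are left out."""
    return math.sqrt(4.0 * K_B * temp * gamma / g_m / tau_s) * c_t / Q_E

def enc_parallel(i_o, tau_s):
    """Parallel (shot) noise in electrons rms: ENCo ∝ sqrt(Io * tau_s)."""
    return math.sqrt(2.0 * Q_E * i_o * tau_s) / Q_E

# Illustrative pixel front-end values (assumed, not from the text):
c_t = 200e-15   # 200 fF total input capacitance
g_m = 0.3e-3    # 0.3 mS input-transistor transconductance
tau_s = 25e-9   # 25 ns shaping time
i_o = 1e-12     # 1 pA sensor leakage current

# With a fast shaping time and fA-pA leakage, the parallel noise is
# negligible compared with the series noise, as stated in the text.
assert enc_parallel(i_o, tau_s) < enc_series(c_t, g_m, tau_s)
```

Note the opposite dependences on τs: the series term falls and the parallel term grows with longer shaping, which is why the fast shaping of a pixel system pushes the parallel contribution far below the series one.<br />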
The rise time, tr, of the front-end amplifier is given by:<br />
tr ∝ Ct (CL + Cf) / (gm Cf)<br />
where CL is the load on the output of the front-end amplifier and Cf is the feedback capacitance, which is inversely proportional to the voltage gain of the system. From these expressions one can observe that Ct and CL should be minimised and gm maximised to obtain high speed. Ct may indeed be minimised by careful detector design, CL by reducing the load on the preamplifier output, but maximising gm implies increased power consumption.<br />
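The same tradeoff can be shown with the rise-time proportionality. All numbers below, including the unit proportionality constant, are invented for illustration:<br />

```python
def rise_time(c_t, c_l, c_f, g_m, k=1.0):
    """tr ∝ Ct (CL + Cf) / (gm Cf); k is an unknown proportionality constant."""
    return k * c_t * (c_l + c_f) / (g_m * c_f)

# Illustrative values (assumptions): 200 fF input, 100 fF load,
# 5 fF feedback capacitance, 0.3 mS transconductance.
base = rise_time(200e-15, 100e-15, 5e-15, 0.3e-3)
# Doubling gm halves the rise time, but only at the cost of more bias
# current, i.e. more power per pixel.
fast = rise_time(200e-15, 100e-15, 5e-15, 0.6e-3)
assert abs(fast - base / 2.0) < 1e-15
```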
III. PIXEL DETECTOR SYSTEMS FOR HEP<br />
There are 4 major developments for HEP experiments<br />
underway at present. The Atlas and CMS vertex detectors are<br />
aimed towards the high event rate p-p experiments at LHC.<br />
The common Alice/LHCb development has two quite<br />
different aims: the rather low event rate but very high<br />
multiplicity Alice vertex detector and the LHCb RICH<br />
detector readout. At the proposed BTeV experiment in<br />
Fermilab the chip should provide information for the first<br />
level trigger. The intention in what follows is to highlight the<br />
similarities and differences between them.<br />
A. Atlas and CMS<br />
Atlas and CMS have very similar environments. Both vertex detectors must withstand neutron fluences of up to 10^15 neq/cm^2 and each has to provide unambiguous 2-dimensional hit information every 25 ns. Both have the usual requirements of minimal power consumption and material. As the total neutron fluence is well above the value where the n-type bulk material behaves as p-type, n+-on-n detectors are used. Atlas uses a p-spray as isolation between pixels [12] and CMS uses two concentric broken p+ guard rings [13]. Both<br />
experiments expect to operate the detectors not fully depleted<br />
near the end of the lifetime because of reverse annealing.<br />
Therefore the most probable peak energy deposited by<br />
particles will be strongly attenuated. There may also be<br />
charge collection time issues. As mentioned above a threshold<br />
must be set in each pixel to keep the data volume from the<br />
detector manageable and this should be around one third of<br />
the most probable peak. This implies that the required<br />
minimum operating threshold is around 2 - 2.5 ke-. The Atlas group has opted for rectangular pixels (50 µm x 400 µm), giving optimum spatial resolution in r-φ, while CMS planned to have square pixels (at present 150 µm x 150 µm) making<br />
use of the Lorentz angle of the charge drift in the sensor<br />
produced by the 4 T magnetic field at CMS. Both experiments<br />
elected to make first full scale prototype chips in the DMILL<br />
technology [14] and this has had a strong influence on the<br />
choice of layout and readout architecture.<br />
The Atlas chip is organised as a matrix of 18 x 160 pixels.<br />
The layout of the cell is such that the columns are grouped as<br />
pairs enabling two columns to use the same readout bus. This<br />
means that the layout is flipped from one column to its<br />
neighbour, a practice which is not normally recommended<br />
where transistor parameter matching is important. Each pixel<br />
comprises a preamplifier-shaper (t peak = 25 ns) with a feedback transistor, biased by a diode-connected transistor at the preamplifier output, followed by a discriminator, see Fig. 2 taken from [15]. There is a 3-bit<br />
register which permits threshold adjustment pixel-by-pixel<br />
and two further bits which control masking and testing<br />
operations. The power consumption per channel is expected to<br />
be 40 µW. As the feedback is a constant current in the linear<br />
range of the amplifier, the discriminator provides Time-over-Threshold (ToT) information.<br />
Fig. 2. Schematic of the Atlas pixel taken from [15].<br />
When a pixel is hit, it sends a Fast-OR signal to the End of<br />
Column (EoC) logic. A token is sent up the column pair and<br />
the first hit pixel encountered puts its address on the bus along<br />
with a timestamp. It does the same for the trailing edge of the<br />
hit. The token then drops to the next hit pixel. There is a 16-cell deep hit buffer at the bottom of the column which stores<br />
the addresses, timestamps and ToT values for the hit pixels.<br />
There is also logic at the End of Column (EoC) which<br />
compares the timestamps of the hits with the external trigger<br />
input. Extra peripheral logic is used to package the hit<br />
information into a serial stream for readout. Although results<br />
on test chips were encouraging [16] the first results from the<br />
bump-bonded full-scale prototypes were disappointing. Some<br />
of the problems were traced to yield issues associated with the<br />
very high component density of the design. A new full-scale<br />
prototype is under development using deep sub-micron<br />
technology.<br />
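The column-pair token scan described above can be sketched as a toy model. Only the 16-entry buffer depth and the stored fields (address, leading-edge timestamp, ToT from the trailing edge) come from the text; the scan order, data layout and everything else here are invented for illustration:<br />

```python
from collections import deque

BUFFER_DEPTH = 16  # EoC hit buffer depth, as stated in the text

def scan_column_pair(hits):
    """One token pass over a column pair.

    `hits` maps pixel address -> (lead_timestamp, trail_timestamp).
    The token visits hit pixels in turn (here modelled as address
    order); each one puts its address and timestamps on the shared
    bus, and the End-of-Column buffer stores address, leading-edge
    time and Time-over-Threshold."""
    buffer = deque(maxlen=BUFFER_DEPTH)
    for address in sorted(hits):                      # token drops pixel to pixel
        lead, trail = hits[address]
        buffer.append((address, lead, trail - lead))  # ToT = trail - lead
    return list(buffer)

# Three pixels fire in the same column pair:
records = scan_column_pair({7: (100, 104), 42: (100, 101), 3: (102, 110)})
assert records == [(3, 102, 8), (7, 100, 4), (42, 100, 1)]
```

The `deque` with `maxlen` mimics the finite buffer: once more than 16 hits arrive in one scan, the oldest entries are silently displaced, which is one way such an architecture can lose hits under overload.<br />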
The CMS chip is organised as a matrix of 53 x 52 pixels.<br />
Also in this case alternating columns are mirrored and<br />
grouped as pairs, two columns sharing the same readout bus.<br />
The front-end is a classic preamplifier-shaper circuit<br />
(t peak = 27 ns). The schematic of the front-end is shown in<br />
Fig. 3 taken from [17]. The buffered output of the shaper is<br />
sent to a discriminator which has a 3-bit trim DAC. At the
output of the discriminator a pulse stretching circuit is used to<br />
produce a signal which sample-and-holds the analog value of<br />
the buffered shaper output. The pulse width is tuned to sample<br />
the analog signal on or near to the peak. The power<br />
consumption of one pixel is around 40 µW.<br />
Fig. 3. Schematic of the CMS pixel cell taken from [17]<br />
When a pixel is hit a Fast-OR signal is sent to the EoC<br />
logic and this saves a timestamp and sends a token through<br />
the column pair. Up to 8 timestamps can be saved in the EoC.<br />
When the token arrives at a hit pixel both the address and the<br />
analog value of the hit are sent to the EoC as analog signals.<br />
There is a 24-deep buffer which stores this information. The<br />
timestamp, addresses and analog values of the hit pixels are<br />
sent as analog information to the control room following a<br />
positive comparison of the hit timestamps with the Level 1<br />
trigger. Good results have been reported from smaller test<br />
chips [18] and a full-scale prototype is almost ready for<br />
submission. There are also plans to convert the design to deep<br />
sub-micron technology in the coming year.<br />
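The trigger matching performed by the EoC logic can be illustrated with a toy model of the timestamp bookkeeping alone (the analog readout path is left out). The 8-slot buffer follows the text; the latency value, class and method names are assumptions:<br />

```python
TIMESTAMP_SLOTS = 8    # the EoC can save up to 8 timestamps (from the text)
TRIGGER_LATENCY = 100  # Level-1 latency in bunch crossings; an assumed value

class EndOfColumn:
    """Toy model of End-of-Column timestamp storage and Level-1 matching."""

    def __init__(self):
        self.timestamps = []

    def record_hit(self, bx):
        """Save the bunch-crossing timestamp of a Fast-OR, if a slot is free."""
        if len(self.timestamps) < TIMESTAMP_SLOTS:
            self.timestamps.append(bx)
            return True
        return False  # hit lost: column-pair timestamp buffer is full

    def level1(self, trigger_bx):
        """Keep only hits whose timestamp matches the triggered crossing."""
        wanted = trigger_bx - TRIGGER_LATENCY
        matched = [t for t in self.timestamps if t == wanted]
        self.timestamps = [t for t in self.timestamps if t != wanted]
        return matched

eoc = EndOfColumn()
for bx in (50, 51, 51, 90):
    eoc.record_hit(bx)
# A trigger arriving at crossing 151 selects the hits from crossing 51:
assert eoc.level1(151) == [51, 51]
assert eoc.timestamps == [50, 90]
```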
B. BTeV<br />
There is one large-scale development underway inside the<br />
HEP community but outside the LHC programme. This is the<br />
FNAL pixel development which aims towards the proposed<br />
BTeV experiment but which might also be useful for the<br />
upgrades of the other experiments at the Tevatron. The<br />
radiation environment at the BTeV experiment is similar to<br />
the LHC but the bunch crossing interval is 132 ns instead of 25 ns. n+-on-n detectors will be used but the decision between p-spray and p-stop isolation is pending. The group chose to design the chip in deep sub-micron CMOS following the radiation tolerant design techniques used by the RD49 Collaboration [19]. Interestingly, the FNAL team developed a common rules file which allows them to produce a design which is compatible with two different 0.25 µm processes.<br />
The largest prototype to date has 18 x 160 pixels and the<br />
pixel cell size is 50 µm x 400 µm. Each cell has a preamplifier<br />
followed by a shaper (t peak = 150 ns). The feedback of the<br />
preamplifier has two branches: a very low bandwidth<br />
feedback amplifier which drives a current source at the input<br />
which compensates for detector leakage current, and a simple<br />
NMOS transistor which provides the fast return to zero, see<br />
Fig. 4 taken from [20]. This dual feedback system was needed<br />
as the W/L ratio of the enclosed gate NMOS cannot be made<br />
lower than 2.3 [21]. Another unique feature of the pixel cell is<br />
the 3-bit ADC which has been implemented at the output of<br />
the shaper.<br />
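The in-pixel 3-bit flash ADC can be illustrated in a few lines. A 3-bit flash converter uses seven comparators whose outputs form a thermometer code that is then encoded to binary; the threshold values below are assumed for the example and are not the BTeV circuit values:<br />

```python
def flash_adc_3bit(signal, thresholds):
    """Seven comparators produce a thermometer code; the count of fired
    comparators is the 3-bit binary result (thermometer-to-binary encoding)."""
    assert len(thresholds) == 7 and list(thresholds) == sorted(thresholds)
    thermometer = [signal > t for t in thresholds]
    return sum(thermometer)

# Illustrative, equally spaced thresholds in ke- (assumed values):
THRESHOLDS = [3, 6, 9, 12, 15, 18, 21]
assert flash_adc_3bit(2, THRESHOLDS) == 0    # below the lowest threshold
assert flash_adc_3bit(10, THRESHOLDS) == 3   # fires the first three comparators
assert flash_adc_3bit(25, THRESHOLDS) == 7   # full scale
```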
Fig. 4. Schematic of the BTeV pixel cell taken from [20]. (Visible blocks include the preamplifier with its sensor and test inputs, the flash ADC with thermometer-to-binary encoder, and a command interpreter with the states 00 idle, 01 reset, 10 output and 11 listen.)<br />
When a pixel is hit it pulls down a Fast-OR signal which<br />
indicates a hit to the EoC logic. The EoC logic notes the<br />
timestamp and sends a token up the column. When a hit pixel<br />
receives the token it sends its address and the contents of the<br />
ADC to the EoC. There is some core logic which packages<br />
the timestamp, address and amplitude information for<br />
immediate readout. In the case of BTeV this information is<br />
used in the generation of the first level trigger.<br />
C. Alice/LHCb<br />
The chip which has been developed for the Alice tracker<br />
and the LHCb RICH readout is probably the most<br />
complicated of the readout chips to date, containing over 13 million transistors. Neither experiment expects a radiation dose even close to those of Atlas and CMS. Straightforward p+-on-n detectors were chosen as these are cheaper and easier to obtain. The chip has a matrix of 256 x 32 cells and each cell<br />
measures 50 µm x 425 µm. The preamplifier is differential<br />
which should improve the power supply rejection ratio and<br />
limit the substrate induced noise at the expense of increased<br />
power consumption. As a fast return to zero was required for<br />
the LHCb application a new front-end architecture was used<br />
which uses the preamplifier along with two shaping stages,<br />
see Fig. 5 taken from [22]. The circuit is ready to accept<br />
another hit after 150 ns. The output of the second shaper is<br />
connected to a discriminator with 3-bits of threshold adjust.<br />
There are two readout modes of operation: Alice and LHCb.<br />
Fig. 5. Schematic of the Alice/LHCb pixel cell.<br />
In Alice mode, if the discriminator fires, one of the two registers stores the timestamp in the block marked delay. The<br />
contents of the 8-bit timestamp bus are ramped up and down<br />
with a modulo determined by the Level 1 trigger latency.<br />
When there is a coincidence with a Level 1 trigger and a hit<br />
resulting from the positive comparison of one of the register<br />
contents with the timestamp, a 1 is put in a 4-bit FIFO. A<br />
Level 2 trigger accept triggers the transfer of the FIFO<br />
information to a 256-bit shift register for readout.<br />
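The latency matching in Alice mode can be sketched as follows: a timestamp counter running modulo the Level 1 latency is compared against the stored register contents, so a stored hit matches again exactly one latency after it occurred. The latency value and all code structure below are assumptions for illustration, not the actual chip logic:<br />

```python
LATENCY = 6  # Level 1 trigger latency in clock cycles (assumed for the example)

def alice_mode(hit_cycles, trigger_cycles):
    """Return the hit cycles confirmed by a Level 1 trigger.

    The timestamp bus counts modulo the trigger latency, so the value
    stored for a hit at cycle c reappears on the bus exactly LATENCY
    cycles later; if a Level 1 trigger coincides with that comparison,
    the hit is accepted (a 1 goes into the FIFO)."""
    accepted = []
    stored = {c % LATENCY: c for c in hit_cycles}  # register contents
    for t in trigger_cycles:
        bus = t % LATENCY                          # ramping modulo counter
        if bus in stored and t - stored[bus] == LATENCY:
            accepted.append(stored[bus])
    return accepted

# A hit at cycle 3 is confirmed by a trigger one latency later, at cycle 9,
# while the un-triggered hit at cycle 4 is dropped:
assert alice_mode([3, 4], [9]) == [3]
```

The point of the modulo comparison is that no subtraction hardware is needed in the pixel matrix: equality of a free-running counter with a stored value implements the fixed-latency lookup implicitly.<br />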
In LHCb mode, the outputs of the discriminators of 8<br />
pixels are ORed together and the 16 registers are used<br />
sequentially to store timestamps. On a Level 0 trigger the event is stored in a 16-bit FIFO. A Level 1 trigger initiates the<br />
readout of the matrix. 32 clock cycles are needed for full<br />
readout. A full description of the readout system is given in<br />
[23]. Results from this full scale prototype chip are presented<br />
in [24].<br />
D. General remarks<br />
All of the above developments are either aiming at, or have<br />
achieved, noise levels of less than 200 e - rms and a threshold<br />
uniformity at or below that level. The requirements in timing<br />
and power consumption are similar also. It is interesting to<br />
note that the developments using DMILL have chosen<br />
architectures which require < 100 transistors per pixel while<br />
the other developments use many hundreds of transistors with<br />
the attendant increase in functionality and pixel complexity.<br />
A common issue, however, is that the tracking precision of all of these detectors is not limited by the pixel dimensions but by the material thickness. Thinner detectors and electronics are desirable but these are fragile. A main contributor to the material is the cooling system required to remove the heat dissipated by the electronics. At around 100 µW/pixel the total power consumption is around 0.5 W/cm^2. This gives<br />
some hints about where future technical efforts should focus.<br />
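The quoted power density follows directly from the pixel dimensions: at the 50 µm x 400 µm pixel size mentioned above, a pixel occupies 2 x 10^-4 cm^2, so 100 µW per pixel gives 0.5 W/cm^2. A quick arithmetic check:<br />

```python
# 50 um x 400 um pixel, expressed in cm:
pixel_area_cm2 = (50e-4) * (400e-4)       # = 2e-4 cm^2
pixels_per_cm2 = 1.0 / pixel_area_cm2     # = 5000 pixels per cm^2
power_density = 100e-6 * pixels_per_cm2   # 100 uW per pixel
assert abs(power_density - 0.5) < 1e-9    # 0.5 W/cm^2, as quoted in the text
```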
IV. FUTURE TRENDS<br />
Vertex detector development is now and forever<br />
intimately linked to developments in the electronics industry.<br />
Even the CERN pixel developments follow Moore's law of<br />
exponentially increasing component density with time [25].<br />
Experience with the LHC pixel detectors has taught us that we must use industry standard technology as much as possible to<br />
be able to achieve our aims within a tolerable price range.<br />
New particle accelerators beyond LHC and the Tevatron<br />
are being discussed. It may be that technical limits in the<br />
detectors should now be given serious consideration in the<br />
earliest stages of design of new machines. One of the major<br />
issues here is the problem of cooling and its influence on<br />
material and hence tracking precision. As the push towards<br />
smaller pixels continues the power consumption density (for<br />
the same time precision) increases. This is because the input<br />
capacitance is dominated by pixel-to-pixel capacitance and<br />
the total pixel-to-pixel capacitance per unit area increases<br />
with granularity. Increasing the bunch crossing frequency<br />
would lead to a further increase in power density.<br />
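This scaling argument can be made quantitative with a toy model: if the inter-pixel capacitance per unit edge length is roughly constant, the capacitance per unit area (and, for the same rise time, the power density) grows as the pitch shrinks. The capacitance-per-length figure below is an assumption chosen only to illustrate the trend:<br />

```python
# Assumed inter-pixel capacitance per cm of pixel edge (~0.4 fF per 100 um):
C_EDGE = 0.4e-15 / 1e-2  # F/cm

def interpixel_cap_per_cm2(pitch_cm):
    """Square pixels: perimeter capacitance per pixel over pixel area.
    per-pixel cap ~ 4 * C_EDGE * pitch, area ~ pitch^2, so the
    capacitance per unit area scales as 1/pitch."""
    per_pixel = 4.0 * C_EDGE * pitch_cm
    return per_pixel / (pitch_cm ** 2)

coarse = interpixel_cap_per_cm2(150e-4)  # 150 um pitch
fine = interpixel_cap_per_cm2(50e-4)     # 50 um pitch
# Tripling the granularity triples the capacitance per unit area, and
# hence (for the same timing) roughly the power density as well:
assert abs(fine / coarse - 3.0) < 1e-9
```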
Radiation tolerance of future CMOS technologies seems<br />
to be obtainable using the design techniques already<br />
mentioned. However, it will be necessary to monitor carefully<br />
each generation of technologies for unexpected phenomena.<br />
In all cases Single Event Upset will probably present the main<br />
challenge to designers.<br />
There seem to be two kinds of machine emerging each<br />
having a distinctive physics reach and environment: the large<br />
linear electron-positron colliders, and the next generation of<br />
hadron colliders. The e+e- machines will provide events which<br />
are essentially clean with low multiplicity and with event rates<br />
in the range of kHz. For these applications it may still be<br />
possible to use projective detectors or pixel-type detectors<br />
where every pixel is read out. CCD and APS sensors seem the<br />
most likely candidates here. Both detectors provide the very<br />
highest spatial resolution but the readout tends to be relatively<br />
slow. APS sensors based on standard CMOS technologies are<br />
being studied [26].<br />
For the hadron machines the pattern recognition capability<br />
of pixel detectors is likely to still be the dominant<br />
requirement. The very tiny charge collection from standard<br />
APS detectors makes achieving good pattern recognition<br />
extremely difficult. An interesting modification to the APS<br />
idea is presented in [27]. In this development cooling is used<br />
to obtain a larger charge collection in the substrate of the<br />
standard CMOS components. Also an interesting detector<br />
biasing scheme is proposed. Cryogenic CMOS is discussed<br />
later. However, developing cryogenic mixed-mode electronics<br />
on top of the sensor volume remains a formidable challenge.<br />
There are some developments which could be applied in<br />
many future pixel systems. MCM-D is a technology which is<br />
of great interest although it has only been studied so far by the<br />
Atlas pixel community [28] who had both the courage and the<br />
resources to investigate it. Here the detector is used as the<br />
substrate and the MCM layers, which are alternately BCB and<br />
metal, are grown on top. In this way each pixel is connected<br />
through the layers to its bump bond and all of the readout<br />
lines and power supply lines can be brought in on the same<br />
substrate. As high dielectric constant insulating layers may be used, power supply decoupling can be provided as well. This<br />
technology offers great promise as it leads to a reduction in<br />
the overall mass of the detector while offering good
mechanical rigidity and at the same time the possibility to<br />
map sensor elements to readout channels of different<br />
dimensions.<br />
Some detector development work based at<br />
Stanford/Hawaii [29] uses reactive ion etching to make high<br />
aspect ratio holes in Si. These are subsequently filled with<br />
doped Si to make a detector which is made up of pillars of<br />
alternating p+ and n+ doping. This has the advantage that the<br />
voltage needed to deplete the detector is much reduced and<br />
may be of particular interest in detectors where radiation<br />
damage causes inverse annealing to take place. Of particular<br />
interest to the pixel community in general would be the<br />
possibility this technology might offer in reducing strongly<br />
the dead area which surrounds present pixel detectors for<br />
guard rings. One could imagine that the same technique is<br />
used to etch a very clean cut near the edge of the sensitive<br />
pixel matrix. This edge may have doped Si applied and this<br />
would limit the electric field laterally. Also the clean edge<br />
from the etch might reduce surface leakage currents.<br />
Cryogenic operation of the readout electronics has the<br />
advantage that a better transconductance to drain current ratio<br />
is obtained even if the transistor thresholds are increased [30].<br />
This may lead to the possibility of reducing the problem of<br />
power consumption density. In any case cryogenically cooled<br />
Si detectors, with or without defect engineering [30, 31], will<br />
probably be used in future experiments. There is a whole new<br />
field of mixed-mode, cryogenic deep sub-micron design<br />
opening up.<br />
V. SUMMARY AND CONCLUSIONS<br />
Pixel detectors are key components in most new large<br />
scale HEP experiments. The developments for the LHC<br />
experiments are well under way with most groups now<br />
focussing on deep sub-micron CMOS. All of the present systems are limited in physics performance by the material budget, which is strongly correlated with power consumption<br />
density and the subsequent cooling systems. Expected<br />
increases in granularity and speed in future systems will<br />
require very careful system optimisation. Novel circuit<br />
architectures should be developed and some technology<br />
advances may also help.<br />
VI. ACKNOWLEDGEMENTS<br />
Many people assisted me in preparing this manuscript and<br />
the associated talk. I am particularly indebted to Kevin<br />
Einsweiller of Atlas, Roland Horisberger of CMS and Abder<br />
Mekkaoui and Ray Yarema of BTeV for providing me with<br />
material. I receive constant help and encouragement from my<br />
friends and colleagues in the CERN Microelectronics Group,<br />
the Alice and LHCb RICH pixel teams and the Medipix<br />
Collaboration. Dave Barney and Mike Lamont advised me on<br />
machine matters. Erik Heijne and Cinzia Da Via' contributed<br />
greatly with discussions about present and future detectors.<br />
VII. REFERENCES<br />
[1] F. Antinori et al., "First results from the 1994 lead run of<br />
WA97," Nucl. Phys. A590 (1995) 139c-146c.<br />
[2] J.Heuser, "Construction, Operation and Application of the<br />
DELPHI Pixel Detector at LEP2" PhD Thesis, University<br />
Wuppertal, Germany WUB-DIS 99-1, January 1999.<br />
[3] ATLAS Technical Design Report, ATLAS TDR 5,<br />
CERN/LHCC/97-17, April 1997.<br />
[4] CMS, The Tracker Project, Technical Design Report,<br />
CMS TDR 5, CERN/LHCC 98-6, April 1998.<br />
[5] LHCb RICH Technical Design Report, LHCb TDR 3,<br />
CERN LHCC 2000-037.<br />
[6] ALICE Inner Tracking System Technical Design Report,<br />
ALICE TDR 4, CERN/LHCC 99-12 June 1999.<br />
[7] Proposal for an Experiment to Measure Mixing, CP<br />
Violation and Rare Decays in Charm and Beauty Particle<br />
Decays at the FermiLab Collider - BTeV, FermiLab, May<br />
2000.<br />
[8] E.H.M.Heijne, "Semiconductor micropattern pixel<br />
detectors: a review of the beginnings," Nucl. Instr. and Meth.<br />
A 465 (2001) 1-26.<br />
[9] S.O.Rice, Bell Sys. Tech. J. 23 (1944) 282 and 24 (1945)<br />
46.<br />
[10] P.Middelkamp, "Tracking with active pixel detectors,"<br />
PhD Thesis, University Wuppertal, Germany WUB-DIS 96-<br />
23, December 1996.<br />
[11] Z.Y.Chang and W.M.C.Sansen, "Low-noise wide-band<br />
amplifiers in bipolar and CMOS technologies," Kluwer<br />
Academic Publishers, ISBN 0-7923-9096-2.<br />
[12] T.Rohe et al., "Sensor design for the ATLAS-pixel<br />
detector," Nucl. Instr. and Meth. A 409 (1998) 224-228.<br />
[13] G.Bolla et al., "Sensor development for the CMS pixel<br />
detector," Preprint submitted to Elsevier Preprint, PSI,<br />
Villigen, 16 July 2001.<br />
[14] M.Dentan et al., "DMILL, a mixed-mode analog-digital<br />
radiation hard technology for high energy physics<br />
electronics," RD-29 Final Status Report, CERN/LHCC/98-37<br />
(1998).<br />
[15] L.Blanquart et al., "Front-End electronics for ATLAS<br />
Pixel detector," Proceedings of the Sixth Workshop on<br />
Electronics for LHC Experiments, Krakow, Poland, Sept.<br />
2000, CERN/LHCC/2000-041.<br />
[16] C. Berg et al., "Bier&Pastis, a pixel readout prototype<br />
chip for LHC," Nucl. Instr. and Meth. A 439 (2000) 80-90.<br />
[17] R.Baur, "Readout architecture for the CMS pixel<br />
detector," Nucl. Instr. and Meth. A 465 (2001) 159-165.<br />
[18] R.Horisberger, "Design requirements and first results of<br />
an LHC adequate analogue block for the CMS pixel detector,"<br />
Nucl. Instr. and Meth. A 395 (1997) 310-312.<br />
[19] W.Snoeys et al., "Radiation tolerance beyond 10Mrad for<br />
a pixel readout chip in standard CMOS," Proceedings of the<br />
Fourth Workshop on Electronics for LHC Experiments,<br />
Rome, Italy, Sept. 1998, CERN/LHCC/98-36.<br />
[20] A.Mekkaoui, J.Hoff, "30Mrad(SiO2) radiation tolerant<br />
pixel front-end for the BTeV experiment," Nucl. Instr. and<br />
Meth. A 465 (2001) 166-175.
[21] G.Anelli, "Design and characterization of radiation<br />
tolerant integrated circuits in deep submicron CMOS<br />
technologies for the LHC experiments," PhD Thesis, Institut<br />
National Polytechnique de Grenoble, France, Dec. 2000.<br />
[22] R.Dinapoli et al., "An analog front-end in standard<br />
0.25µm CMOS for silicon pixel detectors in Alice and<br />
LHCb," Proceedings of the Sixth Workshop on Electronics<br />
for LHC Experiments, Krakow, Poland, Sept. 2000,<br />
CERN/LHCC/2000-041.<br />
[23] K.Wyllie et al., "A Pixel Readout Chip for Tracking at<br />
ALICE and Particle Identification at LHCb," Proceedings of<br />
the Fifth Workshop on Electronics for LHC Experiments,<br />
Snowmass, Colorado, Sept. 1999, CERN/LHCC/99-33.<br />
[24] J.J. Van Hunen, "Irradiation Tests and Tracking<br />
Capabilities of the Alice1LHCb Pixel Chip," these<br />
Proceedings.<br />
[25] M.Campbell et al., "An introduction to deep sub-micron<br />
CMOS for vertex applications," submitted to Nucl. Instr. and<br />
Meth. A, Proceedings of the 9th International Workshop on<br />
Vertex Detectors, Vertex 2000, Sleeping Bear Dunes National<br />
Shoreline, Michigan, September 2000.<br />
[26] R.Turchetta et al., "A monolithic active pixel sensor for<br />
charged particle tracking and imaging using standard VLSI<br />
CMOS technology," Nucl. Instr. and Meth. A 458 (2001) 677-<br />
689.<br />
[27] V.Palmieri et al. "A monolithic semiconductor detector,"<br />
World patent no. WO0103207.<br />
[28] O.Baesken et al., "First MCM-D modules for the B-physics layer of the ATLAS pixel detector," IEEE Trans.<br />
Nucl. Sci. 47, (2000) 745-749.<br />
[29] C.Kenney et al., "Silicon detectors with 3-D electrode<br />
arrays: fabrication and initial test results," IEEE Trans. Nucl.<br />
Sci. 46, (1999) 1224-1236<br />
[30] W.F.Clark et al., "Low-temperature CMOS-a brief<br />
preview," 1991 Proceedings. 41st Electronic Components and<br />
Technology Conference, Atlanta, GA, USA, 11-16 May 1991,<br />
p.544-50, IEEE, New York, NY, USA, 1991, xvi+901 pp,<br />
ISBN: 0-7803-0012-2.<br />
[31] Lindstroem et al., "Development for radiation hard<br />
silicon detectors by defect engineering - results by the <strong>Cern</strong><br />
RD48 (ROSE) collaboration," Nucl. Instr. and Meth. A 462<br />
(2001) 474-483<br />
[32] C.Da Via' and S. Watts, "New results for a novel<br />
oxygenated silicon material," presented to the E-MRS, June<br />
2001, to be published in Nucl. Instr. and Meth. B.
Abstract<br />
Some principal design features of front-end electronics<br />
for calorimeters in experiments at the LHC will be<br />
highlighted. Some concerns arising in the transition from<br />
the research and development and design phase to the<br />
construction will be discussed. Future challenges will be<br />
indicated.<br />
I. INTRODUCTION<br />
Calorimetry in large detectors at the LHC poses some<br />
requirements on readout electronics that are quite different<br />
than for any other detector subsystem. The main distinctions are: a) the large dynamic range of energies to be measured; and b) the uniformity of response and accuracy of calibration required over the whole detector. As in all other functions of the<br />
detector, low noise of front-end amplifiers is essential.<br />
Unique, too, is the requirement for very low coherent noise,<br />
as the energy measurement involves summation of signals<br />
from a number of calorimeter sections (towers, strips,<br />
preshower detectors). Power dissipation and cooling are a major concern, as in any other detector subsystem, and in some respects more so, since all the elements of the signal<br />
processing chain require more power due to the large<br />
dynamic range, speed of response, high precision, and low<br />
noise at higher values of electrode capacitance.<br />
The key requirements on the calorimetry readout<br />
electronics are summarized in Table 1. The requirements<br />
are clearly most demanding in electromagnetic (EM)<br />
calorimetry. The dynamic range is somewhat smaller for<br />
hadron calorimeters. However, the noise still has to be low if muons are to be observed, and because in hadron shower energy measurement the signals from a larger volume of the calorimeter have to be summed.<br />
While there are quite significant differences in the<br />
principles and the technology among various scintillator-based calorimeters and those based on ionization in liquids,<br />
the signal is finally reduced to charge (current) from a<br />
capacitive source in all cases, and the signal processing<br />
chain – in a simplified picture – could be identical.<br />
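That simplified common picture, charge from a capacitive source, a shaping amplifier, then digitization into a pipeline, can be sketched as a discrete-time model. The CR-RC shaper and ideal ADC below are a generic textbook chain, not the circuit of any particular experiment, and all numbers are illustrative assumptions:<br />

```python
import math

def cr_rc_shaper(charge, tau, t):
    """Impulse response of an ideal CR-RC shaper, normalised so that the
    peak, reached at t = tau, equals the input charge."""
    if t < 0:
        return 0.0
    return charge * (t / tau) * math.exp(1.0 - t / tau)

def digitize(v, full_scale, bits=12):
    """Ideal ADC: clamp to [0, full_scale] and quantize to an integer code."""
    clamped = max(0.0, min(v, full_scale))
    return int(clamped / full_scale * (2 ** bits - 1))

# One simulated channel: a 100 fC deposit, 50 ns shaping, sampled every
# 25 ns into a digital pipeline (all values are assumptions):
tau = 50e-9
samples = [digitize(cr_rc_shaper(100.0, tau, n * 25e-9), full_scale=400.0)
           for n in range(8)]
# The largest pipeline sample is the one taken at the shaper peak, t = tau:
assert max(samples) == digitize(100.0, full_scale=400.0)
```

Whether this digitization happens on the detector or after analog transmission is exactly the design choice, analog versus digital pipeline, discussed below.<br />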
* Work supported by the U.S. Department of Energy: Contract No. DE-<br />
AC02-98CH10886.<br />
Electronics for Calorimeters at LHC<br />
Veljko Radeka<br />
Brookhaven National Laboratory, Upton, NY 11973-5000<br />
radeka@bnl.gov<br />
Major differences in the readout design arise in different experiments from trying to balance the answers to two questions: a) how much electronics and what<br />
electronic functions need to be on the calorimeter; and b)<br />
how to minimize the number of interconnections and<br />
transmission lines (copper or fiber) for transfer of<br />
information from the calorimeter.<br />
Each experiment uses a unique approach, in which the preferences of the designers have played a decisive role. One<br />
of the design considerations was how near to the signal<br />
input to digitize and, consequently, whether to have an<br />
analog or digital pipeline. The readout systems were<br />
described in some detail in the Proceedings of the 6th Workshop on Electronics for LHC Experiments, 2000. The<br />
two large hadron collider experiments, CMS and ATLAS,<br />
each have several different types of EM and hadron<br />
calorimeters for the barrel, end-cap, and forward regions.<br />
I will comment here only on some common or unique<br />
readout aspects, and not attempt to review readouts for all<br />
calorimeter components. ALICE has one EM calorimeter<br />
over a small solid angle, based on lead tungstate crystals<br />
and avalanche photodiodes as in CMS, but taking advantage<br />
of a lower operating temperature (−25 °C) and a longer<br />
shaping time (~3 µs) to obtain a larger signal.<br />
Table 1: Key Requirements on Calorimeter Readout<br />
1. Energy Resolution<br />
[The remaining entries of Table 1, and the Fig. 1 graphic (ECAL, HCAL, SPD, and Preshower chains with PMs, clipping, optical fibres, VFE cards, and 5-15 m analogue and digital links), are not reproduced here.]<br />
II. UNIQUE ASPECTS OF LHC CALORIMETER FRONT-<br />
END ELECTRONICS<br />
A. LHCb<br />
Although a limited solid angle experiment, LHCb has a<br />
large sampling calorimeter based on scintillator-wavelength-<br />
shifter technology and on photomultiplier light<br />
readout. The readout concept for the four calorimeter<br />
components, and the location of various functions, is shown<br />
in Fig. 1. ECAL and HCAL have a significantly larger light<br />
output with smaller fluctuations and they use a<br />
deadtimeless integrator (integrator filter without any<br />
switches). The concept is illustrated in Fig. 2. The dominant<br />
component of the photomultiplier signal decays<br />
exponentially with a time constant of ~ 10 ns, allowing<br />
formation of a short pulse by delay line clipping. The flat<br />
top provides a degree of independence from small<br />
fluctuations in the time of arrival and the shape of the<br />
Figure 1. Front-end electronics for the LHCb calorimeters. (Design of the<br />
Integrator Filter for ECAL and HCAL is by LAL Orsay, and VFE for<br />
the Scintillator Pad Detector and for the Preshower is by LPC Clermont-<br />
Ferrand.)<br />
[Fig. 2 schematic: photomultiplier into an AMS 0.8 µm BiCMOS ASIC (4 channels per chip); 5 ns clip on 50 Ω and 25 ns delay on 100 Ω lines; buffer and integrator with Rf = 12 MΩ, Cf = 2 pF, feeding the ADC.]<br />
Figure 2. Principle of the Integrator Filter for the LHCb ECAL and<br />
HCAL. The exponentially decaying signal is first clipped at the<br />
photomultiplier output to 5 ns. The integrator receives the clipped<br />
signal, and then after 25 ns the same but with opposite polarity, to<br />
provide an output with a flat top, which returns to zero, as shown in<br />
Fig. 3.<br />
[Fig. 3 waveforms, 0-60 ns: buffer output current before integration (pulses at T = 0 and T = 25 ns) and the integrated output signal.]<br />
Strobe on the flat top (≈10 ns).<br />
Shaping: h(t) = ∫ [i(t) − i(t+25 ns)] dt<br />
Figure 3. Signals in the Integrator Filter in Fig. 2.<br />
signal. At the integrator output, the pileup for consecutive<br />
pulses spaced more than 25 ns is negligible. The dynamic<br />
range of this circuit is about 4×10³.<br />
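The clip-and-integrate shaping can be illustrated with a small numerical sketch (an idealization, not the LHCb circuit itself: the clipped photomultiplier pulse is approximated here as a 5 ns rectangular current pulse in arbitrary units):

```python
import numpy as np

dt = 0.1                      # time step, ns
t = np.arange(0.0, 80.0, dt)  # 0-80 ns window

# Idealized clipped PM pulse: the ~10 ns exponential tail has been
# shortened by delay-line clipping; approximate it as a 5 ns
# rectangular current pulse of unit amplitude.
i = np.where(t < 5.0, 1.0, 0.0)

# Integrator filter, h(t) = integral of [i(u) - i(u - 25 ns)] du:
# the integrator receives the clipped pulse and, 25 ns later, the
# same pulse with opposite polarity.
shift = int(round(25.0 / dt))
i_delayed = np.concatenate([np.zeros(shift), i[:-shift]])
v = np.cumsum(i - i_delayed) * dt

# The output ramps up, stays flat from ~5 ns to 25 ns (where the
# strobe samples it), ramps down, and returns to zero.
print(v[int(10 / dt)], v[int(20 / dt)], v[int(60 / dt)])
```

Because the output has returned to zero after one full cycle, consecutive pulses spaced by more than 25 ns superpose on an undisturbed baseline, which is why the pileup is negligible.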
The scintillator pad detector and the preshower have a<br />
lower light output with larger fluctuations, so that switched<br />
integrators were found to be more appropriate.<br />
The design of essentially all circuits for the LHCb<br />
calorimeters has been completed, and prototype circuits have been<br />
fabricated and tested (also in the test beam). (One more<br />
ASIC iteration for the Integrator Filter is planned only for<br />
a small change.) The review of calorimeter electronics<br />
performed in April, 2001 was, overall, very positive. The<br />
only significant finding was that the power distribution<br />
system had not yet been designed.<br />
B. CMS Electromagnetic Calorimeter<br />
This is a total energy absorption calorimeter based on<br />
lead tungstate (PbWO₄) crystals and avalanche photodiode<br />
(APD) readout for the barrel, and vacuum phototriodes in<br />
the endcaps. An outline of the electronics chain is shown<br />
in Fig. 4, illustrating the “light-to-light” readout, with an<br />
ADC at the detector and an optical fiber for data<br />
transmission for each crystal. This is the most ambitious<br />
subsystem in terms of the number of optical fibers. The<br />
dynamic range for the ADC is reduced by using four<br />
different gains in a configuration called Floating Point<br />
PreAmplifier (FPPA)[1], Fig. 6. The signal before and after<br />
shaping is shown in Fig. 5. The noise requirements in this<br />
case are rather stringent: a) the light signal from lead<br />
tungstate is small, resulting in ~5 photoelectrons/MeV in a<br />
pair of APDs; b) APD gain is expected to be ~50; and c) the
[Fig. 4 block diagram: Energy → Light (PbWO₄) → Current (APD) → Voltage (FPPA) → Bits (ADC) → Light (fiber); digital pipelines, trigger sums Σ, and DAQ links on the upper-level VME readout card in the counting room.]<br />
Figure 4. Readout chain for the CMS electromagnetic calorimeter. Preamplifier with range selection (Floating Point Preamplifier – FPPA) and<br />
analog-to-digital converter are located at the detector (PbWO₄ crystals with avalanche photodiode readout). There is one fiber per crystal for data<br />
transmission from the detector. Digital pipelines are in the counting room.<br />
Figure 5. PbWO₄/APD signal before and after shaping. Sampling points<br />
every 25 ns are indicated.<br />
[Fig. 6 schematic: APD into an input transconductance stage (Q1, Q1′, Q2; dominated by R1) with external feedback (Rf, Cf, Cc) and baseline control referenced to the ADC; four current-feedback amplifiers with gains ×1, ×5, ×9, and ×33 (closed-loop BW 250 MHz) and matched R and C, feeding the FPU.]<br />
Figure 6. Circuit topology of the CMS ECAL floating point preamplifier.<br />
Four samples at different gains are stored every 25 ns. The sample<br />
with the highest gain below saturation is selected and fed to the ADC<br />
via a multiplexer, resulting in a waveform as in Fig. 7.<br />
capacitance of interconnections and APDs optimized with<br />
respect to the crystal and the sensitivity to shower particles<br />
is ~200 pF. The noise in an ideal case is determined by<br />
transistors Q1 and Q2 and by R1. The signal at the output of<br />
the analog multiplexer, composed of analog samples with<br />
different gains, is shown in Fig. 7.<br />
The FPPA has been fabricated in bipolar technology and<br />
functional tests have been satisfactory, but one more run<br />
is in process to satisfy the noise requirements.<br />
Power dissipation per channel is expected to be ~1.4<br />
watts, resulting in about 100 kW at the detector.<br />
Figure 7. Waveform at the output of the analog multiplexer of CMS<br />
ECAL floating point preamplifier (FPPA). It consists of consecutive<br />
samples every 25 ns, each sample taken from one of the four amplifiers<br />
which provides the best precision for a given signal amplitude (a short<br />
transient is superposed as boundaries between different gains are<br />
crossed; sample magnitudes are measured at points in between the<br />
transients).
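The range selection described in the caption can be expressed in a few lines (a sketch only: the gain set ×1, ×5, ×9, ×33 is taken from Fig. 6, while the 2.5 V full-scale value is an assumed illustration, not a CMS parameter):

```python
# Sketch of FPPA-style floating-point range selection. Four
# amplifiers with different gains run in parallel; every 25 ns the
# highest gain whose output is below full scale is multiplexed to
# the ADC, together with a code identifying the selected gain.
GAINS = (33, 9, 5, 1)     # highest to lowest
FULL_SCALE = 2.5          # assumed ADC full scale, volts

def select_range(v_in):
    """Return (gain, multiplexer output) for input amplitude v_in."""
    for g in GAINS:
        if g * v_in < FULL_SCALE:
            return g, g * v_in
    # The largest signals fall through to the lowest gain range.
    return GAINS[-1], GAINS[-1] * v_in

print(select_range(0.01))  # small signal: highest gain selected
print(select_range(0.5))   # large signal: lowest gain selected
```

The recorded gain code plays the role of a floating-point exponent, so a fixed-precision ADC covers the full dynamic range.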
[Fig. 8 block diagram: per channel, a test-pulse input, splitter, integrator, and encoder (CMS QIE) with flash ADC produce Exponent(1:0), CapID(1:0), and Mantissa(4:0); a channel-control ASIC distributes clock, reset, pedestal settings, and bunch-crossing zero, and an 800 Mbit/s serializer transfers Data(15:0) off the detector.]<br />
Figure 8. Block diagram of the CMS hadron tile calorimeter readout. The front end, including analog-to-digital conversion, is based on the<br />
pipelined multi-ranging integrator and encoder (refs. [2,3]), known as “QIE”. The encoder is based on a multi-ranging current splitter and a nonlinear<br />
flash ADC. Encoded samples are serialized and transferred from the detector by (one) optical link for every two channels.<br />
C. CMS Hadron Calorimeter<br />
The photodetector for the tile calorimeter is a hybrid<br />
photomultiplier (HPD), which can operate in the strong<br />
magnetic field. This unique approach [2,3], illustrated by<br />
the block diagram in Fig. 8, has been developed in an<br />
attempt to digitize the signal very near to its source. It<br />
combines a multi-ranging integrator and a nonlinear flash<br />
ADC, with a response as in Fig. 9. The nonlinear 5-bit ADC<br />
provides constant resolution of ~0.9% rms in each range.<br />
All of these functions have been incorporated in a single<br />
ASIC, realized in a BiCMOS technology. Functional tests<br />
of the prototype have been satisfactory. One more run<br />
before production may be needed.<br />
D. ATLAS Liquid Argon Calorimeter<br />
Each readout channel has three gain ranges with 12-13<br />
bit dynamic range each and linear response. It is also the<br />
only one of the LHC experiments using switched capacitor<br />
arrays as analog memories. After digitization, the data are<br />
transferred via optical links. The only communication via<br />
(differential) copper transmission lines is for analog sums<br />
for level 1 trigger. The power dissipation in the front-end<br />
board is ~0.7 watts/channel for all functions. The design<br />
of all the circuits located on the detectors in the front-end<br />
crates, which serve as a “Faraday Cage” (Figs. 10 and 11),<br />
has been completed. Analog parts (preamps, shapers, SCAs)<br />
are in mass production in radiation-resistant technologies.<br />
Figure 9. Response of the CMS charge integrator encoder (QIE) over<br />
the four gain ranges.
Figure 10. An illustration of the readout of the ATLAS liquid argon<br />
(barrel) EM calorimeter. The crates (“Faraday Cage”) containing all<br />
the functions outlined in the lower half of Fig. 11 are mounted directly<br />
on signal feedthroughs.<br />
The digital part has been prototyped in radiation hard<br />
technology. The emphasis in testing has been on fine<br />
effects important for calibration and coherent noise. These<br />
are illustrated in Figs. 12-14. Owing to the uniformity and<br />
stability of the ionization-calorimeter response, an accurate<br />
intercalibration by electronic means is practical. A small<br />
effect on calibration (a few tenths of one percent) of the<br />
small inductance of electrode connections is illustrated in<br />
Fig. 12. This effect is inversely proportional to the shaping<br />
time. Fig. 14 shows the dependence of the coherent noise<br />
on shielding and grounding of the front-end boards in the<br />
front-end crate. An attenuation of the EMI (from digital<br />
operations) of ~10⁶ is required to achieve a usable dynamic<br />
range approaching 10⁵.<br />
III. DYNAMIC RANGE IN THE FRONT-END<br />
All the readout schemes for a large dynamic range<br />
(approaching 10⁵) require multiple gain ranges (or multi-ranging)<br />
prior to analog-to-digital conversion at the speed<br />
of interest at the LHC. To achieve this dynamic range, an<br />
input stage with sufficiently low noise – where the noise<br />
of a single input transistor dominates – is required. An<br />
example is the configuration in Fig. 15. More generally,<br />
the problem is illustrated in Fig. 16. The noise value<br />
assumed is for the best bipolar junction transistors and<br />
advanced CMOS devices. It corresponds to an equivalent<br />
series noise resistance of ~15 ohms. Even if the intrinsic<br />
device noise could be reduced below this value (by<br />
increasing the device width and/or reducing the electron<br />
transit time), lower values are difficult to realize in practice<br />
due to additional resistances, e.g., in the base or in the<br />
metalization in monolithic circuits. The maximum signal<br />
at the preamplifier output is likely to be even less than 3<br />
volts, particularly as the trend to lower operating voltages<br />
continues. This limits the dynamic range of a linear front-end<br />
stage to about 10⁵ (an analysis with respect to the<br />
current gives the same result). This happens to be just<br />
sufficient for EM calorimetry at the LHC.<br />
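The arithmetic behind this limit can be checked with the numbers quoted in Fig. 16 (a back-of-the-envelope estimate; the second-stage noise contribution is neglected here):

```python
import math

e_n = 0.5e-9    # series noise of the input device, V/sqrt(Hz)
bw = 10e6       # bandwidth of interest, Hz
gain = 10.0     # preamplifier gain, chosen to overcome 2nd-stage noise
v_max = 3.0     # maximum output swing, V (technology dependent)

v_noise_in = e_n * math.sqrt(bw)    # ~1.6 uV rms referred to the input
v_noise_out = gain * v_noise_in     # rms noise at the preamplifier output
dynamic_range = v_max / v_noise_out

print(f"dynamic range ~ {dynamic_range:.1e}")  # on the order of 10^5
```

With the second-stage contribution included (the ~30 µV output noise of Fig. 16), the ratio drops toward 10⁵, the value quoted in the text.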
IV. MAIN CONCERNS FOR LHC CALORIMETRY<br />
ELECTRONICS<br />
A. Transition from R&D Mode to Production<br />
Mode<br />
A number of ASICs for almost all calorimeters need “one<br />
more iteration”. Finalizing an ASIC is a balance between<br />
deciding when and where to stop making incremental improvements<br />
and overcoming the designer’s reluctance to sign off on the final design.<br />
This has affected the construction schedules to the extent<br />
that electronics has become a critical path item in several<br />
calorimeter subsystems.<br />
B. Radiation Hardness<br />
The progress in designing circuits in radiation hard<br />
technologies and in testing and qualifying commercial-off-<br />
the-shelf components has been good, but this very tedious<br />
process will also have an adverse effect on the construction<br />
schedules. The advent of 0.25 micron CMOS technology,<br />
and the contribution by the CERN group to the design of<br />
standard cells, have been most valuable.<br />
C. Low Voltage Regulators<br />
Some readout boards require 10-20 radiation-resistant,<br />
low voltage regulators. These are under development by<br />
the CERN project RD49 with STMicroelectronics. Some<br />
development problems appear to have been overcome and<br />
a critical evaluation of a large number of samples is<br />
expected to be performed soon. These regulators are the<br />
most prominent item on the critical path list for the design<br />
and construction of readout boards.
[Fig. 11 block diagram: electrodes and motherboards with calibration inside the cryostat (T = 90 K); on-detector front-end crate (Faraday cage) with calibration board, preamps, shapers, SCAs (144 cells), buffering and ADC, layer sums and tower builder (E = Σ aᵢSᵢ, T = Σ bᵢSᵢ), controller, and a 32-bit, 40 MHz optical link; off-detector TTC interface, ROD, and level-1 processor in the trigger cavern.]<br />
Figure 11. Readout chain of the ATLAS liquid argon calorimeter. After the preamplifiers, the readout chain is identical for all the liquid argon<br />
calorimeters. The end-cap hadron calorimeter has preamplifiers based on GaAs technology at the electrodes inside the cryostat. Each preamplifier<br />
output is split into three shaping amplifiers with different gains and analog (switched capacitor) memories, and then digitized with 12 bit<br />
resolution. Data are transferred from the detector via one optical link for every 128 signal channels.<br />
[Fig. 12 schematic: a 16-bit DAC sets a dc current through 0.1% precision 5 Ω resistors switched by large PMOS devices; calibration logic (SPAC slave, TTCRx) drives 128 outputs per board.]<br />
Figure 12. Calibration signal generator. A dc current controlled to 0.1% is switched off to generate on an LR network an exponentially decaying<br />
current pulse which approximates closely (within the shaping time) the calorimeter signal.<br />
[Figs. 13 and 14: calibration-signal injection (calibration board, motherboard and summing board, R_inj, preamplifier output, detector) and coherent noise (ADC counts rms) for different shield and grounding configurations of the front-end boards, e.g. with no shields.]<br />
Max signal: 3 TeV, ~7.5 mA, 2.5 V<br />
Random noise/channel: 40 MeV, 100 nA, 33 µV<br />
Sum of 64 channels: 320 MeV, 0.8 µA, 260 µV<br />
Coherent noise in the sum: ~200 nA, < 64 µV<br />
Coherent noise in one channel: 3 nA<br />
Z_IN = (1/g_m1)/G + R_C1/(1 + R2/R1),    G = R_C2/(1/g_m2 + R1//R2)<br />
Figure 15. LAr calorimeter preamplifier configuration with well defined<br />
input impedance. The conversion gain (output voltage/input current)<br />
is determined only by R_C1, and the noise is determined only by T2.<br />
E. Availability of Semiconductor Technology<br />
and Lack of Resources for Spares<br />
We have to assume that some (or most) of the technologies<br />
of ASICs will not be available throughout the lifetime of the<br />
experiments. This requires careful planning for acquisition<br />
of ASICs, and/or additional wafers for any replacement<br />
maintenance.<br />
V. FUTURE CHALLENGES – “ENERGY AND LUMINOSITY<br />
FRONTIERS”<br />
From recent discussions and studies about future hadron<br />
accelerator developments, two major advances are being<br />
considered. One is a continuing quest for increasing<br />
luminosity – an increase by an order of magnitude at the<br />
LHC is already being contemplated. Considering the<br />
difficulties that had to be overcome, and the time and effort<br />
still being expended in the development of the radiation<br />
hard electronics for the present design luminosity, this will<br />
be a challenge which will require a renewed major R&D<br />
effort.<br />
On a longer time scale, there is also a continuing quest<br />
for higher energies (Snowmass 2001). The dynamic range<br />
required in EM calorimetry at the LHC is just about at the<br />
limit that a front-end amplifier device can accommodate<br />
in linear regime, as discussed in Section III. A very large<br />
hadron collider (“VLHC”) will require a different approach<br />
to the dynamic range problem than the present designs for<br />
the LHC.<br />
[Fig. 16 schematic: detector capacitance C_D, feedback Z_f, preamplifier gain ~10 with series noise e_n ≅ 0.5 nV/√Hz and BW ≅ 10 MHz; second-stage noise n·e_n, with n ≥ 2, contributing ~3 µV.]<br />
Noise at preamplifier output ≅ 30 µV (Gain ~ 10 to overcome second<br />
stage noise)<br />
Max signal at preamplifier output ≅3V (technology dependent)<br />
Max dynamic range (with respect to rms noise) ~ 10⁵ (or 16-17 bits)<br />
Figure 16. The dynamic range of a linear preamplifier is limited by<br />
the ratio of the maximum voltage amplitude at the output to<br />
the noise over the bandwidth of interest.<br />
VI. ACKNOWLEDGEMENTS<br />
The information and some of the material for this report<br />
have been generously provided by J. Christiansen for LHCb,<br />
P. Denes and J. Elias for the CMS, B. Skaali for ALICE, and<br />
W. Cleland and C. de La Taille for ATLAS. Discussions<br />
with them are gratefully acknowledged. In the brief<br />
comments here, it was not possible to acknowledge the<br />
large efforts and individual contributions that went into<br />
the development of the elaborate readout electronics for<br />
LHC calorimeters.<br />
The author is grateful to his colleague, B. Yu, for his<br />
help in preparing this report.<br />
VII. REFERENCES<br />
1. P. Denes, private communication.<br />
2. R. Yarema, et al., “A Pipelined Multiranging Integrator<br />
and Encoder ASIC for Fast Digitization of Photomultiplier<br />
Tube Signals”, Fermilab-Conf-92/148.<br />
3. T. Zimmerman and M. Sarraj, “A Second Generation<br />
Integrator and Encoder ASIC”, IEEE Trans. on Nucl. Sci.,<br />
Vol. 43, No. 3, June 1996, p. 1683.<br />
4. H. Takai, et al., “Development of Radiation Hardened<br />
DC-DC Converter for the ATLAS Liquid Argon<br />
Calorimeter”, these Proceedings.
I. EVOLUTION AND REVOLUTION<br />
FPGA progress is evolutionary and revolutionary.<br />
Evolution results in bigger, faster, and cheaper FPGAs, in<br />
better software with fewer bugs and faster compile times, and<br />
in better technical support.<br />
Users expect large capacity at reasonable cost (100,000 to<br />
millions of gates, on-chip RAM, DSP support through fast<br />
adders and dedicated multipliers). System clock rates now<br />
exceed 150 MHz, which requires sophisticated clock<br />
management. I/Os have to be compatible with many new<br />
standards, and must be able to drive transmission lines.<br />
Designers are in a hurry, and expect push-button tools with<br />
fast compile times, and a wide range of proven, reliable<br />
cores, including microprocessors. And power consumption is<br />
a serious concern.<br />
Progress is driven by semiconductor technology, giving us<br />
smaller geometries, and more and faster transistors. Improved<br />
wafer defect density makes it possible to build larger and<br />
denser chips on larger wafers at lower cost.<br />
Innovative architectural and circuit features are equally<br />
important, as are advancements in design methodology,<br />
modular team-based design, and even internet-based<br />
configuration methods.<br />
Evolution, Revolution, and Convolution<br />
Recent Progress in Field-Programmable Logic<br />
P. Alfke<br />
Xilinx, 2100 Logic Drive, San Jose, California 95124<br />
peter.alfke@xilinx.com<br />
Figure 1: A Decade of Progress. [Log-scale plot, 1/91 to 1/01: capacity, speed, and price trends (1x to 1000x) for the XC4000, Spartan, Virtex & Virtex-E, and Virtex-II families (capacity excl. Block RAM).]<br />
II. HISTORY<br />
Over the past 10 years, the maximum FPGA capacity has<br />
increased more than 200-fold (from 7,000 to 1.5 million<br />
gates), speed has increased more than 20-fold, and the cost<br />
of 10,000 gates of functionality has decreased by a factor of<br />
over a hundred. There is every indication that this evolution,<br />
the result of “Moore’s Law”, will continue for many more<br />
years.<br />
Supply voltage is dictated by device geometries, notably<br />
oxide thickness, and is on a steady downward path. This<br />
results in faster and cheaper chips, and it reduces power<br />
consumption dramatically, but it also causes problems in<br />
power distribution and decoupling on the PC-board. That is<br />
the price of progress!<br />
XC4000 and Spartan families use a 5-V supply; the –XL<br />
families use 3.3 V; Virtex and Spartan-II use 2.5 V (but also<br />
3.3 V for I/O). Virtex-E uses 1.8 V, and Virtex-II and the<br />
upcoming Virtex-IIPro use 1.5 V, but maintain 3.3-V<br />
tolerance on their outputs.<br />
Over the past 16 years, Xilinx has introduced a series of<br />
FPGA families with increasing capabilities in size and in<br />
features.<br />
Figure 2: Logic Capacity and Features<br />
LUTs & FFs Additional Features<br />
• XC4000/Spartan: 152…12,312 Carry, LUT-RAM<br />
• Virtex/Spartan-II: 432…27,648 4K-BlockRAM, DLL, SRL16<br />
• Virtex-E: 1,728…43,200 differential I/O<br />
• Virtex-II: 512…67,548 18K-BlockRAM,<br />
Multipliers, DCM,<br />
Controlled Impedance I/O<br />
• Virtex-II Pro: 2,816…45,184 PowerPC,<br />
3.125 Gbit/sec I/O
Many of the earlier families are still in production (except<br />
XC2000 and XC6200) but the old 5-V families should not be<br />
considered for new designs. 5V was the dominant standard<br />
for over 30 years, but it is now obsolete. Designers must learn<br />
to migrate fast to the newer families that provide a much<br />
more attractive cost/performance ratio. As a general rule, IC<br />
technology matures 15 times faster than a human being. A<br />
technology introduced barely 4 years ago is now well beyond<br />
its prime, and should not be a candidate for new designs,<br />
except in certain niche applications.<br />
For new designs, use Spartan-II, Virtex, and Virtex-E for<br />
their maturity, availability and price, use Virtex-II for higher<br />
performance and advanced features. But for designs starting<br />
in 2002, consider Virtex-IIPro with on-chip PowerPC<br />
microprocessors and gigabit serial I/O.<br />
III. EVOLUTIONARY FEATURES<br />
Virtex devices offer better global clock distribution with<br />
short delays and extremely small skew.<br />
This flexibility is essential when the FPGA must interface<br />
to a wide variety of other ICs. The drive capability is<br />
important for driving transmission lines, since many<br />
interconnect lines must now be treated as transmission lines.<br />
Signal delay on a PC-board is 50…70 ps per cm, which<br />
means that - at a 1-ns transition time - interconnects as short<br />
as 7 cm must be treated as transmission lines to avoid<br />
excessive ringing and other signal integrity issues. The line<br />
must be terminated either at the driving end (series<br />
termination) or at the far end (parallel termination).<br />
Placing these termination resistors around and very close<br />
to 400 – 1100-pin fine-pitch BGA packages is not only<br />
difficult and expensive, but also wasteful in PC-board area.<br />
That’s why Virtex-II now has an option that converts any<br />
output into a controlled-impedance driver, matched to the line<br />
it has to drive. Or any input can be made a termination<br />
resistor. All this is implemented in the I/O buffer on the chip,<br />
right where it is needed. There is no cost and no wasted<br />
space. Digitally controlled impedance is the only practical<br />
way to deal with fast signal edges between high pin-count<br />
packages. And it is available today.<br />
Figure 5: Digitally Controlled Impedance. [Conventional I/Os need external ~12 Ω, 33 Ω, and 50 Ω resistors on 50 Ω lines; with SelectIO-Ultra an on-chip impedance controller eliminates the external resistors and the FPGA maintains the impedance.]<br />
Figure 6: PC Board Routing Impact. [Conventional: a termination resistor between IC1 and IC2 on every line, multiplied by 1000 pins per chip and N chips per board. DCI: no resistor, fewer layers, fewer resistors, smaller board.]<br />
In the past, system clock rates have doubled every 5<br />
years, and IC geometries have shrunk 50% every 5 years.<br />
Trace width on the PC-board has always been about 100<br />
times wider than inside the IC. Whenever the clock rate<br />
doubles, the distance a signal can travel in, say 25% of a<br />
clock period, is being cut in half. At 3 MHz in 1970 it was 20<br />
m, at 200 MHz in 2000 it was barely 30 cm, and it will shrink<br />
to 15 cm in 2005, and 7 cm in 2010, as system clock rates<br />
keep doubling. Not a pretty outlook!<br />
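These distances follow directly from the propagation delay (a rough check, assuming 60 ps/cm, the middle of the 50-70 ps/cm range given above):

```python
DELAY_PS_PER_CM = 60.0  # assumed PC-board propagation delay

def quarter_period_cm(clock_mhz):
    """Distance a signal travels in 25% of one clock period, in cm."""
    period_ps = 1e6 / clock_mhz
    return 0.25 * period_ps / DELAY_PS_PER_CM

for f_mhz in (3, 200, 400):
    print(f"{f_mhz} MHz -> {quarter_period_cm(f_mhz):.0f} cm")
```

This reproduces the order of magnitude of the figures quoted above: tens of meters at 3 MHz, tens of centimeters at 200 MHz, and roughly half that again as the clock doubles.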
This indicates the demise of traditional synchronous board<br />
design. The next wave will be source-synchronous design,<br />
where the clock is intermingled with the data busses, and<br />
clock delay thus equals data delay. High-speed designs will<br />
use double-data-rate clocking, which means the clock<br />
bandwidth need not be higher than the max data bandwidth.<br />
The disadvantage of source-synchronous clocking is the<br />
unidirectional nature of the clock distribution, and thus the<br />
need for significantly more clock pins and clock lines, and<br />
the need to handle multiple clock domains on-chip.<br />
Figure 7: Evolution<br />
                              1965    1980    1995    2010<br />
Max Clock Rate (MHz)          1       10      100     1000<br />
Min IC Geometry (µ)           -       5       0.5     0.05<br />
Number of IC Metal Layers     1       2       3       10<br />
PC Board Trace Width (µ)      2000    500     100     25<br />
Number of Board Layers        1-2     2-4     4-8     8-16<br />
Every 5 years: System speed doubles, IC geometry shrinks 50%<br />
Every 7-8 years: PC-board min trace width shrinks 50%<br />
Figure 8: Moore vs. Einstein<br />
[Log-scale plot (“Moore Meets Einstein”), 1965-2010: clock frequency in MHz rises while trace length in cm per 1/4 clock period falls; speed doubles every 5 years, but the speed of light never changes.]<br />
The future solution is bit-serial self-clocking data transfer at<br />
gigabit rates, first 3.125 Gbps for 2.5 Gbps data rate in 2002,<br />
and up to 10 Gbps later. This approach saves pins and makes<br />
physical distances almost irrelevant, especially when using<br />
optical interconnects. The on-chip serializer/ deserializer<br />
(SERDES) performs the function of an ultra-fast UART with<br />
a PLL for clock recovery, 8B/10B encoding/decoding and<br />
local FIFOs, to reduce the parallel data rate by a factor of 16<br />
or even 32.<br />
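The relation between the quoted line and payload rates is simply the 8B/10B coding overhead:

```python
# 8B/10B encoding sends each 8-bit byte as a 10-bit code, so the
# serial line rate is 10/8 of the payload data rate.
def line_rate_gbps(data_gbps):
    return data_gbps * 10 / 8

print(line_rate_gbps(2.5))   # 2.5 Gbps payload -> 3.125 Gbps line rate
```

The 25% overhead buys dc balance and enough transitions for the receiver PLL to recover the clock from the data stream itself.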
C. Microprocessors<br />
Incorporating a microprocessor inside the FPGA gives the<br />
user additional freedom to divide the task at hand: use the<br />
FPGA fabric for its very fast, massively parallel operation,<br />
and the microprocessor for the more sophisticated sequential,<br />
and thus slower operations. Soft implementations are<br />
available today. MicroBlaze from Xilinx is a 32-bit RISC<br />
processor running at 125 MHz and using less than 900 Logic<br />
Cells.<br />
B. Designing for Signal Integrity<br />
Signal Integrity refers to signal quality on the PC-board,<br />
where it is important to avoid reflections, which show up<br />
as ringing, resulting in erroneous clocking or even data dropout.<br />
The user should develop a good understanding of<br />
transmission-line effects, and the various methods to<br />
terminate the lines.<br />
The controlled-impedance output drivers, available on all<br />
Virtex-II outputs, are a big help.<br />
Power supply decoupling is becoming more and more<br />
important. In CMOS circuits, power-supply current is<br />
predominantly dynamic. In a single-clock synchronous<br />
system, there is a supply-current spike during each active<br />
clock edge, but no current in-between. This dynamic current<br />
can be many times the measured dc value, and these current<br />
spikes cannot possibly be supplied from the far-away power<br />
supply. They must come from the local decoupling<br />
capacitors. The rule is: attach one 0.01 to 0.1 µF capacitor very close<br />
to each Vcc pin, and tie it directly to the ground plane.<br />
The capacitance is not critical, low resistance and inductance<br />
are far more important. Two capacitors in parallel are much<br />
better than one large capacitor.<br />
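The parallel-capacitor advice can be checked with a simple series R-L-C model of a real capacitor; the ESR and ESL values below are assumed typical figures, not from the text.

```python
import math

def cap_impedance(f_hz, c_farads, esr_ohms, esl_henries):
    """|Z| of a real capacitor modeled as a series R-L-C network."""
    x = 2 * math.pi * f_hz * esl_henries - 1 / (2 * math.pi * f_hz * c_farads)
    return math.sqrt(esr_ohms**2 + x**2)

# Two identical capacitors in parallel halve ESR and ESL (and double C),
# so the impedance at any frequency is half that of a single part, which
# is why several small capacitors beat one large one.
f = 100e6                                              # 100 MHz (illustrative)
z_one = cap_impedance(f, 0.1e-6, 0.05, 1e-9)           # assumed 0.1 uF, 50 mOhm, 1 nH
z_two = cap_impedance(f, 2 * 0.1e-6, 0.05 / 2, 1e-9 / 2)
```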
Model the PC-board behavior with HyperLynx. Multilayer<br />
PC-boards with uninterrupted ground- and Vcc planes<br />
are a must, as is the controlled-impedance routing of clock<br />
lines.<br />
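The termination advice follows from the reflection coefficient at the end of a transmission line; a minimal sketch, assuming a typical 50-ohm trace impedance:

```python
def reflection_coefficient(z_load, z0=50.0):
    """rho = (ZL - Z0) / (ZL + Z0): 0 means matched, no reflected wave."""
    return (z_load - z0) / (z_load + z0)

# A matched termination absorbs the incident wave completely; an open or
# badly mismatched load reflects most of it back as ringing.
matched = reflection_coefficient(50.0)    # 0.0
open_end = reflection_coefficient(1e9)    # close to +1
short = reflection_coefficient(0.0)       # -1.0
```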
1) Tricks of the Trade<br />
To improve signal integrity, reduce output strength. Both<br />
LVTTL and LVCMOS have options for 2, 4, 6, 8, 12, 16, and<br />
24 mA sink and source current. Controlled-impedance outputs<br />
(series termination) are even better, but watch out for loads<br />
that are distributed along the line. They will see a staircase<br />
voltage, which will cause severe problems.<br />
Explore different supply voltages and I/O standards.<br />
Optimize drive capability and input threshold for the task at<br />
hand. Use differential signaling, e.g. LVDS when necessary.<br />
Avoid unnecessary fan-out, load capacitance and trace length.<br />
To combat Simultaneous Switching Output (SSO)<br />
problems causing ground bounce, add virtual ground pins:<br />
high sink-current output pins that are internally and<br />
externally connected to ground.<br />
2) Test for Performance and Reliability<br />
You can manipulate the IC speed while it sits on the<br />
board:<br />
High temperature and low Vcc = slow operation,<br />
Low temperature and high Vcc = fast operation.<br />
If operation fails at hot, the circuit is not fast enough.<br />
Check the design for speed bottlenecks, add pipeline stages,<br />
or buy a faster speed-grade device.<br />
If operation fails at cold, the circuit is too fast. Check the<br />
design for signal integrity and hold-time issues, check for<br />
clock reflections. Look for internal clock delays causing<br />
hold-time issues, look for “dirty asynchronous tricks” inside<br />
the chip, like decoders driving clocks. In short, if it fails cold,<br />
there is something wrong with the design, not with the<br />
device.<br />
C. BlockROM State Machines<br />
The Virtex-II BlockROMs can be used as surprisingly<br />
efficient state machines.<br />
With a common algorithm stored in the RAM (used as<br />
ROM), one BlockRAM can implement a 20-bit binary or<br />
Gray counter, or a 6-digit BCD counter (with the help of one<br />
additional CLB). More generally, the two ports of one<br />
BlockRAM can each be assigned one half of the RAM space, and<br />
one port can be configured as 1K x 9, usable as a 256-state,<br />
4-way-branch Finite State Machine. The other port can be<br />
configured 256 x 36, sharing its eight address inputs with the<br />
first port. This one BlockRAM, without any additional logic,<br />
is a 256-state Finite State Machine where each state can jump<br />
to any four other states under the control of two inputs, and<br />
each state has 37 arbitrarily assigned outputs. There are no<br />
constraints, and the design runs at >150 MHz.<br />
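The addressing scheme described above can be sketched in software: the ROM address is the current 8-bit state concatenated with the branch-control bits, and each ROM word holds the next state. The transition rules below are a hypothetical example; the real 1K x 9 port would also carry the extra control bit, and the 36 outputs would come from the second port.

```python
# ROM-based finite state machine: next_state = rom[(branch << 8) | state].

def build_rom(next_state_fn, n_states=256, n_branch=4):
    """Fill a (n_branch * n_states)-entry next-state ROM."""
    rom = [0] * (n_branch * n_states)
    for branch in range(n_branch):
        for state in range(n_states):
            rom[(branch << 8) | state] = next_state_fn(state, branch)
    return rom

# Toy transition rule (hypothetical): branch 0 holds, 1 increments,
# 2 decrements, 3 jumps to state 0.
rules = [lambda s: s, lambda s: (s + 1) % 256, lambda s: (s - 1) % 256, lambda s: 0]
rom = build_rom(lambda s, b: rules[b](s))

state = 10
state = rom[(1 << 8) | state]   # branch 1: increment
state = rom[(3 << 8) | state]   # branch 3: jump to 0
```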
Figure 10: BlockRAM State Machine. A 1K x 9 BlockROM takes the 8-bit state plus 2 branch-control bits as its address and returns the next state (8 + 1 bits): 256 states, 4-way branch, 150 MHz operation. A second port configured as 256 x 36 provides 36 additional parallel outputs.<br />
D. Designing for Radiation Tolerance<br />
Radiation can hurt CMOS circuits in three different ways:<br />
In the extreme case, it can trigger the parasitic SCR<br />
structure in any CMOS buffer, creating a very low impedance<br />
path. This is called latch-up, and it often destroys the device.<br />
In the best case, it requires Vcc recycling.<br />
“Total dose” effects cause premature aging (threshold<br />
shifts, increased leakage current, and decreased transistor<br />
gain) over time, usually over weeks and months.<br />
There is always the probability of “single-event upsets”<br />
that cause data corruption by changing the state of a flip-flop,<br />
causing a non-destructive soft error.<br />
Xilinx offers variations of certain XC4000XL and Virtex<br />
circuits, manufactured with an epitaxial layer underneath the<br />
transistors, but otherwise identical to their namesake non-epitaxial<br />
commercial parts. These devices have been tested to<br />
be immune to latch-up for radiation up to 120 MeV·cm2/mg<br />
at 125 °C.<br />
These devices tolerate between 60 and 300 krads of total<br />
ionizing dose.<br />
As with all CMOS circuits, there is a probability of<br />
single-event upsets. But these can easily be detected by<br />
readback of the configuration and flip-flop data, and they can<br />
be mitigated by continuous scrubbing and partial<br />
reconfiguration.<br />
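The readback-and-scrub idea can be sketched as a loop that compares the configuration read back from the device against a golden copy and rewrites any frame that differs. The frame granularity and the `write_frame` callback are placeholders for device-specific details not covered here.

```python
# Minimal scrubbing sketch: detect upset configuration frames by comparison
# with a golden image, then rewrite only the differing frames.

def scrub(golden_frames, readback_frames, write_frame):
    """Rewrite every configuration frame whose readback differs from golden."""
    corrected = []
    for i, (good, seen) in enumerate(zip(golden_frames, readback_frames)):
        if good != seen:
            write_frame(i, good)   # stands in for partial reconfiguration
            corrected.append(i)
    return corrected

device = [0xAA, 0x55, 0xF0, 0x0F]   # frame 2 has suffered an upset
golden = [0xAA, 0x55, 0xFF, 0x0F]
fixed = scrub(golden, device, lambda i, v: device.__setitem__(i, v))
```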
Xilinx and Xilinx users have also tested designs using<br />
triple redundancy to avoid any functional interrupt. For<br />
details, see:<br />
www.xilinx.com/products/hirel_qml.htm<br />
VI. CIRCUIT TRICKS FROM THE XILINX ARCHIVES.<br />
A. Asynchronous clock multiplexing<br />
This circuit handles three totally asynchronous inputs,<br />
Clock A, Clock B, and Select. The output is guaranteed not to<br />
have any glitches or shortened pulses.<br />
Figure 11: Asynchronous Clock MUXing. Two cross-coupled flip-flops (QA and QB) gate Clock A and Clock B under control of Select to form the output clock.<br />
The circuit waits for the presently selected clock signal to<br />
go Low, then keeps its output Low until the other clock input<br />
goes Low and then High.<br />
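The handover described above can be checked with a behavioral sketch. The modeling here is an assumption consistent with the description: each gating flip-flop clocks on the falling edge of its own clock, select = 0 picks Clock A, select = 1 picks Clock B, and the output is (A and QA) or (B and QB).

```python
# Behavioral model of the glitch-free asynchronous clock mux.

def simulate(clk_a, clk_b, select):
    qa = qb = 0
    prev_a, prev_b = clk_a[0], clk_b[0]
    out = []
    for a, b, sel in zip(clk_a, clk_b, select):
        if prev_a == 1 and a == 0:            # falling edge of Clock A
            qa = 1 if (sel == 0 and qb == 0) else 0
        if prev_b == 1 and b == 0:            # falling edge of Clock B
            qb = 1 if (sel == 1 and qa == 0) else 0
        out.append((a & qa) | (b & qb))
        prev_a, prev_b = a, b
    return out

# Handover from A to B: the output stays Low until A has gone Low and B has
# completed a full Low-then-High transition, so no shortened pulse appears.
a   = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
b   = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
sel = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
waveform = simulate(a, b, sel)
```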
B. Schmitt Trigger<br />
This simple circuit provides user-defined hysteresis on<br />
one input, but it requires the use of two device pins, plus two<br />
external resistors. It is practical only when significant<br />
hysteresis is absolutely required.<br />
Figure 12: Schmitt Trigger. Two device pins and two external resistors feed the signal back into the FPGA logic; hysteresis = 10% of Vcc.<br />
C. RC Oscillator<br />
This circuit has a wide frequency range, using resistors<br />
from 100 Ω to 100 kΩ and capacitors from 100 pF to<br />
1 µF. The circuit is guaranteed to start up, is<br />
insensitive to Vcc and temperature changes, and can easily be<br />
turned on or off from inside the chip.<br />
Figure 13: RC Oscillator. An external network of resistors and a capacitor connects an FPGA output pin back to an input pin.<br />
D. Coping with Clock Reflections<br />
In some cases, the user may have to accept bad clock<br />
reflections. When the PC-board is already laid out it may cost<br />
too much time and money to change the clock lines to have<br />
good signal integrity. The following two circuits suppress the<br />
effect of incoming clock ringing.<br />
The first circuit suppresses ringing on the active clock<br />
edge, shown here as the rising clock edge. A delay in front of<br />
its D input can make any flip-flop insensitive to fast double<br />
triggering. Since the extra clock pulse usually occurs within<br />
2 ns after the active clock edge, the added delay need only be<br />
a few ns, and will thus not interfere with normal operation,<br />
e.g. of a counter.<br />
Figure 14: Reflection on the Active Edge. Problem: a double pulse on the active edge. Solution: delay the D input, so the flip-flop cannot toggle again immediately.<br />
The second circuit protects against ringing on the other<br />
clock edge, when the flip-flop mysteriously seems to change<br />
state on the wrong clock polarity. No flip-flop can possibly<br />
change state on the wrong polarity clock edge! This<br />
perplexing problem can easily be resolved by using the<br />
inverted clock as a delayed enable input. Right after the<br />
falling clock edge, the flip-flop is still disabled and will,<br />
therefore, ignore the double pulse on the clock line.<br />
Figure 15: Reflection on the Inactive Edge. The inverted clock drives a delayed clock-enable (CE); the waveforms show the external and internal clocks, data, delayed data, and clock enable.<br />
These circuits are just Band-Aids for a poorly executed<br />
design, but they have proven useful in desperate cases.<br />
E. Floating-Point Adder/Multiplier<br />
The combinatorial multiplier in Virtex-II can also be used<br />
as a shifter. Four multipliers can multiply 32 x 32 bits, and<br />
other multipliers can perform the normalizing shift<br />
operations.<br />
This makes it possible to design either IEEE-standard or<br />
even other performance-optimized floating-point units. Fast<br />
floating point is now possible in FPGAs.<br />
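The "multiplier as shifter" trick rests on the fact that multiplying by a power of two is a left shift, which is how hard multipliers can perform the normalizing shifts of a floating-point datapath. A sketch in plain integer arithmetic (the 32-bit width is an assumed example):

```python
# Left shift implemented as a multiply, as a hard multiplier block would
# do it, truncated to the datapath width.

def shift_left_via_multiply(value, shift, width=32):
    return (value * (1 << shift)) & ((1 << width) - 1)

def normalize(mantissa, width=32):
    """Shift a nonzero mantissa up until its MSB is set; return (mantissa, shift)."""
    shift = 0
    while not (mantissa >> (width - 1)) & 1:
        mantissa = shift_left_via_multiply(mantissa, 1, width)
        shift += 1
    return mantissa, shift
```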
VII. THE FUTURE<br />
In 2005, FPGAs will implement 50 million system gates,<br />
have 2 billion transistors on-chip, using 70-nm technology,<br />
with 10 layers of copper metal. An abundance of hard and<br />
soft cores will be available, among them microprocessors<br />
running at a 1-GHz clock rate, and there will be a direct<br />
interface to 10 Gbps serial data.<br />
FPGAs have not only become bigger, faster, and cheaper.<br />
They now incorporate a wide variety of system functions.<br />
FPGAs have truly evolved from glue logic to cost-effective<br />
system platforms.<br />
VIII. LIST OF GOOD URLS<br />
— www.xilinx.com<br />
— www.xilinx.com/support/sitemap.htm<br />
— www.xilinx.com/products/virtex/handbook/index.htm<br />
— www.xilinx.com/support/techxclusives/techX-home.htm<br />
— www.xilinx.com/support/troubleshoot/psolvers.htm<br />
General FPGA-oriented Websites:<br />
— www.fpga-faq.com<br />
— www.optimagic.com<br />
Newsgroup: comp.arch.fpga<br />
All datasheets: www.datasheetlocator.com<br />
Search Engine (personal preference): www.google.com
Single Event Upset Tests of Commercial FPGA for Space Applications 1<br />
Abstract<br />
Space-based systems are looking more and more to the<br />
benefits of high-performance, reconfigurable computing<br />
systems and Commercial Off-The-Shelf (COTS) components.<br />
One critical reliability concern is the behaviour of<br />
complex integrated circuits in a radiation environment. Field<br />
programmable gate arrays (FPGAs) are well suited for the<br />
small volumes in space applications. These products<br />
are driven by the commercial sector, so devices intended for<br />
the space environment must be adapted from commercial<br />
products. Heavy ion characterisation has been performed on<br />
several FPGA types and technologies to evaluate the on-orbit<br />
radiation performance. As the geometry keeps<br />
shrinking, the relative importance of various radiation<br />
effects may change. Investigation of radiation effects on<br />
each technology generation is found to be necessary. This<br />
paper presents methodologies and results of radiation tests<br />
performed on commercial FPGAs for space applications.<br />
Mitigation of Single Event Upsets will be discussed.<br />
I. INTRODUCTION<br />
Programmable logic has advantages over ASIC designs<br />
for the space community in faster and cheaper<br />
prototyping and reduced lead-time before flight. FPGAs<br />
based on antifuse technology are frequently used in space<br />
applications. Reprogrammable logic would offer the additional<br />
benefit of allowing on-orbit design changes. From a Single<br />
Event Upset point of view, the antifuse technology has<br />
offered better control and reliability. However, mitigation<br />
methods for reprogrammable logic technologies are under<br />
constant development. This paper discusses the Heavy Ion<br />
SEU testing of several Actel antifuse-based FPGAs and a<br />
Xilinx Virtex FPGA.<br />
Stanley Mattsson<br />
Saab Ericsson Space AB, S-40515 Goteborg, Sweden<br />
stanley.mattsson@space.se<br />
1 This work, performed by Saab Ericsson Space, is supported by the European Space Agency.<br />
II. RADIATION TEST SYSTEM<br />
A. Test Board<br />
The test system developed by Saab Ericsson Space<br />
consists of two boards: one Controller board managing the<br />
test sequence and the serial interface to the PC, and one<br />
DUT board housing two Devices Under Test (DUT). A<br />
schematic drawing is given in Fig. 3.<br />
The Controller board tests one DUT at a time using a<br />
"virtual golden chip" test method. The principle of the<br />
measuring technique is to compare each output from the<br />
DUT with the correct data stored in SRAMs. The general<br />
procedure for the tests is to load data into the devices<br />
under test, pause for a pre-set time, read the data out, and<br />
analyse the errors for various error-type signatures. New<br />
data are loaded into the DUT at the same time as old data are<br />
read out. When an error is detected (when outputs do not<br />
match), the state of all outputs and the position in the cycle of the<br />
failing shift register are temporarily stored in FIFOs. Data<br />
in the FIFOs is continually sent to a PC through an RS232<br />
serial interface. After each test run the data are analysed<br />
and stored in a database by the controlling PC. For each<br />
DUT, errors can be traced down to the logic module, logic<br />
value and position.<br />
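The core of the "virtual golden chip" comparison described above can be sketched as follows: each DUT output word is checked against the expected data (stored in SRAMs on the real board), and any mismatch is logged together with its position, as the FIFO-to-PC path does here. The bit patterns are illustrative.

```python
# Golden-model comparison: report (position, expected, observed) for every
# mismatching output word of a test run.

def compare_run(expected, observed):
    return [(i, e, o)
            for i, (e, o) in enumerate(zip(expected, observed))
            if e != o]

expected = [0b1010, 0b0101, 0b1111, 0b0000]
observed = [0b1010, 0b0111, 0b1111, 0b0000]   # one upset in word 1
errors = compare_run(expected, observed)
```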
B. Test Facility<br />
Heavy ion tests were performed at the CYClotron of<br />
LOuvain la NEuve (CYCLONE), Belgium. This accelerator<br />
covers an energy range of 0.6 to 27.5 MeV/AMU for<br />
heavy ions produced in a double-stage ECR source. The<br />
use of an ECR source allows the acceleration of an ion<br />
"cocktail" composed of ions with very similar mass-over-charge<br />
ratios. The preferred ion is selected by fine-tuning of<br />
the magnetic field or a slight change of the RF frequency.<br />
Within the same cocktail it takes only a few minutes to<br />
change ion species.<br />
The facility provides beam diagnostics and control with<br />
continuous monitoring of beam fluence and flux via plastic<br />
scintillators. The irradiations are performed in a large<br />
vacuum chamber with the test board mounted on a movable<br />
frame. Normally each device is tested with a variety of<br />
atomic species up to a fluence of 1e+6 to 1e+7 ions/cm2,<br />
depending on the cross section of the device under test.<br />
III. ANTIFUSE FPGA TECHNOLOGY<br />
FPGAs from Actel Corporation are widely used in<br />
aerospace applications. The company has been providing<br />
products to the stringent space requirements for several<br />
years. In recent years several new products have<br />
been introduced with the aim of improved radiation<br />
resistance and logic circuit density.<br />
The company uses several different manufacturers for<br />
wafer production. Only wafers manufactured by<br />
Matsushita (MEC) are used in the products for space. The<br />
same products, sold under the same electrical specification,<br />
are likely manufactured in several fabs. Some of these<br />
products have been tested for total dose and found to be<br />
good for only a few krad(Si). Over the years many<br />
SEU tests using heavy ions have been performed on Actel<br />
products, both to determine the SEU probability for the user<br />
logic and to determine the effects of heavy ions on the<br />
antifuses [1]. Results obtained by Saab Ericsson Space are<br />
presented below.<br />
IV. RESULTS ON ANTIFUSE FPGA<br />
All results presented below have been tested with the<br />
same test method and test board described above.<br />
Figure 1: Heavy ion data on the Actel A14100A S-module for<br />
5V and 3.3V biasing conditions, as cross section versus LET<br />
(MeV/mg/cm2). A flip from logic "0" to logic "1" is noted S0;<br />
the opposite is noted S1. The dashed curves show the 3.3V<br />
data for the two SEU modes.<br />
A. Actel A14100A<br />
This FPGA is manufactured by Matsushita (MEC) in an<br />
antifuse ONO-gate 0.8 μm two-level-metal CMOS technology<br />
with 1153 logic modules. The SEU behaviour of this device<br />
is very typical for Actel devices. Biasing at 3.3V gives a<br />
higher SEU probability. Actel has a large asymmetry in the<br />
flip-flop sensitivity between flips from logic "zero" to logic<br />
"one" and the reverse. This device type has been<br />
on the market for several years and, according to Actel, there<br />
are at the moment no plans to take it off the market. This<br />
device type has been designed into many spacecraft, but<br />
only a few have been launched so far.<br />
B. Actel RT54SX16<br />
This FPGA is manufactured by Matsushita (MEC) in an<br />
antifuse metal-to-metal-gate 0.6 μm three-metal CMOS<br />
technology. This device type exists in a 32 kgate version as<br />
well. However, these device types became obsolete before<br />
they came out on the market, because MEC decided to close<br />
down the 0.6 μm line.<br />
The SEU behaviour of this device is very similar to the<br />
A14100A. The large asymmetry in the flip-flop sensitivity<br />
between flips from logic "zero" to logic "one" and<br />
the reverse could be observed here as well. The total dose<br />
tolerance for this type is around 50 krad(Si), compared to<br />
that of the A14100A, which is only around 10-15 krad(Si). There<br />
are large differences in total dose tolerance between<br />
different production lots of these types.<br />
Critical functions in space applications must use triple<br />
module redundancy to mitigate SEU. This consumes a large<br />
portion of the device, and the cost per bit becomes quite<br />
high.<br />
Figure 2: Heavy ion data on the Actel RT54SX16 R-module<br />
(serial numbers 3 and 4, for both 1-0 and 0-1 flips, with NASA<br />
data for comparison), as cross section versus LET (MeV/mg/cm2).<br />
The data for the A14100A are shown as dashed curves.<br />
V. SRAM FPGA TECHNOLOGY<br />
The Xilinx Virtex FPGA is an SRAM based device that<br />
supports a wide range of configurable gates from 50k to 1M.<br />
It is fabricated on thin-epitaxial silicon wafers using the<br />
commercial mask set and the Xilinx 0.25 µm CMOS process<br />
with 5 metal layers. SEU risks dominate in the use of this<br />
technology for most applications. In particular, the<br />
reprogrammable nature of the device presents a new<br />
sensitivity due to the configuration bitstream. The function<br />
of the device is determined when the bitstream is<br />
downloaded to the device. Changing the bitstream changes<br />
the design’s function. While this provides the benefits of<br />
adaptability, it is also an upset risk. A device configuration<br />
upset may result in a functional upset. User logic can also<br />
upset in the same fashion seen in fixed logic devices. These<br />
two upset domains are referred to as configuration upsets<br />
and user-logic upsets. Two features of the Virtex<br />
architecture can help overcome upset problems. The first is<br />
that the configuration bitstream can be read back from the<br />
part while in operation, allowing continuous monitoring for<br />
an upset in the configuration and the part supports partial<br />
reconfiguration, which allows for real-time SEU correction.<br />
Secondly, Triple Module Redundancy (TMR) can be<br />
implemented in order to filter out SEU effects.<br />
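A majority voter is the core of the TMR scheme mentioned above: three copies of the logic run in parallel and a 2-of-3 vote masks any single upset. A minimal sketch:

```python
# Bitwise 2-of-3 majority vote over three redundant copies of a signal.

def vote(a, b, c):
    return (a & b) | (a & c) | (b & c)

good = 0b1011
upset = 0b0011          # one copy corrupted by an SEU
voted = vote(good, good, upset)   # the two good copies outvote the upset one
```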
VI. TEST METHODS FOR SRAM FPGA<br />
A. SRAM Bitstream Readback<br />
On the test board described above, a configuration<br />
controller chip on the DUT board controls a PROM and the<br />
configuration ports of the DUT. A program command<br />
can be sent to the DUT, which clears its configuration<br />
memory and starts an automatic re-configuration of the<br />
DUT from the PROM. During the test of the DUT, the<br />
configuration controller is continuously scrubbing the DUT<br />
configuration memory with new configuration data from the<br />
PROMs. A schematic drawing of the test board is shown in<br />
Fig. 3.<br />
Figure 3: Schematic drawing of the DUT board with the<br />
configuration interface for the Virtex device<br />
All data from the PROMs to the DUT is transferred<br />
through the parallel SelectMAP interface, which supports<br />
the partial configuration feature, making it possible to<br />
continuously scrub the device with new configuration data<br />
during operation.<br />
B. Error Separation<br />
Errors could originate from SEU in registers of the<br />
device, SEU in the configuration data causing functional<br />
errors in parts of the device and from errors in control<br />
registers of the device causing global functional errors. The<br />
analysed data errors are separated into three different<br />
domains, SEU in registers, SEU in configuration data, and<br />
SEU in device control registers.<br />
An SEU in a register is corrected when new data is loaded<br />
into the DUT; the error will not persist into the next test cycle.<br />
An SEU in the configuration data is permanent until<br />
the device is scrubbed with new configuration data. The<br />
SEU causes an error in only part of the device and could, for<br />
example, corrupt the function of one of the shift registers in<br />
the DUT. This means that the shift register will remain<br />
non-functional until the configuration data is corrected with<br />
new data.<br />
The control register "POR" controls the initialisation<br />
sequence of the device when it powers up. An SEU in this<br />
register could change the state of the whole device by initiating<br />
a complete clearing of the configuration memory. This type<br />
of error is detected when all shift registers fail<br />
at the same time.<br />
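The three-way error separation above can be sketched as a small classifier over the observed signatures: a register SEU vanishes on the next cycle, a configuration SEU persists in one shift register until the next scrub, and a POR/control-register SEU takes out all registers at once. The function and its arguments are illustrative, not part of the actual test software.

```python
# Classify an observed error by its signature, following the three domains
# described in the text.

def classify(failing_registers, total_registers, repeats_next_cycle):
    if len(failing_registers) == total_registers:
        return "control-register SEU (global)"
    if repeats_next_cycle:
        return "configuration SEU (persists until scrub)"
    return "register SEU (transient)"

# One register wrong once -> transient register upset.
kind = classify([3], 14, repeats_next_cycle=False)
```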
C. DUT Designs<br />
Two design methods were tested for comparison, TMR<br />
and non-TMR designs. Both designs have the same basic<br />
functionality. The TMR version uses the Triple Module<br />
Redundancy design techniques that Xilinx recommends for<br />
use with the Virtex FPGA [3]. The non-TMR design is a<br />
standard design used for Actel antifuse as well.<br />
The non-TMR design, schematically shown in Figure 4,<br />
implements 14 pipeline shift registers of 144 stages each,<br />
plus a small self-test circuit.<br />
The TMR design, schematically shown in figure 5,<br />
implements a functionally equivalent circuit as the non-<br />
TMR design but with full internal triple redundancy. The<br />
outputs of the TMR design use triple tri-state drivers to<br />
filter data errors from the output.
Figure 4 Schematic drawing of non-TMR DUT design<br />
D. Other Test Considerations<br />
An SEU in configuration data causing a functional error<br />
is corrected when new configuration data is written to the<br />
DUT. To be able to detect all of these errors, the DUT must<br />
be continuously tested. Since the DUT is paused in our<br />
tests, we will not see all of these errors. Therefore we have to<br />
estimate the fraction of errors that we detect (the detection<br />
factor).<br />
Two different pause times (the time the DUT is not<br />
clocked between read/write of data) were used during the<br />
tests: 223 ms and 4 ms. Testing of the non-TMR design mostly<br />
used the long pause time, since the flow of error data was<br />
otherwise too high.<br />
The test system allows selecting the scrub time between<br />
10.38 ms, 22.93 ms and 166 ms. The longer scrub times<br />
were only used in the first test runs for calibration<br />
purposes.<br />
VII. SRAM TEST RESULTS<br />
Each test was performed with a variety of atomic species<br />
up to a fluence of 1e+6 ions/cm2, or until either one of the<br />
shift registers was permanently disabled by a "Persistent"<br />
error or all 14 shift registers were eliminated by a "SEFI"<br />
error. With such an error in a shift register no data came out<br />
and the registers could not be tested. The fluence is calculated<br />
from the total fluence of the test and the mean value until<br />
each Persistent or SEFI type error. In this way the fluence<br />
over which the device is actually under test is obtained.<br />
Figure 5 Schematic drawing of TMR DUT design<br />
A. Configuration-induced Error Types<br />
Errors that are caused by SEU in the configuration are<br />
quantified by observing the following signatures in the test<br />
data. The results are shown in Figure 6.<br />
1) Routing<br />
An SEU in the configuration logic (routing bits and<br />
lookup tables) may cause errors in the configured function<br />
of the operational device. This gives errors in the shift<br />
registers that are permanent until the next time the device is<br />
scrubbed with new configuration data.<br />
2) Persistent<br />
A persistent error is a permanent error that is not<br />
corrected by new configuration data. The device needs to<br />
be reset to correct this error. This is the result of an SEU in<br />
"weak keeper" circuits used in the Virtex architecture when<br />
logical constants are implied in the configured design, such<br />
as unused clock-enable signals for registers.<br />
3) SelfTest<br />
SelfTest errors are of the same type as routing errors, but<br />
instead of interrupting a shift register they interrupt the<br />
function of the SelfTest module.<br />
4) SEFI type<br />
The function of the whole device is interrupted by one hit<br />
and all shift register data is lost. The device requires a reset<br />
and complete reconfiguration for correction.<br />
These errors are tested in a dynamic way, but due to<br />
limitations of the test system the device is at rest between<br />
clockings of data. Since the device is continuously scrubbed<br />
with new configuration data, a significant number of<br />
routing and SelfTest errors will not be seen<br />
at read-out (corrected before read-out). The detection factor<br />
corrects the results for this.<br />
5) Non-TMR design<br />
At a LET of 2.97 MeV/mg/cm2, each configuration-type<br />
error was observed. Cross sections are presented in Fig. 6.<br />
The presented data for all configuration-type errors are<br />
corrected with an estimated "detection factor". With a<br />
scrub time of 10 ms and a pause time of 4 ms, the detection<br />
factor is estimated to be 0.6; with the longer pause time<br />
of 223 ms, it is estimated to be 0.05.<br />
The cross section is specific to this design. To predict the<br />
cross section for a 100% utilised device, these cross<br />
sections must be scaled by the utilisation factor of this<br />
design (about 32% for the routing errors and perhaps 5% for<br />
the SelfTest module).<br />
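The two corrections above can be written out explicitly. The numbers here are illustrative, and scaling to a fully utilised device is shown as division by the design's utilisation, which is one reading of the scaling the text describes.

```python
# Cross-section corrections: detection factor for errors missed between
# scrubs, and utilisation scaling to estimate a fully utilised device.

def corrected_cross_section(errors, fluence, detection_factor):
    """Device cross section (cm^2) corrected for undetected errors."""
    return errors / (fluence * detection_factor)

def full_utilisation_estimate(sigma_design, utilisation):
    """Scale a design-specific cross section to a 100% utilised device."""
    return sigma_design / utilisation

sigma = corrected_cross_section(errors=30, fluence=1e6, detection_factor=0.6)
sigma_full = full_utilisation_estimate(sigma, utilisation=0.32)
```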
Figure 6: Configuration errors (Routing, Persistent, SelfTest<br />
module and SEFI type) for the non-TMR design, as cross<br />
section per device versus LET (MeV/mg/cm2). The cross<br />
sections are specific to this design. For the non-TMR design<br />
one SEFI type error was recorded, at a LET of 14.1 MeV/mg/cm2.<br />
This is likely due to the very low fluence required for the test<br />
to finish. Arrows indicate tests without any errors.<br />
6) TMR design<br />
The SEFI type error was the only observed error type;<br />
the Persistent error was not observed. The SEFI was<br />
observed at a LET of 5.85 MeV/mg/cm2. This demonstrates<br />
that the TMR design method effectively eliminated all<br />
non-SEFI configuration-induced errors.<br />
The "SEFI type" error is believed to be an SEU in the<br />
POR control register, clearing the whole device of its<br />
configuration data. All I/Os are tri-stated in this state, and<br />
this was detected in the read-out data, which slowly went<br />
from reading high to reading low over a few test cycles.<br />
Figure 7: Configuration errors (Routing, Persistent, SelfTest<br />
module and SEFI type) for the TMR design, as cross section<br />
per device versus LET (MeV/mg/cm2). Except for SEFI errors,<br />
only one "routing" error was recorded, at a LET of 14.1<br />
MeV/mg/cm2. Arrows indicate tests without any upsets.<br />
In one test run a "Routing" error was observed. The flux<br />
was ~1333 ions/cm2/s and the device was scrubbed with<br />
new configuration data every 10.38 ms. This gives a<br />
flux/scrub-cycle ratio of ~13 ions/cm2/scrub.<br />
Xilinx has reported that the number of accumulated<br />
configuration bit upsets needed to cause a functional failure in a<br />
TMR design ranges between 2 and 30 bits. It is therefore<br />
possible that enough errors in the configuration logic<br />
accumulated before the next scrub cycle to cause<br />
the error. The observed errors are therefore most likely an<br />
artefact of the flux/scrub-cycle ratio.<br />
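Re-deriving the flux/scrub-cycle number quoted above from the flux and scrub period:

```python
# Ions accumulated per scrub cycle, from the figures in the text.
flux = 1333.0            # ions/cm^2/s
scrub_period = 0.01038   # s (10.38 ms)
ions_per_scrub = flux * scrub_period   # close to the ~13 ions/cm^2 quoted
```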
B. Register Error Types<br />
These errors are tested in a static fashion. Data is<br />
clocked into the shift registers, held for a pre-set time, and<br />
then clocked out for comparison. The procedure is repeated<br />
constantly during the test run. The data are analysed for<br />
single bit errors and categorised into the following error<br />
types:<br />
FF(0-1) Read ‘1’ from flip-flop registers when ‘0’ is<br />
expected.<br />
FF(1-0) Read ‘0’ from flip-flop registers when ‘1’ is<br />
expected.<br />
FF A summation of all FF errors (above) read from the shift<br />
registers.<br />
DataSwap: an error type consisting of two errors in<br />
adjacent registers in the register chain. First a '0'<br />
was read when '1' was expected, and in the next register a '1'<br />
was read when a '0' was expected. This error was isolated to<br />
two registers in the whole chain of 144 registers and did not<br />
occur again in the next test cycle.<br />
One possible explanation for this error type is that a<br />
routing-bit error was being corrected just as test data was<br />
being read out for comparison.<br />
1) Non-TMR design<br />
FF errors were observed at LETs greater than 2.97<br />
MeV/mg/cm2, with a saturation cross section of ~1e-6 cm2.<br />
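A per-bit cross section like the one quoted is simply the error count divided by fluence and bit count. The error count below is an assumed example chosen to land exactly on 1e-6 cm2/bit; the bit count follows from this design's 14 shift registers of 144 stages each.

```python
# Per-bit SEU cross section: sigma = N_errors / (fluence * n_bits).

def per_bit_cross_section(n_errors, fluence_ions_cm2, n_bits):
    return n_errors / (fluence_ions_cm2 * n_bits)

bits = 14 * 144   # 2016 flip-flops in the shift-register design
sigma = per_bit_cross_section(n_errors=2016, fluence_ions_cm2=1e6, n_bits=bits)
```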
Figure 8: Register errors (FF(0-1), FF(1-0), FF and DataSwap)<br />
for the non-TMR design, as cross section per bit versus LET<br />
(MeV/mg/cm2). Arrows indicate tests without any upsets.<br />
2) TMR design<br />
Only one FF error was observed, at a LET of 14.1<br />
MeV/mg/cm2, with an estimated cross section of ~5e-10 cm2.<br />
No other FF errors were recorded in the absence of a SEFI type<br />
error. This error is considered to be a result of the<br />
flux/scrub-cycle ratio mentioned previously.<br />
Figure 10: SEFI errors for the non-TMR and TMR designs, as<br />
cross section per device versus LET (MeV/mg/cm2).<br />
The non-TMR tests were performed to a lower fluence than the<br />
TMR tests; therefore fewer SEFI errors were observed for the non-<br />
TMR design. In principle the SEFI error cross section<br />
should be the same for the two designs. Under the<br />
assumption that the control registers have the same heavy ion<br />
sensitivity as the user registers, the number of fatal-failure<br />
control bits of the device seems to be around ten, and the<br />
LET threshold of the SEFI errors would be<br />
around 5 MeV/mg/cm2.<br />
VIII PROTON INDUCED SEU<br />
The main energy-loss mechanism leading to single<br />
event phenomena is inelastic collisions between<br />
incident protons and atoms in the substrate. The recoiling<br />
nucleus is thus the particle that causes the SEU. The<br />
final mechanism for proton induced SEU is therefore very<br />
similar to that envisaged for heavy ions.<br />
Fig 11 below shows proton data from the Actel A14100A and<br />
Xilinx Virtex. The cross sections for proton SEU are about<br />
eight orders of magnitude lower than those observed for heavy ions. The<br />
low threshold observed for the Xilinx device manifests itself in its<br />
sensitivity to low-energy protons. For the A14100A, it is likely<br />
that only the flip of logic “0” to logic “1” is observed in the<br />
proton SEU. Circuits having a threshold higher than LET =<br />
15 MeV/mg/cm2 are not sensitive to proton upset.<br />
[Figure 11 plot: proton upset cross section from 1e-18 to 1e-10 versus proton energy from 0 to 400 MeV; series: Virtex, XQVR300, A14100A.]<br />
Fig 11 Proton upsets as a function of proton energy for<br />
Actel A14100A and Xilinx Virtex XQVR300. For Actel<br />
A14100A, no proton upsets have been observed at<br />
energies below 150 MeV. The results for Xilinx are taken<br />
from Ref [3].<br />
IX SINGLE EVENT TRANSIENTS<br />
In addition to “conventional” SEUs, charged particles can<br />
also induce transients in combinatorial logic, in global clock<br />
lines and in global control lines. These single event<br />
transients (SET) have only minor effects on technologies<br />
around 0.8-0.5 μm since the speed of these circuits is<br />
insufficient to propagate the 100 to 200 ps wide SET pulse<br />
any appreciable distance. However, as smaller feature size<br />
technologies are being used in spaceborne systems, these<br />
transients become indistinguishable from normal circuit<br />
signals.<br />
If a charged particle strike occurs within the combinatorial<br />
logic block of a sequential circuit, and the logic is fast<br />
enough to propagate the induced transient, then the SET<br />
will eventually appear at the input of a data latch where it may<br />
be interpreted as a valid signal. Similar invalid transient data<br />
might appear at the outputs of lookup tables and on routing<br />
lines due to SETs generated in the programming elements.<br />
While conventional SEU error rates are independent of<br />
the chip clock frequency, SET rates increase in direct proportion<br />
to the operating frequency. Smaller feature size results in<br />
smaller gate delays that permit circuits to be operated at<br />
higher clock frequencies. For typical FPGA designs, SET-<br />
induced error rates may actually exceed the SEU rate of<br />
unhardened latches as clock speeds approach 100 MHz for<br />
CMOS designs.<br />
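A minimal numerical sketch of this scaling, with made-up rate coefficients; the crossover near 100 MHz is illustrative, not measured:

```python
# Toy model: SEU rates are frequency independent, while SET-induced error
# rates grow linearly with clock frequency. Coefficients are hypothetical.
SEU_RATE = 1.0e-7          # errors/bit/s, assumed frequency-independent
SET_RATE_PER_HZ = 1.0e-15  # errors/bit/s per Hz of clock, assumed

def total_error_rate(f_clock_hz):
    """SEU contribution plus frequency-proportional SET contribution."""
    return SEU_RATE + SET_RATE_PER_HZ * f_clock_hz

# Crossover where SET errors equal conventional SEUs:
f_cross = SEU_RATE / SET_RATE_PER_HZ   # 100 MHz with these coefficients
print(f_cross)
```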
Fig.12 Critical transient width vs feature size for<br />
unattenuated propagation. The figure is taken from Ref [4].<br />
Figure 12 illustrates the critical transient pulse width,<br />
as a function of technology feature size, needed to propagate<br />
without attenuation through any number of gates [4]. At<br />
pulse widths smaller than the critical width, the inherent<br />
inertial delay of the gate will cause the transient to be<br />
attenuated, and the pulse will die out after passing a few gates.<br />
At pulse widths equal to or larger than the critical width,<br />
the transient will propagate through the gate just as though<br />
it were a normal circuit signal.<br />
A. RT54SX-S Details<br />
The architecture of Actel RT54SX-S devices is an<br />
enhanced version of Actel SX-A device architecture. The<br />
RT54SX-S devices are manufactured using a 0.25µm<br />
technology at the Matsushita (MEC) facility. The RT54SX-S<br />
family incorporates up to four layers of metal interconnects.<br />
To achieve good SEU performance, each register cell (R-<br />
cell) in the RT54SX-S is built with Triple Module<br />
Redundancy (TMR). The R-cells in the SX-S device consist<br />
of three master and three slave latches gated by opposite<br />
edges of the clock. The feedback path of each of the three<br />
latches is voted with the outputs of the other two latches. If<br />
one of the three latches is struck by an ion and starts to<br />
change state, the voting with the other two latches prevents<br />
the change from feeding back and permanently latching.<br />
With this solution the latches are continuously corrected,<br />
and theoretically the only possibility for a SEU in an R-<br />
cell is to have two latches hit by two ions within the<br />
recovery time of the transient created by the ions.<br />
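The voting scheme can be sketched in a few lines; this is an illustrative software model of the 2-of-3 majority vote, not the actual circuit:

```python
def majority(a, b, c):
    """Bitwise 2-of-3 majority, as used in the voted feedback path."""
    return (a & b) | (a & c) | (b & c)

latches = [1, 1, 1]   # three redundant latches holding logic 1
latches[0] = 0        # an ion strike flips one latch

# Each latch's feedback is replaced by the vote of all three, so the
# corrupted latch is pulled back to the majority value:
latches = [majority(*latches)] * 3
print(latches)  # -> [1, 1, 1]
```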
B. SEU Results of RT54SX32-S<br />
No SEU in the R-register cells has been observed<br />
under static conditions up to LET = 64.5 MeV/mg/cm2.<br />
Irradiation with heavy ions under 5 MHz dynamic<br />
conditions resulted in errors which had the same signature<br />
as proper SEUs. When the FPGA operating frequency was<br />
lowered by a factor of 4 to 1.25 MHz, no errors<br />
could be observed. From the static condition test it was<br />
concluded that the R-cells do not upset. Thus, the errors<br />
observed in 5 MHz dynamic mode are very likely due to<br />
transient effects (SET) which are clocked through to the<br />
output.<br />
The duration and magnitude of the transients are,<br />
however, technology and circuit design dependent. In the<br />
present experimental set-up it is not possible to isolate the<br />
error data to certain areas or functions of the device.<br />
[Figure 13 plot: cross section (/bit/cm2) from 1e-12 to 1e-8 versus LET (MeV/mg/cm2) from 0 to 70 for RT54SX32S; series: S/N#01 static, S/N#01 5 MHz, S/N#02 static low, S/N#01 1 MHz.]<br />
Fig 13 Single Event Transient cross section as a<br />
function of LET value for RT54SX32S. Errors have only<br />
been detected in 5 MHz dynamic test mode. The data<br />
points with arrows indicate fluence for test run without<br />
errors. The error bars are the standard deviation<br />
indicating counting statistics for each test run. No<br />
conventional SEU can be detected for this device type.<br />
X. CONCLUSION<br />
Test results presented in this paper are all on COTS-type<br />
FPGAs. The use of COTS in a radiation environment<br />
requires, however, that testing to the needed reliability<br />
requirements is performed. A vast majority of complex<br />
ICs will not pass the minimum requirement of being latch-up<br />
free in a charged particle environment. Once the single<br />
event upset problems have been characterised, there are<br />
techniques to mitigate the SEU problems. Such knowledge<br />
helps in selecting between hardened technologies and speed<br />
and area trade-offs in a softer technology. With decreasing<br />
feature size, the single event transients will become more<br />
important. For frequencies above 100 MHz, the probabilities<br />
are of the same order as for conventional upsets. So far no<br />
experimental data have been published which show the<br />
transient probabilities at proton energies for complex CMOS<br />
technologies.<br />
XI REFERENCES<br />
[1] http://klabs.org/fpgas.htm, https://escies.org/<br />
[2] Earl Fuller, Michael Caffrey, Anthony Salazar, Carl<br />
Carmichael, Joe Fabula, Radiation Characterization,<br />
and SEU Mitigation, of the Virtex FPGA for Space<br />
based Reconfigurable Computing, NSREC 2000,<br />
October 2000.<br />
[3] C. Carmichael, E. Fuller, J. Fabula, Fernanda De Lima,<br />
Proton Testing of SEU Mitigation Methods for the<br />
Virtex FPGA, Xilinx Report.<br />
[4] D.G.Mavis and P.H.Eaton, Temporally Redundant<br />
Latch for Preventing Single Event Disruptions in<br />
Sequential IC, Technical Report P8111.29, Mission<br />
Research Corporation, 1998.
Electronics Commissioning Experience at HERA-B<br />
Bernhard Schwingenheuer<br />
Max-Planck-Institut fur Kernphysik, 69117 Heidelberg, Germany<br />
Bernhard.Schwingenheuer@mpi-hd.mpg.de<br />
Abstract<br />
The readout of Hera-B has been unified to a large extent.<br />
Only the HELIX and ASD8 chips with corresponding<br />
readout electronics were used and the data acquisition<br />
is constructed entirely with Sharc DSPs. This approach<br />
minimized the work load and was successful. The feedback<br />
of the ASD8 digital outputs to the analog inputs caused<br />
oscillations, and the efforts to solve this problem still continued<br />
in the commissioning phase. The electronics of the<br />
sophisticated hardware trigger was commissioned successfully,<br />
while some problems remain with the self-made 900<br />
Mbit optical data transmission.<br />
I Introduction and Overview<br />
Hera-B is a fixed-target experiment at the HERA storage<br />
ring at DESY, Hamburg [1]. Protons interact with thin<br />
target wires of different materials. The wires and hence the<br />
rate of interactions are steerable. A silicon vertex detector<br />
(VDS) is located downstream of the target. A dipole magnet<br />
with tracking chambers inside and after the magnetic<br />
field follows. Because of the anticipated particle flux and<br />
radiation damage the tracking chambers are divided into<br />
an inner part with high track density (ITR, micro strip gas<br />
chambers with gas electron multiplier foils) and an outer<br />
part (OTR, honeycomb drift chambers). Kaon identification<br />
is performed with a ring imaging Cherenkov detector<br />
(RICH). An electromagnetic calorimeter (ECAL) and<br />
a muon detector (MUON) allow for lepton identification.<br />
A special set of three layers of tracking chambers (HighPt)<br />
is foreseen inside the magnetic field. Their signals allow<br />
fast triggering on tracks with large transverse momentum.<br />
Table 1 gives an overview of the applied detector technologies,<br />
readout chips, front-end technologies and whether<br />
the subdetector is used by the hardware trigger (FLT).<br />
With the exception of the ECAL only two readout chips<br />
were applied (HELIX and ASD8) and consequently only<br />
two versions of front-end electronics had to be developed.1<br />
The TDCs for the ASD8 digitization are on the detector,<br />
while the data of the HELIX chips are digitized in the<br />
trailer. The latter is always accessible in a low radiation<br />
area and houses in addition all components of the data acquisition<br />
(sect. VII) and of the hardware trigger (sect. VI).<br />
1 The TDC chip could be operated in a binary readout mode for<br />
MUON, RICH and HighPt.<br />
Hera-B was largely assembled by the end of 1999 and<br />
commissioning took place in 2000. The goal of measuring<br />
CP violation in the neutral B meson system was not<br />
reached. The setbacks due to problems with the electronics<br />
and some of the experiences gained during the construction<br />
are described in this article. For a detailed description<br />
of the electronics components themselves the reader is referred<br />
to the references.<br />
From September 2000 to July 2001 the accelerator was<br />
shut down for a luminosity upgrade for the collider experiments<br />
H1 and ZEUS. Hera-B has used this time to solve<br />
most of the identified problems.<br />
II Vertex Detector and Inner Tracker<br />
The Vertex Detector (VDS) and the Inner Tracker (ITR)<br />
both use the HELIX readout chip [2]. Hence most of the<br />
electronics, like the digital control signal generation (including<br />
its optical transmission to the detector), the analog<br />
optical data transmission to the trailer and the digitization<br />
of the data, is common to both systems. The low voltage<br />
power supplies and the technique of programming the HE-<br />
LIX chips differed.<br />
The VDS was fully commissioned by 2000 [3]. For the<br />
electronics an important feature of the HELIX was used<br />
intensively for monitoring: the analog data is stored internally<br />
in a pipeline, and upon a trigger the data together<br />
with the pipeline location is available at the output. By<br />
comparing this location from all chips the synchronization<br />
of the VDS can be guaranteed.<br />
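The synchronization check described above amounts to comparing the reported pipeline location across all chips; a minimal sketch, where the data layout and function name are hypothetical:

```python
def vds_in_sync(pipeline_columns):
    """True if every HELIX chip reports the same pipeline location
    for the same trigger."""
    return len(set(pipeline_columns)) == 1

print(vds_in_sync([57, 57, 57, 57]))  # -> True
print(vds_in_sync([57, 57, 12, 57]))  # one chip asynchronous -> False
```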
In 1998 a first version of the optical transmission for the<br />
digital control signals was installed using commercial components.<br />
The receivers were located in a low radiation area<br />
under the magnet. Particles hitting the receiver's pin diode<br />
generated spurious digital signals because of a low switching<br />
threshold. Consequently, within minutes of operation parts<br />
of the VDS became asynchronous. The self-made receiver<br />
for the analog optical signals in conjunction with a comparator<br />
did not show this problem and is used instead.<br />
During the 2000 operation several HELIX chips ceased<br />
functioning correctly. The fraction increased with time<br />
from about 1% at the beginning to 4% at the end. Most of
Table 1: Characterization of the readout electronics for all subdetectors.<br />
subdetector technology readout chip chn digitization data transm. to trailer used by FLT<br />
VDS silicon microstrip HELIX 150k FADC analog optical no<br />
ITR MSGC with GEM HELIX 130k FADC analog optical yes<br />
OTR honeycomb drift ASD8 120k TDC LVDS digital yes<br />
RICH Cherenkov +PMT ASD8 28k binary(TDC) LVDS digital no<br />
MUON tube,pad,pixel ASD8 30k binary(TDC) LVDS digital yes<br />
HighPt pad,pixel ASD8 25k binary(TDC) LVDS digital yes<br />
ECAL shashlik PMT 8k ADC analog coaxial yes<br />
these chips were however not broken. Several procedures<br />
were tried to revive them (changing the phase between signals,<br />
turning them off/on) with varying success. Because<br />
of the redundancy of layers in the VDS these losses did<br />
not seriously affect the tracking efficiency. A clear understanding<br />
of the problem has not yet been reached, but recently<br />
some problems in the download software were found which<br />
explain some observations.<br />
The Inner Tracker [4] was delayed by two years because<br />
of radiation hardness problems of the MSGC technology.<br />
The 2000 run was thus its first commissioning period. The<br />
initial grounding scheme asked for one central point per<br />
chamber as a "reference ground" to avoid ground loops.<br />
The backside of the MSGC, the PCBs with the HELIX<br />
chips and other boards had a ground connection to this<br />
point. Further optimization studies showed that a massive<br />
direct ground connection between the MSGC and the HE-<br />
LIX PCB, using a large-surface ground bar with short<br />
connections, is much more favorable. Especially when the<br />
prompt digital trigger outputs of the HELIX (open collector)<br />
are activated, the new grounding substantially reduces crosstalk<br />
of the digital outputs to the analog inputs.<br />
Modifications on the HELIX and a reduction of the<br />
collector pull-up voltage reduce the crosstalk further.<br />
The low voltage power supplies have a "power factor correction"<br />
(PFC) circuit to ensure that the phase between<br />
current and voltage is not distorted by the device under<br />
load. While these power supplies had been operated in<br />
the lab routinely for a long time, the PFC broke repeatedly<br />
during the 2000 operation in the experiment. It is known<br />
from other HERA experiments that the 240 Volt power<br />
lines have spikes in the experimental halls close to the accelerator.<br />
It was therefore advised to add filters and ferrite<br />
rings to reduce spikes in the power lines. Whether this<br />
cures the PFC failures is not yet known but seems likely.<br />
III The ASD8 Commissioning<br />
The ASD8 [5] is used by the gaseous detectors (Outer<br />
Tracker, Muon detector and HighPt detector) and by the<br />
RICH detector (for PMT signal readout). For each of its<br />
8 channels it consists of a differential input amplifier,<br />
a two-stage shaper, a discriminator with externally pro-<br />
grammable threshold and an open-collector differential output<br />
stage. The shaping time is below 10 nsec.<br />
Figure 1 shows the OTR on-detector electronic components<br />
as an example [6]. The anode wire is at high voltage<br />
and connected via a coupling capacitor to one ASD8 input.<br />
The second ASD8 differential input is connected to<br />
the cathode, i.e. to the ground of the chamber. The connection<br />
between the analog ground of the ASD8 board and<br />
the chamber ground was found to be very important for<br />
noise reduction; in particular, the copper-beryllium springs<br />
which hold the board at the chamber must make good contact.<br />
Another problematic item is the crosstalk (feedback) between<br />
the digital output of the ASD8 and its analog input.<br />
The size of this effect depends on the exact configuration<br />
and the number of the cables and hence cannot be easily<br />
estimated in the lab. Operation of the ASD8 at a threshold<br />
with large hit efficiency is however impossible without<br />
special efforts.<br />
The subdetectors have followed different strategies to reduce<br />
the crosstalk. All groups use shielding over the first<br />
meters of the twisted-pair cables from the ASD8 to the<br />
TDC. In most configurations a good connection of the cable<br />
shield to the digital ground of the ASD8 is sufficient,<br />
while for some HighPt chambers this method is not adequate.<br />
Instead the connection is made with a 50 Ω resistor.<br />
A possible explanation of this behavior is that the phase<br />
of the feedback to the analog ASD8 input is changed by<br />
the resistor, and thus constructive interference and oscillations<br />
are avoided. In the case of the Muon detector [7] large<br />
efforts went into the routing of the cables, and spacers were<br />
added to avoid crosstalk from one cable to the next.<br />
All chambers could be operated in 2000 with hit efficiencies<br />
well above 90%, but for the Muon pad chambers the<br />
above mentioned efforts were not sufficient. Each Muon<br />
pad is connected to a preamplifier mounted directly on the<br />
chamber. The signal is then driven by a differential ECL<br />
line driver via a 3 m twisted-pair cable to the inputs of<br />
the ASD8. In the original design the backside of the pad is<br />
floating. Connecting this plane to ground reduces the noise<br />
substantially, but the signal size is deteriorated as well such<br />
that the hit efficiencies are limited to around 90%. Since<br />
the pad chambers were too noisy (oscillating) without this<br />
modification, almost all modules were modified in the construction<br />
phase. The unmodified ones as well as some others<br />
with poor grounding (about 20% of the channels) were<br />
noisy in 2000.<br />
[Figure 1 schematic: chamber (anode wire, HV board) inside the gas box; ASD board with ASD8 chip and LV supply; test pulse distribution board; TDC with outputs to DSP and FLT.]<br />
Figure 1: Schematic of the on-detector electronics of the OTR.<br />
Recently test beam measurements were performed to<br />
find a reliable solution. The most promising one is to exchange<br />
the preamp, and some modules have been modified<br />
in Hera-B.<br />
In the 2000 commissioning an additional source of feedback<br />
to the analog input of the ASD8 was observed. The<br />
TDC board has digital outputs for the connection to the<br />
hardware trigger (FLT in figure 1), and when those cables<br />
were plugged in, ASD8 oscillations were observed. In<br />
this case the crosstalk could occur via spikes on the TDC<br />
ground.2 Ferrite rings were added on the cables and the<br />
driver strength of the TDC signals was reduced. Test measurements<br />
indicate sufficient suppression of the feedback<br />
after these modifications.<br />
IV OTR High Voltage Channels Loss<br />
During the 2000 run about 0.5% of the OTR anode wires<br />
developed a "short" and had to be disconnected. This<br />
corresponds to a rate of one per 7 hours. Because of the<br />
grouping of HV channels about 8<br />
The HERA luminosity shutdown was used to disassemble<br />
all chambers and the cause for the shorts was identified:<br />
remnants from the soldering of filter capacitors on the<br />
backside of the HV board became conductive with time.<br />
This problem was not observed in the pre-series production<br />
since the soldering technique applied at that time was<br />
different. In addition the time constant of this problem is<br />
50000 hours and it would have been difficult to discover<br />
the failure with the pre-series boards in any case. By now<br />
all 14000 affected capacitors have been replaced and the<br />
losses are an order of magnitude smaller.<br />
2 The pull-up resistors for the open collector outputs of the ASD8<br />
are located on the TDC board with a pull-up voltage of 1.25 Volt.<br />
The 2 mA current per channel flows through one of the wires of the<br />
twisted-pair cable to the ASD8 and back via the ground connection<br />
of the power supplies to the TDC. Obviously any disturbance on the<br />
ground level and/or pull-up voltage will couple to the ASD8. An LVDS<br />
driver on the ASD8 board would have been a more robust solution,<br />
but was impossible to implement at the time the problem was discovered.<br />
V ECAL Noise<br />
The signals from the electromagnetic calorimeter are<br />
transmitted with (8000) coaxial cables to the ADC boards<br />
in the trailer. The calorimeter is floating with respect to its<br />
frame in the experimental area and the ground is defined<br />
via the connections of the cable shielding. Each signal is<br />
terminated at both ends with 50 Ω to avoid reflections.<br />
During last year's running an excess of noise was observed,<br />
corresponding to a voltage of a few mV at the input<br />
of the ADCs. This noise limited the resolution of the ECAL<br />
but had little impact on the pretrigger performance. The<br />
origin of the noise was traced back to different ground levels<br />
of the readout crates in the trailer and hence different<br />
ground levels along a coaxial cable. Via the 50 Ω termination<br />
on the PMT side any ground bounce will couple<br />
to the signal line and hence create noise. All terminators<br />
on the PMT side were exchanged for 10 kΩ resistors, and<br />
first measurements indicate a sufficient suppression of the<br />
noise.<br />
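A back-of-the-envelope model of the fix, treating the termination as a simple two-resistor divider for the ground-level difference (an assumption; the real cable network is more complex):

```python
def coupled_noise(v_ground_bounce, r_pmt_side, r_receiving=50.0):
    """Fraction of a ground-level difference coupled onto the signal line,
    modeled as a resistive divider (illustrative only)."""
    return v_ground_bounce * r_receiving / (r_receiving + r_pmt_side)

v = 5e-3  # a few mV of ground bounce
print(coupled_noise(v, 50.0))   # ~2.5 mV with the original 50 Ohm terminator
print(coupled_noise(v, 10e3))   # ~0.025 mV after the 10 kOhm exchange
```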
VI The Hardware Trigger<br />
The main thrust of Hera-B was to find CP violation<br />
in the decay channel B0 → J/ψ KS. Since the anticipated<br />
rate of 5 proton interactions every 96 nsec is large, a sophisticated<br />
hardware trigger (First Level Trigger) [8, 9] was<br />
designed to reduce the event rate to 50 kHz, a level which<br />
can be handled by the data acquisition and a PC farm for<br />
further processing.<br />
The basic idea is to detect both electrons or both muons<br />
from the J/ψ decay, calculate their momenta and the invariant<br />
mass. The search starts with a pre-trigger for electrons<br />
(looking for ECAL towers above threshold) and for<br />
muons (calculating coincidences in the pad chambers of the<br />
last two superlayers).<br />
The "work horse" of the trigger is the track finding unit<br />
(TFU). There are 72 TFUs in the entire system, typically<br />
10 per superlayer of the tracking chambers. The TFUs of<br />
one layer receive the hit information of the corresponding<br />
chambers for every bunch crossing. In addition they receive<br />
messages from the TFUs of the previous layer and<br />
transmit messages to the TFUs of the next layer. A message<br />
contains the current parameters of track candidates<br />
and their uncertainties. From the incoming message the<br />
TFU calculates a region-of-interest where the track should<br />
have passed through the superlayer and uses the information<br />
from the chamber to search for confirmation hits in<br />
three stereo views.3 If found, the track parameters will be<br />
updated and a new message will be sent to the next TFU<br />
layer. If no confirmation hit is found (in one or more<br />
stereo views) no output message will be generated.<br />
3 There are two layers of tracking chambers per stereo view and<br />
the OR is used in the trigger.<br />
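The message-passing scheme above can be sketched as follows; the data structures, names, and region-of-interest logic are heavily simplified illustrations, not the TFU firmware:

```python
def propagate(message, hits_by_view, roi_half_width=2.0):
    """Confirm a track-candidate message against three stereo views;
    return an updated message, or None if any view lacks a hit."""
    x = message["x"]  # predicted position at this superlayer
    for view_hits in hits_by_view:
        if not any(abs(h - x) <= roi_half_width for h in view_hits):
            return None               # no confirmation -> message dropped
    return {"x": x, "confirmed_layers": message["confirmed_layers"] + 1}

msg = {"x": 10.0, "confirmed_layers": 0}
print(propagate(msg, [[9.5], [10.3], [11.1]]))  # confirmed in all three views
print(propagate(msg, [[9.5], [10.3], [30.0]]))  # -> None (no hit in one view's RoI)
```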
The search direction is opposite to the particle direction<br />
and starts with the pre-triggers at the downstream end of<br />
the detector. It ends at the chamber closest to the magnet.<br />
At this point the track parameters are well determined and<br />
the momentum can be calculated (with a track parameter<br />
unit) assuming that the track comes from the target. For<br />
two tracks the invariant mass can be determined (with the<br />
trigger decision unit). The trigger hence consists of only<br />
3 different types of boards and the total processing time<br />
including the pre-triggers is less than 10 μsec. Its data input<br />
is 1 Tbit/sec from the tracking chambers, and at design<br />
rate about 500 million track candidates per second are followed<br />
through the hardware. The rate reduction should be<br />
at least 200. Hence the 10 MHz bunch crossing frequency<br />
is reduced to a trigger rate of 50 kHz.<br />
The TFU is the most complicated board of Hera-B<br />
(23000 solder pads). The hardware and firmware are designed<br />
such that software tests allow rigorous debugging of<br />
the entire board and the detailed identification of problems<br />
like bad solder points. These boards were tested for one<br />
week before they arrived at DESY and showed no problems.<br />
The large amount of data transmitted from the tracking<br />
chambers to the trigger (1 Tbit/sec) is realized with about<br />
1500 self-made 900 Mbit optical links [10]. At the time<br />
of the design there was no commercial solution<br />
at this speed available. Our design uses the Autobahn<br />
transceiver from Motorola to serialize a 20 MHz 32-bit wide<br />
input. The serial (differential PECL) Autobahn output is<br />
transmitted with a VCSEL from the experiment to the<br />
trailer. The optical receiver converts the light back into<br />
a serial (differential PECL) signal and a second Autobahn<br />
recovers the parallel data.<br />
The requirement of the hardware trigger is that close<br />
to 100% of these data links have to work perfectly, since a<br />
single missing hit causes inefficiencies of the trigger. Unfortunately<br />
about 5% of the links were periodically unstable,<br />
i.e. had a large bit error rate. Inefficiencies were nevertheless<br />
avoided since the TFU hardware could artificially set<br />
all hits to "1" for the identified links. The most relevant<br />
data link problems were due to instabilities of the VC-<br />
SEL (changes of the light output and poor eye pattern),<br />
mechanical problems with the ST-connectors and the fact<br />
that the duty cycle of the serial bit stream varies,4 which<br />
reduces the stability of the transmission line.<br />
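Since the Autobahn applies no line coding, the duty cycle of the serial stream simply tracks the occupancy of the input words, which is why it varies with detector activity. A quick check with hypothetical hit words:

```python
words = [0x00000001, 0x80000001, 0x00000000]  # 32-bit hit words, made up
ones = sum(bin(w).count("1") for w in words)
duty_cycle = ones / (32 * len(words))
print(duty_cycle)  # low occupancy -> strongly unbalanced serial stream
```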
Recently, remotely programmable DACs have been added<br />
to adjust the amplitude and offset of the VCSELs. Thus<br />
some of the problems should be solved. However the data<br />
transmission remains a worry for the next data taking.<br />
The muon pre-trigger [11] finds track seeds by calculating<br />
coincidences of the pad (and pixel) chamber signals of the<br />
last two superlayers. Since several pad modules were unstable<br />
and individual channels became noisy for some time<br />
during the operation, those channels contributed largely to<br />
the coincidence rate and had to be masked. The identification<br />
was relatively easy because of a built-in feature of<br />
the hardware: a small fraction of the coincidence messages<br />
were not only sent to the hardware trigger but also written<br />
to a VME-accessible register. The online software was<br />
hence able to locate those channels quickly and mask them<br />
online without stopping the data acquisition.<br />
4 The duty cycle is almost identical to the occupancy since the<br />
Autobahn simply serializes the input data stream.<br />
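The hot-channel identification can be pictured as a rate comparison on the sampled coincidence messages; the threshold and data layout below are invented for illustration:

```python
from collections import Counter

def hot_channels(sampled_channel_ids, factor=10.0):
    """Return channels whose sampled rate exceeds the typical
    (median) channel rate by more than `factor`."""
    counts = Counter(sampled_channel_ids)
    typical = sorted(counts.values())[len(counts) // 2]
    return {ch for ch, n in counts.items() if n > factor * typical}

samples = [1, 2, 3] * 5 + [7] * 200   # channel 7 has become noisy
print(hot_channels(samples))  # -> {7}
```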
The online masking is an involved task, since the updated<br />
mask has to be stored in a database and the information<br />
about the new version has to be distributed to all PCs of the<br />
Second Level Trigger farm. In total the online software<br />
consists of about 20 different processes running on 10 different<br />
computers. About 10 man years have been invested<br />
in the software (including offline monitoring), which almost<br />
equals the effort for building the hardware (15 man years).<br />
For the ECAL pre-trigger [12] online masking was also<br />
needed. Here the origin was a different one: the quality<br />
of the commercially produced boards was very poor, which<br />
caused long delays and, for the installed boards, bit errors<br />
in the cluster energy. Hence local hot towers were observed<br />
and had to be masked.<br />
VII The Data Acquisition<br />
While the hardware trigger is processing hits of a given<br />
bunch crossing, the detector front-end keeps the full data in<br />
a pipeline. When a trigger is issued the digitized event is<br />
stored in the Second Level Buffers (SLB) with a depth of<br />
270 events. The maximum event input rate is 50 kHz. The<br />
Second Level Trigger (a farm of 240 PCs) then accesses the<br />
event data via the Switch from the SLBs and performs a<br />
partial event reconstruction based on the tracks found by<br />
the hardware trigger [13]. Another process (called Event<br />
Controller) keeps track of the free buffers on the SLBs and<br />
the idle PCs.<br />
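The Event Controller's bookkeeping can be pictured as pairing buffered events with idle PCs; a purely illustrative sketch using the capacities quoted in the text, with all names hypothetical:

```python
from collections import deque

pending_events = deque(range(270))                  # SLB depth of 270 events
idle_pcs = deque("pc%03d" % i for i in range(240))  # 240-PC farm

def assign_event():
    """Pair the next buffered event with the next idle PC, if both exist."""
    if pending_events and idle_pcs:
        return pending_events.popleft(), idle_pcs.popleft()
    return None

print(assign_event())  # -> (0, 'pc000')
```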
The Event Controller, the SLBs and the Switch are realized<br />
with Sharc DSPs from Analog Devices. Each Sharc<br />
has six 40 MB/sec input ports, a 32-bit bus for Sharc-to-<br />
Sharc connections, a 40 MHz CPU and 4 Mbit of dual-ported<br />
memory. For Hera-B a custom-made VME board with<br />
6 DSPs was developed, of which about 200 are in use. The<br />
boards worked reliably.<br />
The challenge is to guarantee that no message from the<br />
Second Level Trigger to the SLBs and back is lost, i.e. to<br />
back-pressure messages and not to lose any interrupt on<br />
a Sharc. For speed considerations assembler was used for<br />
the Switch programming, while the language C was used<br />
elsewhere [14].<br />
The advantage of the unified hardware approach for the<br />
data acquisition is the easy connectivity in the entire experiment<br />
and the minimal amount of man power needed<br />
for development (6 man years) and maintenance.<br />
The maximum bandwidth of the Switch is 1 GB/sec and<br />
the limit on the message rate is 2.6 MHz. This is according to the<br />
specifications, and the same is true for the SLBs and the<br />
Event Controller.<br />
VIII Summary<br />
Hera-B was largely completed at the end of 1999. The<br />
electronics commissioning in 2000 revealed some surprises<br />
(like the feedback from the TDC-FLT connection to the<br />
ASD8). Prior to the installation and during the commissioning,<br />
oscillation problems of the ASD8 readout chip<br />
caused problems and delays. Perfect grounding could reduce<br />
the noise in most cases to an acceptable level. For<br />
the Muon pad system this could only be accomplished by<br />
grounding the backside of the pads, which led to a smaller hit<br />
efficiency. During the HERA shutdown most of these problems<br />
could be fixed or reduced, and for the next data taking<br />
period a large improvement is expected.<br />
The electronics of the hardware trigger was working reliably,<br />
especially the TFUs. The only exception is the optical<br />
data transmission from the tracking chambers to the<br />
trigger, which remains problematic for future running.<br />
The data acquisition and the front-end electronics<br />
(FADC boards for the HELIX digitization and the TDC<br />
boards for the ASD8 digitization) were largely unified in the<br />
experiment and worked successfully.<br />
The example of the trigger shows that the hardware design<br />
has to support debugging and online monitoring. High<br />
quality software tools are needed for commissioning, and<br />
the amount of man years equals the time used to build the<br />
hardware.<br />
Let me conclude with two personal recommendations.<br />
Experience shows that many small design mistakes<br />
can have a large impact on the quality of the experiment.<br />
Regular reviews by experts from other experiments<br />
would help to find them. Although this is a big effort, the<br />
knowledge about problems and solutions would spread fast<br />
within the HEP community. One example is the usage of<br />
open-collector outputs: they should be avoided and<br />
replaced by LVDS signals.<br />
References<br />
[1] E. Hartouni et al., Hera-B Design Report, DESY-PRC 95/01 (1995).<br />
[2] W. Fallot-Burghardt et al., HELIX128S-2 User Manual, HD-ASIC-33-0697, http://wwwasic.kip.uniheidelberg.de/~feuersta/projects/Helix/index.html.<br />
[3] C. Bauer et al., Status of the Hera-B Vertex Detector, Nucl. Instr. Meth. A447 (2000) 61.<br />
[4] W. Gradl, Nucl. Instr. Meth. A461 (2001) 80-81; T. Hott, Nucl. Instr. Meth. A408 (1998) 258-265.<br />
[5] M. Newcomer et al., IEEE Trans. Nucl. Sci. NS-40 (1990) 690.<br />
[6] K. Berkhan et al., Large System Experience with the ASD8 Chip in the Hera-B Experiment, Proceedings of the 5th Workshop on Electronics for LHC Experiments, Snowmass, 1999.<br />
[7] V. Eiges et al., The Muon Detector at the Hera-B Experiment, Nucl. Instr. Meth. A461 (2001) 104-106.<br />
[8] T. Fuljahn et al., Concept of the First Level Trigger for Hera-B, IEEE Trans. Nucl. Sci. NS-45 (1998) 1782-1786.<br />
[9] M. Bruinsma, D. Ressing et al., these proceedings.<br />
[10] J. Glaß et al., Terabit per Second Data Transfer for the Hera-B First Level Trigger, Proceedings of the IEEE Conference on Real Time Systems, pp. 38-42, Valencia, Spain, June 2001.<br />
[11] M. Böcker et al., The Muon Pretrigger System of the Hera-B Experiment, IEEE Trans. Nucl. Sci. NS-48 (2001) TNS-00118-2000.<br />
[12] C. Baldanza et al., The Hera-B Electron Pre-Trigger System, Nucl. Instr. Meth. A409 (1998) 643.<br />
[13] J. M. Hernandez et al., Hera-B Data Acquisition System, Proceedings of the IEEE NSS-MIC Conference, Lyon, France, November 2000.<br />
[14] Hera-B Collaboration, Digital Signal Processor Software for the Hera-B Second Level Trigger, Proceedings of the Conference on Computing in High Energy Physics, Chicago, August 1998, http://www.hep.net/chep98/PDF/109.pdf.<br />
Design of ladder EndCap electronics for the ALICE ITS SSD<br />
R. Kluit, P. Timmer, J. D. Schipper, V. Gromov<br />
NIKHEF, Kruislaan 409, 1098SJ Amsterdam, The Netherlands<br />
r.kluit@nikhef.nl<br />
A.P. de Haas<br />
NIKHEF, Princetonplein 5 (BBL), 3584 CC, Utrecht, The Netherlands<br />
A.P.dehaas@phys.uu.nl<br />
(For the ALICE collaboration)<br />
Abstract<br />
The design of the control electronics of the front-end of the<br />
ALICE SSD is described. This front-end will be built with the<br />
HAL25 (LEPSI) chip. The controls are placed in the ladder<br />
EndCap. The main EndCap functions are: power regulation<br />
and latch-up protection for the front-end, control of the local<br />
JTAG bus, distribution of incoming control signals for the<br />
front-end, and buffering of the outgoing analogue detector<br />
data. The system uses AC-coupled signal transfer for double-sided<br />
detector readout electronics.<br />
Due to radiation, power, and space requirements, two<br />
ASIC's are under development: one for analogue buffering<br />
and one combining all other functions.<br />
I. INTRODUCTION<br />
For the control and readout functions of the ALICE Silicon<br />
Strip Detector (SSD) of the Inner Tracker System<br />
(ITS) [5], additional electronics is needed between the data<br />
acquisition and the front-end modules. The SSD consists of 2<br />
layers of ladders, the inner with 34 and the outer with 38 ladders.<br />
The inner ladders contain 23 modules and the outer 26.<br />
The detector modules will be connected to the DAQ & Control<br />
system via EndCap units, mounted at each end of a ladder<br />
and connected through Kapton cables (Figure 1). One EndCap thus<br />
controls half a ladder, and 144 EndCap's are needed in total. The<br />
available space is ~70x70x45 mm for one unit.<br />
The limited available space requires a dense design, which<br />
immediately adds the requirement for low power. In addition,<br />
the volume of the cabling should be kept as low as possible.<br />
Therefore serializing and multiplexing of data and the use of<br />
low-mass cables are necessary. The latter requires regeneration<br />
of signals in order to guarantee a proper quality at the front-end.<br />
The SSD is based on double-sided detectors with 768<br />
strips on each side. The detectors will be read out by the<br />
HAL25 (Hardened ALice128 in 0.25µ CMOS) front-end chip<br />
developed by LEPSI/IReS Strasbourg. This chip contains 128<br />
analogue channels with preamp and shaper. An analogue multiplexer<br />
provides a serial readout. All bias voltages and currents<br />
are programmable via internal DAC's, and the chip has<br />
binary controls for readout, test and status check functions.<br />
These functions can be addressed via a serial bus that uses the<br />
JTAG protocol.<br />
Figure 1 Two modules with cable on a ladder.<br />
Because double-sided detectors are used, the readout electronics<br />
on the two sides operate at different potentials (detector<br />
bias max. 100V). To avoid ADC and control modules operating<br />
at these bias potentials, all signals will be AC-coupled to<br />
the corresponding voltage level. In addition, the analogue<br />
readout data will be AC-coupled to a multiplexer/buffer,<br />
which is able to drive the differential signal to the ADC modules<br />
(over 25m @ 10MHz). The front-end chips of the detector<br />
modules are read out successively; the P- and N-side of one<br />
detector module occupy one ADC channel.<br />
The required low voltage power for the front-end chips<br />
(2.5V) of the P- and N-side is regulated inside the EndCap.<br />
This circuit not only provides the latch-up protection for the<br />
front-end but also for the control electronics and buffers in the<br />
EndCap itself.<br />
Since the front-end electronics is controlled via the JTAG<br />
bus, the bus is also used to control and monitor the EndCap<br />
functions. Errors like latch-up can be detected and appropriate<br />
action can be taken. Defective front-end chips can be disabled<br />
and put in "by-pass" mode. This information is<br />
available to the DAQ system. During the assembly and test<br />
phase of the EndCap, the JTAG bus will be used to test interconnections<br />
inside the module during production.<br />
The EndCap will communicate with the Front-End Read<br />
Out Module (FEROM), placed at a 25m distance from the<br />
detector. LVDS levels are used to reduce interference between<br />
the signals and the environment. Only inside the EndCap will the<br />
JTAG signals be carried as single-ended CMOS<br />
levels (2.5V).<br />
A. Radiation Environment<br />
In the initial phase of the design, no clear numbers were<br />
available for the expected radiation levels inside the SSD area<br />
of the ITS. The front-end would be designed in 1.2µm CMOS<br />
and the EndCap would consist of commercial components.<br />
Radiation studies showed that Single Event Latch-up<br />
(SEL) would occur once every 5 minutes in these SSD front-end<br />
chips [1]. For this reason it was decided that a latch-up<br />
protection circuit must be included in the power supply. The<br />
supply circuit itself must also be insensitive to SEL. Due to<br />
the uncertainty in the expected dose levels and the expected<br />
SEL frequency at that time, the step was made to switch to<br />
0.25µ CMOS with radiation-tolerant design techniques,<br />
to reduce the susceptibility to latch-up and total-dose<br />
damage. Now all SSD electronics inside the detector<br />
volume is designed in a 0.25µ CMOS process.<br />
Radiation calculations [2] predict max. 100Gy (10krad) total<br />
dose and 4*10^11 neutrons/cm^2 (1MeV equivalent) over 10<br />
years. With the use of a 0.25µm CMOS process, the design<br />
can be made with an acceptable susceptibility to radiation.<br />
II. FUNCTIONALITY<br />
The overall functionality is that the EndCap is the interface<br />
for and distributor of the control and data signals between<br />
the detector modules and the Data acquisition 25 m<br />
further on. We can classify four main functions:<br />
• Power control and protection<br />
• Signal buffering and coupling<br />
• Readout Control<br />
• JTAG detector control and monitoring<br />
A. Power Control<br />
By nature, the front-end chips need burnout protection against<br />
SEL. This means that if the supply current reaches a specified<br />
level, the power of the corresponding hybrid must be switched<br />
off. Of course, the other electronics in the EndCap needs the<br />
same protection. For that reason, the ASIC's inside the End-<br />
Cap will receive their power from a circuit identical to that of the<br />
hybrids. The supply itself must be insensitive to SEL. The<br />
output voltage is programmable and the supply can be<br />
switched on/off via the JTAG bus. A local voltage reference<br />
defines the proper value of the output voltage. The status as<br />
well as the output voltage can be monitored via the control<br />
bus. If a latch-up or other error occurs, an OR-ed error signal<br />
of all supplies in the EndCap will notify the DCS immediately.<br />
The DCS can read out the status and find out which<br />
supply (hybrid or EndCap control) is switched off. The power<br />
supplies provide a power-on reset for the front-end chips as<br />
well as for the local controllers and coupler circuits.<br />
The EndCap has separate power for the P-side, N-side, and<br />
interface electronics at ground potential.<br />
B. Buffering and coupling<br />
All signals cross the detector bias potentials once<br />
(Figure 4). This means that after the signals are at the right<br />
bias potential, the local bus in the EndCap and the drivers<br />
work at this potential. All connections to the outside of the<br />
detector volume are at ground potential. Since all signals<br />
cover a distance of ~25m, they have to be regenerated before<br />
being sent to the front-end chips. After these receivers, the<br />
signals are coupled (AC) to the corresponding bias potential.<br />
In case of a SEL on a hybrid that must be reset, the outputs<br />
of the signal drivers to the hybrids must go into a high-impedance<br />
state so they cannot conduct any current into the front-end.<br />
The output signals of the EndCap are 1 error signal and 14<br />
analogue outputs. The analogue output multiplexer (Figure 3)<br />
makes it possible to read out a P- and an N-side hybrid<br />
successively. This results in one analogue signal per module.<br />
In order to create a signal with one polarity,<br />
one of the two signals is inverted (Figure 2).<br />
Figure 2 Simplified Analogue Output<br />
C. Readout control<br />
The serial readout of the front-end chips is based on token<br />
passing. Once the token has entered the chip, each of the 128 channels<br />
puts its sampled voltage on the output on successive 10MHz<br />
clock cycles. The front-end chip needs some additional clock<br />
cycles before and after the readout for token handling.<br />
Figure 3 Token Readout principle<br />
The readout is organised in such a way that clock and token are<br />
sent to the detector, and all modules are read out in<br />
parallel. One detector module consists of one P- and one N-side<br />
detector hybrid with 6 front-end chips (HAL25). One<br />
module connects to one ADC input, so the two signals are<br />
multiplexed onto one line at ground potential (Figure 3). This<br />
means that halfway through the readout a multiplexer has to switch,<br />
and that the two hybrids receive their tokens successively.<br />
The switching and token handling are done in the End-<br />
Cap, via a programmable token delay. It is programmable<br />
because a broken front-end chip requires a decrease in the<br />
number of clock cycles for the readout. A "bypass" capability<br />
will maintain the token passing for the readout. Once the last<br />
chip in the chain is read out, the token is passed to the controller<br />
again, which checks whether it arrived at the correct time. If not, an<br />
error signal will be generated, indicating a problem in the<br />
readout sequence.<br />
In case of a "Fast Clear" during readout, all delays will be<br />
reset and the event can be cleared from all buffers. Within a<br />
few clock cycles, the system is ready for a new readout cycle.<br />
D. JTAG control and monitoring<br />
The control of the front-end chips goes via the JTAG bus.<br />
Therefore, it is natural that the bus should also be used to control<br />
and monitor the EndCap. An important EndCap function<br />
is that the power is switched off in case of a SEL in a<br />
hybrid. The controller then maintains an uninterrupted JTAG<br />
chain, and restores the original chain after the hybrid<br />
power is switched on again.<br />
The EndCap JTAG logic has registers for the power supply<br />
voltage DAC’s, readout control, the ADC to monitor temperature<br />
and detector bias current, and for Boundary Scan<br />
interconnection tests.<br />
III. ENDCAP IMPLEMENTATION<br />
Since the Detector Control and Data Acquisition electronics<br />
are at ground potential, the interface connections work on<br />
the same level. Therefore, the signals should cross the bias<br />
levels inside the EndCap before they are connected to the<br />
corresponding detector side.<br />
Figure 4 EndCap architecture (1 InterfaceCard at ground potential, 7 SupplyCard's with P- and N-side buffers and controls behind 14x AC coupling, connecting the FEROM to 28 hybrids)<br />
To minimise space and power, AC coupling has been chosen<br />
to cross the bias potentials for differential and single-ended<br />
digital signals. The analogue output that is clocked out<br />
at 10MHz is also transferred with differential AC coupling.<br />
This concept has been proven in the HERMES Lambda-wheels<br />
(HERA, DESY).<br />
As shown in Figure 4, the EndCap is built out of 8 PCB's:<br />
1 InterfaceCard and 7 SupplyCard's. These PCB's will be<br />
placed on a backplane using miniature connectors. Kapton<br />
cables (max. 70cm) connect each SupplyCard to 4 hybrids on<br />
the ladders. The FEROM is connected to a patch panel using<br />
"standard" cables and from there to the InterfaceCard with a<br />
Kapton cable.<br />
All functions of the EndCap electronics are being integrated<br />
into two ASIC's: the control chip "ALice Control And<br />
POwer NExus", named ALCAPONE, and the analogue buffer<br />
"ALice Analogue BUFfer", named ALABUF. The<br />
ALCAPONE chip integrates the control and power functions<br />
plus the LVDS and CMOS buffers that are used for the AC<br />
coupling of the digital signals (Figure 6). The multiplexer and<br />
analogue buffers that send the serial data to the ADC are<br />
placed in the second chip. The InterfaceCard houses 3<br />
ALCAPONE chips (Figure 4): one at ground potential for the<br />
interface and two at the detector bias levels after the AC coupling.<br />
By using the same IC process for these ASIC's as for the<br />
front-end chip, the signal levels are fully compatible.<br />
A. The control chip, ALCAPONE<br />
This ASIC (Figure 5) has the following main features:<br />
• LVDS & CMOS (AC) buffering<br />
• JTAG control of readout, monitor and power<br />
functions<br />
• Power supply and reference.<br />
Figure 5 Block diagram of the ALCAPONE ASIC (CMOS and LVDS receivers & drivers, JTAG logic and control registers, DAC, an ADC monitoring temperature and detector current, reference & shunt regulator, power supply, token check and error in/out signals)<br />
For the receivers, "standard" cells are used. The drivers<br />
have an additional tri-state output capability. The positive<br />
feedback (Figure 6) at the receiver output creates a latch function<br />
after the AC coupling. An additional reset circuit for the<br />
latch ensures that the correct signal polarity can be established<br />
after power-on.<br />
Figure 6 Digital AC coupling with receivers (single-ended and differential variants; the 15pF coupling capacitors, not integrated in the ASIC's, feed receivers with 50k positive feedback)<br />
The design of the JTAG interface follows the IEEE<br />
standard. In addition, there is a parity check on all register bits<br />
to detect a Single Event Upset. Control and status registers<br />
exist to check for power and readout errors. Errors can<br />
be masked in case of real defects.<br />
The chip is designed in such a way that it can be used as<br />
EndCap interface, AC coupler, or hybrid driver. Hence, the<br />
ALCAPONE chips appear with 3 different functions in the<br />
EndCap, on 2 different types of PCB's.<br />
B. Power supply<br />
The power supply has two main parts: a regulator with<br />
voltage reference, and the supply circuit. Since the incoming<br />
power voltage is about 3V, it needs to be regulated down to<br />
2.5V. The Bandgap and supply-circuit voltages may not exceed<br />
this value. This is realised with a shunt regulator in parallel<br />
with the Bandgap reference.<br />
Figure 7 Shunt regulator diagram (1.25V Bandgap reference and shunt regulator generating the 2.5V Vdd for the supply circuit from a 2.7-5V input)<br />
Now that the primary power voltage is generated, the supply<br />
circuit (Figure 8) can be turned on. It uses a start-up circuit<br />
with a timer that switches off the current limit<br />
during the first 250µs (charge-up of the load capacitance). It also<br />
generates a power-on reset signal to clear the error latch.<br />
Figure 8 Block diagram of the power supply (start-up circuit with timer, error amplifier, overcurrent circuit with timer, and external transistor & R-network regulating 2.7-5V external power down to the 2.5V power output)<br />
An external sense resistor (75mΩ for 320mA) defines the<br />
current limit. In case of overcurrent, the overcurrent timer<br />
delays the signal for 25µs before the output is switched<br />
off. Other external components (7 resistors, 2 transistors) are used<br />
for current and voltage feedback and power regulation. The<br />
transistors are necessary because the externally delivered<br />
voltage (>2.7V) is higher than the maximum allowed voltage<br />
for the IC process used (2.5V). The minimum dropout voltage<br />
is 200mV and the maximum output current with the chosen<br />
components is 2A. The current used by the circuit itself is<br />
600µA.<br />
Figure 9 Layout of the supply circuit (280 x 70 µm)<br />
C. The analogue buffer chip, ALABUF<br />
The analogue buffer must be able to drive the signal over a<br />
25m distance with 10-bit accuracy in the required range. The<br />
range of the front-end chip is 0-13 MIP, but the highest accuracy<br />
is required below a 5 MIP detector signal. The<br />
linearity below 5 MIP must be better than 1%. The settling time at<br />
the output should be max. 20ns, and the additional RMS noise<br />
below 1mV.<br />
Because a SupplyCard contains the electronics for two detector<br />
modules, the ALABUF chip (Figure 10) is equipped<br />
with two buffers with analogue multiplexers.<br />
The analogue multiplexer (Figure 11) connects the hybrid<br />
outputs of one module to the analogue buffer. Between two<br />
readout cycles, the outputs are connected to a reference<br />
voltage to avoid any voltage drift during this quiet state.<br />
Figure 10 ALABUF ASIC<br />
Figure 11 Analogue multiplexer diagram (P- and N-hybrid data AC-coupled via external 100nF capacitors to the multiplexer switches, with Vref applied between readouts)<br />
The OPAMP used in the buffer (Figure 12) is a fully<br />
complementary self-biased CMOS differential amplifier [4].<br />
Additional RC networks have been added for stability of the<br />
complete circuit, to ensure that the expected capacitive load<br />
does not influence the behaviour.<br />
Figure 12 Simplified Analogue buffer schematic (differential input amplifier with common-mode feedback, and output stages driving the cable)<br />
The buffer must amplify the signal from the front-end chip<br />
to the maximum output range in order to minimise any "pick-up"<br />
during transfer over the cable. The differential input<br />
amplifier has an extra feedback OPAMP. This feedback circuit<br />
reduces the common-mode error and ensures that the output<br />
voltage "zero level" has the value of Vref (half the supply<br />
voltage).<br />
In order to drive a 100Ω cable, the output must be able to<br />
drive 20mA at an output voltage of 2V. Therefore, the driver<br />
circuit of Figure 13 is used. This circuit has a gain of 1, so<br />
the input amplifier defines the gain of the complete buffer<br />
(gain = 2.6). The buffers are provided with a disable function<br />
to reduce the power consumption between readouts. Two of<br />
these circuits create the differential output.<br />
Figure 13 Buffer output stage<br />
Figure 14 Analogue buffer Layout (600 x 400µm; analogue switches, input amplifier and two output stages)<br />
D. InterfaceCard<br />
On the InterfaceCard, the AC coupling and the control<br />
power supplies are located (Figure 4). The EndCap receiver’s<br />
–in one ALCAPONE- are connected (AC) to two buffer<br />
ALCAPONE chips. These two chips drive bus lines to the<br />
SupplyCard’s. The shunt regulators are used only at this Card,<br />
to generate the local power for the control electronics.<br />
E. SupplyCard<br />
The hybrid power supplies are placed on the SupplyCard.<br />
Each card contains 2 P-side and 2 N-side supplies (4x<br />
ALCAPONE), plus one ALABUF chip at ground potential to<br />
drive the two analogue module outputs to the ADC. These<br />
ALCAPONE chips are powered from the InterfaceCard.<br />
IV. MECHANICS<br />
Because of the space limitation, the temperature control of<br />
the EndCap needs extra attention. Therefore, the PCB's will<br />
be made of an aluminium carrier with Kapton multilayer<br />
PCB's on both sides. At the short sides, the cards have the<br />
hybrid cable connectors, and at one long side the backplane<br />
connector. The other long side is used for a heat bridge to the<br />
cooling tubes that also cool the detector modules.<br />
The power budget for one EndCap is 10W, and simulations<br />
and measurements indicate that this is feasible.<br />
V. SIMULATION AND MEASUREMENTS<br />
Irradiation of the transistors of the power supply with 10^13<br />
neutrons/cm^2 (1MeV equivalent) shows a degradation in β of<br />
60%. The supply can still work within specifications after this<br />
flux, 25 times higher than expected in 10 years of operation.<br />
First measurements of the buffer show a difference in gain:<br />
expected 2.6, measured 2.3. The circuit is not stable due to<br />
larger capacitances of the NWELL resistors than expected.<br />
Figure 15 Simulation of the buffer<br />
The simulation (Figure 16) of the power supply shows the<br />
start-up sequence of the circuit with an overcurrent detection.<br />
After the start-up timer has finished, the overcurrent is<br />
detected. After the overcurrent timer has finished, the output<br />
power is switched off. Measurements on the prototype show<br />
that the functionality is correct. The specifications still need<br />
to be verified.<br />
Figure 16 Simulation of the power supply<br />
VI. STATUS AND PLANS<br />
At this time (Sept. 2001) the EndCap is in the design and<br />
prototype phase. Two test IC's have been submitted and have<br />
just become available for tests. The produced circuits are the<br />
analogue buffer + AC coupling, the power supply, and the<br />
power supply shunt regulator that delivers the power for the<br />
supply circuit.<br />
Later this year a prototype of the ALCAPONE is planned,<br />
together with the final prototype of the ALABUF chip. The<br />
circuits will be irradiated to test their tolerance. The first<br />
complete EndCap is planned for Q2 2003.<br />
Mechanical prototyping has started to investigate the cooling,<br />
cabling, and PCB handling.<br />
VII. CONCLUSIONS<br />
Full SEL protection of all EndCap electronics is possible.<br />
Simulations of the individual circuits showed the feasibility of<br />
using the 0.25µ CMOS process for the electronic components<br />
of the EndCap.<br />
VIII. ACKNOWLEDGEMENTS<br />
Acknowledgements go to the ALICE Pixel chip design<br />
collaboration for the use of the design of the voltage DAC<br />
(designed by D. San Segundo Bello, NIKHEF, Amsterdam),<br />
and to the CERN microelectronics design group for the use of<br />
the Bandgap voltage reference cell (Paulo Moreira) and the<br />
support for prototype production. Most "components" of the<br />
ASIC's come from the library delivered by RAL [3].<br />
IX. REFERENCES<br />
1. "Measurement of Single Event Latch-up Sensitivity of the<br />
Alice128C chip", Wojciech Dulinski, LEPSI, 20/10/1998.<br />
2. "Status of radiation calculations for mis-injected beam into<br />
LHC", Blahoslav Pastircak, ALICE presentation, 31/8/2000.<br />
3. Design Kit for 0.25µ CMOS technology with radiation<br />
tolerant layout; support by RAL.<br />
4. "Two Novel Fully Complementary Self-biased CMOS<br />
Differential Amplifiers", Mel Bazes, IEEE J. of Solid-State<br />
Circuits, Vol. 26, No. 2, pp. 165-168 (1991).<br />
5. ALICE TDR 4, CERN/LHCC 99-12, 18 June 1999.<br />
È�Ö�ÓÖÑ�Ò � Ó� Ø�� ���ØÐ� Ê���ÓÙØ ���Ô �ÓÖ ÄÀ��<br />
��×ØÖ� Ø<br />
Æ��Ð× Ú�Ò ����Ð ÂÓ Ú�Ò ��Ò �Ö�Ò� À�Ò× Î�Ö�ÓÓ���Ò<br />
�Ö�� ÍÒ�Ú�Ö×�ØÝ Ó� �Ñ×Ø�Ö��Ñ ÆÁÃÀ�� �Ñ×Ø�Ö��Ñ<br />
��Ò��Ð ��ÙÑ��×Ø�Ö £ Ï�ÖÒ�Ö ÀÓ�Ñ�ÒÒ Ã�ÖÐ Ì�××Ó ÃÒ�ÓÔ� ËÚ�Ò Ä�Ó �Ò�Ö<br />
Å� ���Ð Ë �Ñ�ÐÐ�Ò� ����Ö Ë�Ü�Ù�Ö Ý ÍÐÖ� �ÌÖÙÒ�<br />
Å�Ü ÈÐ�Ò � ÁÒ×Ø�ØÙØ� �ÓÖ ÆÙ Ð��Ö È�Ý×� × À����Ð��Ö�<br />
Å�ÖØ�Ò ��Ù�Ö×Ø� � Ê���Ð� Þ<br />
Ì��× Ô�Ô�Ö ��Ø��Ð× Ø�� ��Ú�ÐÓÔÑ�ÒØ ×Ø�Ô× Ó� Ø�� �<br />
��ÒÒ�Ð Ô�Ô�Ð�Ò�� Ö���ÓÙØ ��Ô ���ØÐ� Û�� � �× ��<br />
�Ò� ��×��Ò�� �ÓÖ Ø�� ×�Ð� ÓÒ Ú�ÖØ�Ü ��Ø� ØÓÖ Ø�� �ÒÒ�Ö<br />
ØÖ� ��Ö Ø�� Ô�Ð� ÙÔ Ú�ØÓ ØÖ����Ö �Ò� Ø�� ÊÁ�À ��Ø�<br />
ØÓÖ× Ó� ÄÀ��<br />
Ë� Ø�ÓÒ ÁÁ ×ÙÑÑ�Ö�Þ�× Ø�� ���ØÐ� ��Ô �Ö ��Ø�<br />
ØÙÖ� Ë� Ø�ÓÒ ÁÁÁ ×�ÓÛ× Ø�� ��Ý Ñ��×ÙÖ�Ñ�ÒØ× ÓÒ Ø��<br />
¬Ö×Ø ��Ô Ú�Ö×�ÓÒ ���ØÐ� Û�� � �ÖÓÚ� Ø�� ��×��Ò<br />
��Ò��× �ÓÖ Ø�� ���ØÐ� ��Ö×Ø Ô�Ö�ÓÖÑ�Ò � ��Ø� Ó�<br />
Ø�� Ò�Û ��Ô �× ÔÖ�×�ÒØ�� �Ò ×� Ø�ÓÒ ÁÎ Û��Ð� �Ò ÓÙØ<br />
ÐÓÓ� ÓÒ Ø�� �ÙØÙÖ� Ø�×Ø �Ò� ��Ú�ÐÓÔÑ�ÒØ Ó� Ø�� ��Ô<br />
�Ö� ��Ú�Ò �Ò ×� Ø�ÓÒ Î<br />
Á ÁÒØÖÓ�Ù Ø�ÓÒ<br />
Ë�Ò � Ø�� ����ÒÒ�Ò� Ó� Ø�� ��Ú�ÐÓÔÑ�ÒØ Ó�Ø�� ���ØÐ�<br />
��Ô�ÒÐ�Ø� ���Ø�� ��Ô ��Ñ�ÐÝ ��× �ÖÓÛÒ ØÓ Ñ�Ñ<br />
��Ö× Ó� ÓÑÔÐ�Ø� Ö���ÓÙØ ��Ô× ���ØÐ� �Ò� ���<br />
ØÐ� �Ò� � ¢ ÑÑ ��Ô× �ÑÔÐ�Ñ�ÒØ�Ò� Ø�×Ø<br />
×ØÖÙ ØÙÖ�× �Ò� ÔÖÓØÓØÝÔ� ÓÑÔÓÒ�ÒØ×<br />
�Ù� ØÓ � Ð�ÝÓÙØ �ÖÖÓÖ �Ò Ø�� ÓÒØÖÓÐ �Ö Ù�ØÖÝ Ø��<br />
¬Ö×Ø ÔÖÓØÓØÝÔ� Ó� � ÓÑÔÐ�Ø� Ö���ÓÙØ ��Ô ���ØÐ�<br />
�× ÓÒÐÝ �ÙÒ Ø�ÓÒ�Ð Û�Ø� � Ô�Ø � Ì�� ×Ù �××ÓÖ Ú�Ö×�ÓÒ<br />
���ØÐ� ¬Ü�× Ø��× �Ù� �ÑÓÒ� ÓØ��Ö×<br />
ÁÁ ���Ô �Ö ��Ø� ØÙÖ�<br />
Ì�� ���ØÐ� � ℄� ℄ �Ò �� ÓÔ�Ö�Ø�� �× �Ò �Ò�ÐÓ�Ù� ÓÖ<br />
�ÐØ�ÖÒ�Ø�Ú�ÐÝ �× � ��Ò�ÖÝ Ô�Ô�Ð�Ò�� Ö���ÓÙØ ��Ô ÁØ �Ñ<br />
ÔÐ�Ñ�ÒØ× Ø�� ��×� Ê� �ÖÓÒØ�Ò� �Ð� ØÖÓÒ� × �Ö ��Ø�<br />
£ �Ñ��Ð� ��ÙÑ��× �×� ÙÒ� �����Ð��Ö� ��<br />
ÝÒÓÛ �Ø� ���ÐÓ� Ë�Ñ� ÓÒ�Ù ØÓÖ× �Ñ�À Ã�Ö ����Ñ Æ���ÖÒ<br />
��ÖÑ�ÒÝ<br />
ÞÒÓÛ �Ø� �Ù��Ø×Ù Å��ÖÓ�Ð��ØÖÓÒ�� �Ñ�À �Ö���� �<br />
�Ù �× �Ð�� ��ÖÑ�ÒÝ<br />
�Ò �×� ÑÙÐØ��ÒÓ�� Ô�ÓØÓÑÙÐØ�ÔÐ��Ö ØÙ��× �Ö� Ù×��<br />
ÍÒ�Ú�Ö×�ØÝ Ó� À����Ð��Ö�<br />
Æ�Ú�ÐÐ� À�ÖÒ�Û Æ���Ð ËÑ�Ð�<br />
ÍÒ�Ú�Ö×�ØÝ Ó� ÇÜ�ÓÖ�<br />
ture [3]. Figure 1 shows a schematic block diagram of the chip.

The chip integrates 128 channels. Each channel consists of a low noise charge sensitive preamplifier, an active CR-RC pulse shaper and a buffer. The risetime of the shaped pulse is 25 ns; the spill-over left 25 ns after the peak is below 30%. The chip provides two different readout paths. For the prompt binary readout, the frontend's output couples to a comparator, which features a configurable polarity (to detect input signals of both polarities) and an individual threshold level. Four adjacent comparator channels are logically OR-ed, latched, multiplexed by a factor of two and routed off the chip via low voltage differential signalling (LVDS) ports at 80 MHz. The pipelined readout path can operate either in a binary mode, by using the comparator outputs, or in an analogue mode, by sampling the frontend's buffer output with the LHC bunch crossing frequency of 40 MHz. The sampled amplitudes are stored in an analogue memory pipeline with a programmable latency of max. 160 sampling intervals and an integrated trigger buffer (fifo) of 16 stages. The signal stored in the pipeline is transferred to the multiplexer via a resettable charge sensitive amplifier. Within a readout time of 900 ns, current drivers bring the serialized data off chip. The output of a dummy channel is subtracted from the analogue data to compensate common mode effects. All amplifier stages are biased by forced currents. On-chip digital to analogue converters (DACs) generate the bias currents and voltages; for test and calibration purposes, a charge injector with adjustable pulse height is implemented on each channel. The bias settings and various other parameters, like the trigger latency, can be controlled via a standard I2C interface [4]. All digital control and data signals except those for the I2C ports are routed via LVDS ports.

The chip is fabricated in 0.25 µm standard CMOS technology and has a die size of 6.1 × 5.5 mm².
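As a rough illustration of the risetime and spill-over figures above, the sketch below models an idealised CR-RC shaper with a single time constant (an assumption; the actual Beetle shaper is tunable via its bias settings). It shows why the time constants need careful tuning: an ideal CR-RC pulse peaking at 25 ns still retains about 74% of its peak amplitude 25 ns after the peak, far above a spill-over target of a few tens of percent.

```python
import math

def crrc(t, tau):
    """Ideal CR-RC response, normalised so the peak amplitude is 1 at t = tau."""
    if t < 0:
        return 0.0
    return (t / tau) * math.exp(1.0 - t / tau)

def spill_over(tau, dt):
    """Fraction of the peak amplitude remaining dt seconds after the peak."""
    return crrc(tau + dt, tau)

tau = 25e-9                        # 25 ns peaking time
peak = crrc(tau, tau)              # 1.0 by construction
residue = spill_over(tau, 25e-9)   # exactly 2/e, roughly 0.74
```

For the ideal CR-RC the residue 25 ns after the peak is exactly 2/e; pushing it well below that requires shorter effective time constants or a higher-order shaper.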
[Figure 1 block diagram labels: Test Input, Analog In, Itp, Testpulse Generator; preamplifier (Vfp, Ipre), shaper (Vfs, Isha), buffer (Ibuf); Frontend Bias Generator; Testchannel (FETestOut, PipeampTestOut); comparator (Icomp, Ithmain, Ithdelta, Polarity, CompClk, CompOut/notCompOut, D Q); Pipeline Control (Write, Read), pipeline (1 of 160+16+10 cells); I2C Interface; dummy channel, pipeline readout amplifier (Vd, Vdcl, Reset, Ipipe, Ivoltbuf); 1 of 128 channels, 1 of 16 channels; Or/Mux, LVDS @ 80 MHz; multiplexer 4 × (32 to 1), current buffer (Isf, Icurrbuf), Out[3:0]/notOut[3:0]; Backend Bias Generator]
Figure 1: Schematic block diagram of the Beetle readout chip.

The layout with the corresponding floor plan is depicted in fig. 2.
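The sampling, latency and trigger buffering described above can be sketched as a ring buffer. The depth of 160 cells, the 16-stage trigger FIFO and the 40 MHz sampling follow the block diagram of fig. 1; the class below is only a behavioural toy, not the on-chip logic.

```python
class AnaloguePipeline:
    """Toy model of the Beetle analogue memory: a ring buffer written once
    per sampling interval, with a programmable trigger latency and a FIFO
    buffering the amplitudes selected by triggers."""

    def __init__(self, depth=160, latency=100, fifo_depth=16):
        assert 0 < latency < depth
        self.depth = depth
        self.latency = latency
        self.fifo_depth = fifo_depth
        self.cells = [0.0] * depth
        self.fifo = []     # models the 16-stage trigger buffer
        self.t = 0         # sampling intervals elapsed (25 ns each at 40 MHz)

    def sample(self, amplitude):
        """Store one amplitude, overwriting the cell written depth cycles ago."""
        self.cells[self.t % self.depth] = amplitude
        self.t += 1

    def trigger(self):
        """Queue the amplitude sampled `latency` intervals ago."""
        amplitude = self.cells[(self.t - self.latency) % self.depth]
        if len(self.fifo) < self.fifo_depth:
            self.fifo.append(amplitude)
        return amplitude
```

After 150 samples with a latency of 100, a trigger returns the amplitude stored 100 intervals earlier.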
The chip is designed to withstand a total dose in excess of 10 Mrad by taking the following design measures [6]: enclosed gate structures for NMOS transistors suppress an increase in leakage current under irradiation, and a consistent use of guard rings minimizes the rate of single event effects [5]. Forced bias currents are used in all analogue stages instead of fixed node voltages.

III. The Beetle1.0 Chip

Beetle1.0 is the first prototype of a complete readout chip. It was submitted in April 2000. This chip version has to be patched with a focused ion beam to be functional: a layout error in a tristate buffer of the control circuitry prevents programming the chip via the I2C bus; the chip's internal data bus is permanently forced to a fixed logic level. Due to a bug in the extraction software, this error was not discovered by the available checking tools. A focused ion beam patch has been applied to a single die. The patch, however, enables only a write access to the chip; the chip registers can not be read back. Fig. 3 shows the output signal of the patched die using the analogue pipelined readout path. All 128 channels are multiplexed on one port. The figure is an overlay of different events, with input
signals corresponding to different numbers of MIPs, applied to a single channel and to a group of adjacent channels of the chip. In the figure the different input levels are clearly visible on the group of channels. The baseline shift is due to a voltage drop on the Vdcl bias line of the pipeline readout amplifier (cf. fig. 1). The header is correctly encoded but has wrong voltage levels, due to a bug in the multiplexer.
Investigations on the BeetlePA test chip, which implements the pipeline readout amplifier with access to all internal nodes, revealed a bug in the layout of the transmission gate used to reset the amplifier. The same error is present in the switches of the multiplexer: a shorted transistor, which is used as a dummy device in the transmission gate, is incorrectly wired. This causes the injection of charge into the amplifier's input, which results in shifting its operating point. This error was not detected by the layout versus schematic (LVS) check, because gateless shorted transistors are not extracted as physical devices.
IV. The Beetle1.1 Chip

The Beetle1.1 chip version was submitted in March 2001, with the intention to fix all known bugs and to avoid the implementation of new features unless they are critical for the complete design.

A Minimum Ionising Particle (MIP) corresponds to about 22 000 electrons in 300 µm of silicon.
[Figure 2 floor plan labels: Analogue Input Pads, Protection Diodes, Testpulse Injector, Probe Pads, Analogue Frontend, Frontend Bias Generator, Comparator, LVDS Comparator Output Pads, Analogue Pipeline, Pipeline/Readout Control Logic, Pipeline Readout Amplifier, Multiplexer, Backend Bias Generator, I2C Interface]
Figure 2: Layout of the Beetle1.1 chip version and its corresponding floor plan. The die size is 6.1 × 5.5 mm².
A. Design Changes

The following design changes have been applied:

• The layout of the tristate buffers in the control circuit has been modified.
• A source follower has been added to each pipeamp channel to buffer the Vdcl bias node.
• The layout of the transmission gates used in the pipeamp and multiplexer has been modified.
• A wiring error in the multiplexer has been resolved.

In addition to the above mentioned bug fixes, some minor changes have been made: the digital delay element for the I2C SDA line has been replaced by an analogue one, the layout of the pipeline has been modified to reduce crosstalk, and the test channel has been extended down to the pipeamp's output.

B. First Measurement Results

Pipelined Readout: The output signal of the complete analogue chain is shown in fig. 4. All 128 channels are multiplexed on one port. Input signals corresponding to several MIPs are applied to a single channel and to two groups of adjacent channels of the chip. The first eight bits of the data stream
encode the pipeline column number. The triggered pipeline column number is clearly visible in the data header. The voltage levels of the header correspond to fixed amplitudes of a few MIPs, and the slight variation of the baseline is not yet understood.

Fig. 5 depicts the binary pipelined readout path, where the comparator outputs are sampled into the pipeline. Again, all 128 channels are multiplexed on one port. As in the analogue pipelined readout path, the header is encoded with MIP-sized voltage levels. The two logic levels of the binary channels are represented by two fixed MIP-equivalent amplitudes.
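A toy encoder/decoder makes the header mechanism concrete. The exact bit layout is an assumption here (an eight bit pipeline column number, MSB first, followed by the 128 multiplexed channel values); it only illustrates how the triggered column is recovered from the serial stream.

```python
def encode_frame(pcn, samples):
    """Build a frame: 8 header bits holding the pipeline column number
    (MSB first), then the 128 multiplexed channel values."""
    assert 0 <= pcn < 256 and len(samples) == 128
    header = [(pcn >> i) & 1 for i in range(7, -1, -1)]
    return header + list(samples)

def decode_frame(frame):
    """Recover the pipeline column number and the channel values."""
    pcn = 0
    for bit in frame[:8]:
        pcn = (pcn << 1) | bit
    return pcn, frame[8:]
```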
Frontend Pulse Shape: Information about the frontend's pulse shape can be obtained either from the test channel output (FETestOut, cf. fig. 1) or from a pulse shape scan. Here the frontend's output is read out via the pipelined path while the preamplifier input signal is shifted w.r.t. the sampling clock. Fig. 6 shows the shaped pulse measured at the output node of the test channel; the load capacitance at the preamplifier's input has been varied in four steps (3 pF, 13 pF, 25 pF and 32 pF). Fig. 7 depicts the result of a pulse shape scan with a capacitive input load of 3 pF. For the chosen bias settings, the peaking time in both plots exceeds 25 ns, which is in disagreement with simulation results. New frontend developments (cf. sect. V) will overcome this problem.
[Figure 3 traces: data header, DataValid, AnalogOut[0], 128 channels]
Figure 3: Analogue output signal of a Beetle1.0 chip. All 128 channels are multiplexed on one port. Input signals corresponding to different numbers of MIPs have been applied to a single channel and to a group of adjacent channels of the chip. The readout speed is set to a reduced clock frequency for this measurement.
V. Future Plans

It is planned to irradiate the Beetle1.1 chips in October at the X-ray irradiation facility of the CERN microelectronics group, up to a total dose of 10 Mrad.

The submission of the next version of the Beetle chip is intended in spring 2002. This new chip version will implement the following:

• A modified frontend with a faster shaping and a higher tolerable maximum input charge rate. Two test chips have been submitted in May, implementing in total a number of different frontends; a detailed description can be found in [7]. After intensive testing of these structures, a decision for the frontend modification will be made.
• Two single event upset (SEU) detection and correction mechanisms. First investigations on SEU-hardened logic will be made with the test chip BeetleSR1.0; an error correction mechanism based on Hamming encoding is under development.

Status reports and further test results will be available at [8].
[Figure 4 traces: data header, DataValid, AnalogOut[0], 128 channels]
Figure 4: Analogue output signal of a Beetle1.1 chip. All 128 channels are multiplexed at 40 MHz on one port. Input signals corresponding to several MIPs have been applied to a single channel and to two groups of adjacent channels of the chip.
References

[1] D. Baumeister et al., "Design of a Readout Chip for LHCb", Proceedings of the 5th Workshop on Electronics for LHC Experiments, CERN/LHCC/99-33.
[2] N. van Bakel et al., "The Beetle Reference Manual", LHCb note.
[3] R. Brenner et al., Nucl. Instr. and Meth. A.
[4] "The I2C-bus and how to use it", Philips Semiconductors.
[5] 3rd RD49 Status Report, "Study of the Radiation Tolerance of ICs for LHC", CERN/LHCC.
[6] F. Faccio et al., "Total Dose and Single Event Effects (SEE) in a 0.25 µm CMOS Technology", CERN/LHCC/98-36.
[7] U. Trunk et al., "Enhanced radiation hardness and faster front ends for the Beetle readout chip", to be published in: Proceedings of the 7th Workshop on Electronics for LHC Experiments.
[8] http://wwwasic.kip.uni-heidelberg.de/lhcb/
[Figure 5 traces: data header, DataValid, 128 channels]
Figure 5: Output signal of the binary pipelined readout path. All 128 channels are multiplexed on one port. The header is encoded with MIP-sized voltage levels; the two logic levels of the binary channels are represented by two fixed MIP-equivalent amplitudes.
[Figure 6 legend: Cload = 3 pF, 13 pF, 25 pF, 32 pF]
Figure 6: Transient response to a delta shaped input signal of one MIP equivalent charge, measured at the Beetle1.1's test channel. Load capacitances of 3 pF, 13 pF, 25 pF and 32 pF have been applied to the preamplifier's input.
[Figure 7 axes: output voltage 0.25 to 0.65 V versus time 0 to 200 ns, for four bias settings:
Vfs=0.5V, Vfp=0V, Ipre=600uA, Isha=80uA, Ibuf=80uA
Vfs=0.5V, Vfp=0V, Ipre=600uA, Isha=120uA, Ibuf=80uA
Vfs=0V, Vfp=0V, Ipre=600uA, Isha=80uA, Ibuf=80uA
Vfs=0V, Vfp=0V, Ipre=600uA, Isha=120uA, Ibuf=80uA]
Figure 7: Frontend output signal with varying bias settings, obtained from a pulse shape scan. The capacitive input load is 3 pF.
Enhanced Radiation Hardness and Faster Front Ends for the Beetle Readout Chip

Niels van Bakel, Jo van den Brand, Hans Verkooijen
NIKHEF Amsterdam

Christian Bauer, Daniel Baumeister, Werner Hofmann, Karl-Tasso Knöpfle, Sven Löchner, Michael Schmelling, Edgar Sexauer*
Max Planck Institute for Nuclear Physics, Heidelberg

Martin Feuerstack-Raible†, Ulrich Trunk‡
University of Heidelberg

Neville Harnew, Nigel Smale
University of Oxford

* now at Dialog Semiconductor GmbH, Kirchheim/Nabern, Germany
† now at Fujitsu Mikroelektronik GmbH, Dreieich-Buchschlag, Germany
‡ email: trunk@kip.uni-heidelberg.de

Abstract

This paper summarizes the recent progress in the development of the 128 channel pipelined readout chip Beetle, which is intended for the silicon vertex detector, the inner tracker, the pile-up veto trigger and the RICH detectors (in case of multianode photomultiplier tube readout) of LHCb. A deficiency has been found in the front end of the Beetle1.0 and 1.1 chip versions; it resulted in the submissions of the BeetleFE1.1 and BeetleFE1.2 test chips, while BeetleSR1.0 implements test circuits to provide future Beetle chips with logic circuits hardened against single event upset (SEU).

Section I motivates the development of new front ends for the Beetle chip and section II summarizes their concepts and construction. Section III reports preliminary results from the BeetleFE1.1 and BeetleFE1.2 chips, while section IV describes the BeetleSR1.0 chip. An outlook on future tests and development of the Beetle chip is given in section V.

I. Introduction

The development of the Beetle readout chip started in late 1998. It implements the basic RD20 architecture [4], augmented with a prompt binary readout path, like it was implemented on the HELIX128 chip [5], and a pipelined binary operation mode. Besides several mm²-sized chips with test structures and components, two complete readout chips (Beetle1.0 and Beetle1.1) have been manufactured in a commercial 0.25 µm CMOS technology. A detailed description of these chips, their architecture and performance can be found in [1], [2], [3].
Beetle1.0, the first complete pipelined readout chip, had to be patched with a focused ion beam to become functional. In turn, the second one, Beetle1.1, included all fixes to correct the errors found on its predecessor. However, a few problems still remained:

1. The peaking time of the front end is too large (tpeak, cf. fig. 1, exceeds 25 ns).
2. The pulse decays too slowly: a sizeable remainder is left 25 ns after the peak, which is too long for the operation of the LHCb vertex detector.
3. The maximum input current, of the order of nA, is too small for the expected occupancies at LHCb.
4. The digital circuits are not robust against SEU.

To overcome these problems, which are primarily related to the actual requirements of LHCb, test chips implementing the necessary circuits were submitted. This allows the test of the circuits' functionality prior to their implementation on a complete readout chip. Furthermore, different approaches to solve the same problem can be evaluated to find the optimum solution.
II. New Front Ends

The front end implemented on Beetle1.0/1.1 was developed with an early version of the 0.25 µm CMOS design kit and submitted on the first test chip, BeetleFE1.0. It consists of a charge sensitive preamplifier, a CR-RC pulse shaper and a buffer. The first two stages use folded cascode amplifier cores, while the buffer is a source follower. Measurements of its characteristics showed that it was considerably slower than expected from simulation. This discrepancy, however, diminished with the evolution of the design kit and nearly vanished with the last version. A further impetus to develop a faster front end arose from the increasing detector capacitances: while comparatively small values were assumed for the strip capacitance of the LHCb vertex detector during the development of the BeetleFE1.0, the current designs predict capacitances of up to several tens of pF for the inner tracker detectors of LHCb. Fig. 1 shows the pulse shape of the Beetle1.1 front end for different input capacitances.
[Figure 1 legend: Cload = 3 pF, 13 pF, 25 pF, 32 pF]
Figure 1: Pulse shapes of the Beetle1.1's test channel at the indicated load capacitances.
To decrease the peaking time and fall time of the shaped signal, the following provisions have been made:

1. Decreased resistance of the preamplifier's folded cascode load branch, to decrease the peaking time of the pulse. This also required an increase of the input transistor's transconductance gm in order to maintain the same open loop gain.
2. Decreased integration time constant of the shaper, in order to decrease the fall time of the pulse. The shaper's amplifier core was in principle not affected by this change.
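The second provision can be checked numerically with a CR-RC response that has distinct differentiation and integration time constants. The 50/25/10 ns values below are illustrative only, not the chip's; the point is that shrinking the integration time constant lowers the residue left 25 ns after the peak.

```python
import math

def shaped(t, tau_diff, tau_int):
    """Unnormalised CR-RC response with separate differentiation and
    integration time constants (tau_diff != tau_int)."""
    return math.exp(-t / tau_diff) - math.exp(-t / tau_int)

def residue_after_peak(tau_diff, tau_int, dt=25e-9, t_max=400e-9, steps=4000):
    """Pulse height dt after the (numerically located) peak, relative to it."""
    ts = [t_max * k / steps for k in range(steps + 1)]
    hs = [shaped(t, tau_diff, tau_int) for t in ts]
    k_peak = max(range(len(hs)), key=hs.__getitem__)
    return shaped(ts[k_peak] + dt, tau_diff, tau_int) / hs[k_peak]

slow = residue_after_peak(50e-9, 25e-9)   # about 0.85
fast = residue_after_peak(50e-9, 10e-9)   # about 0.74
```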
DC input currents showed up as another problem of the Beetle1.1. A thorough investigation of the problem and subsequent simulations revealed that the frontend was able to cope with average input currents only of the order of nA, which is too low for the expected occupancies at LHCb.

The cause for this behaviour is inherent to the concept of the Beetle1.1's front end, depicted in fig. 2: the gate of the NMOS input transistor sits at a potential of around the threshold voltage Vth(NMOS) of the input transistor above Vss. This potential is also the source voltage of the PMOS feedback transistor. In turn, the gate potential of this transistor has to be
[Figure 2 labels: input, Vth, Vss, Vdd, Vfp, output]
Figure 2: Schematic of the front ends implemented on Beetle1.0, Beetle1.1 and BeetleFE1.0. The threshold voltages Vth of the NMOS input transistor and PMOS feedback transistor are indicated by arrows.
below Vth(NMOS) − |Vth(PMOS)| in order to make the feedback transistor conductive. Since the absolute value of the threshold voltage is a bit bigger for a PMOS than for an NMOS transistor, and since Vth is lower for short transistors like the NMOS input FET, the situation is worsened. Nevertheless the circuit reaches a stable operating point, since the feedback transistor is usually operated in the linear sub-threshold region, where the feedback resistance is still of the order of MΩ even when the gate of the feedback transistor is tied to the Vss potential.
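The restriction can be made concrete with two illustrative threshold values (assumed for the sketch, not taken from the chip): with Vth(NMOS) = 0.5 V and |Vth(PMOS)| = 0.6 V, the upper bound on the feedback gate voltage falls below Vss, i.e. the nominally usable Vfp window vanishes.

```python
def vfp_upper_bound(vth_nmos, vth_pmos_abs):
    """Upper bound (relative to Vss) on the feedback transistor's gate
    voltage for it to conduct, in the NMOS-input front end of fig. 2."""
    return vth_nmos - vth_pmos_abs

bound = vfp_upper_bound(0.5, 0.6)   # -0.1 V: below the Vss rail
```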
A first approach to overcome the problem was implemented on the BeetleFE1.1 chip, shown in fig. 3: the length of the feedback transistor was decreased in order to reduce its resistance and threshold voltage. For the BeetleFE1.2 (fig. 4) two different concepts were realised: front ends with PMOS input and feedback transistors, and one channel with an NMOS input and feedback transistor.

In case of the PMOS input and feedback configuration (fig. 5), the threshold voltages point away from the power supply rails and thus do not restrict the range of useful voltages Vfp on the feedback transistor's gate. The biggest disadvantage of this circuit is the considerably lower gm/area ratio of the input transistor. On the BeetleFE1.2 this was partly compensated by the reduction of the channel length and the enclosed waffle geometry of the input transistor.

The solution with NMOS input and feedback transistors, shown in fig. 6, is spoiled by the constraints of radiation hard layout techniques: the enclosed geometry limits the W/L ratio of a single device, which together with the minimum gate width calls for a series of many transistors to form the feedback resistance.
Table 1: Design parameters of the front ends of the BeetleFE1.1 (Set 1) and BeetleFE1.2 (Set 2 and Set 3) test chips.

Set | input transistor  | W     | L     | feedback | shaper feedback
…   | NMOS, rectangular | … µm  | … µm  | PMOS     | …
…   | NMOS, rectangular | … µm  | … µm  | PMOS     | …
…   | PMOS, waffle      | … µm  | … µm  | PMOS     | …
…   | PMOS, waffle      | … µm  | … µm  | PMOS     | …
…   | PMOS, waffle      | … µm  | … µm  | PMOS     | …
…   | PMOS, waffle      | … µm  | … µm  | PMOS     | …
…   | PMOS, waffle      | … µm  | … µm  | PMOS     | …
…   | PMOS, rectangular | … µm  | … µm  | PMOS     | …
…   | PMOS, rectangular | … µm  | … µm  | PMOS     | …
…   | PMOS, waffle      | … µm  | … µm  | PMOS     | …
…   | PMOS, waffle      | … µm  | … µm  | PMOS     | …
…   | NMOS, rectangular | … µm  | … µm  | NMOS     | …
[Figure 3 label: front end with NMOS input]
Figure 3: Layout of the BeetleFE1.1. The new front end channels are indicated.
III. First Results from the New Front Ends

Measurements on the BeetleFE1.1 and BeetleFE1.2 (figs. 8 and 7) showed that one of the design goals, a rise time well below 25 ns, has been reached with both chips. For the BeetleFE1.2 it was also found that the front end could achieve a fast rise time even with input capacitances of several tens of pF. Measurements of the maximum input current as well as noise measurements are still in progress.
[Figure 4 labels: front end with PMOS input; front end with NMOS feedback]
Figure 4: Layout of the BeetleFE1.2. The new front end channels are indicated.
IV. The BeetleSR1.0 Chip

The BeetleSR1.0 chip implements two blocks of registers, each 9 bits wide, while combinatorial logic calculates the parity of these registers, which is available on two groups of pads. Read and write access to these register blocks is accomplished via two independent I2C interfaces. One is implemented in conventional circuitry, while the other one uses triple redundant flip-flops with majority encoding. The block schematic of the chip is shown in fig. 9, while a triple redundant flip-flop with majority encoder is illustrated in fig. 11. This chip will permit to measure SEU rates by means of the register blocks. It also allows the calculation of the SEU suppression arising from the usage of triple redundant flip-flops in state machines.

[PRELIMINARY]
Figure 7: Pulse shapes of the BeetleFE1.2 test chip. The left graph shows pulse shapes from different modifications of the Set 2 front end, the middle one shows the response for different input charges, and the right graph depicts the response for five different input load capacitances.

[Figure 5 labels: input, Vth, Vdd, Vfp, Vss, output]
Figure 5: Schematic of the PMOS front ends implemented on BeetleFE1.2. The threshold voltages Vth of the PMOS input transistor and PMOS feedback transistor are indicated by arrows.
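The majority encoding and the suppression it buys can be sketched directly. The error probability formula assumes independent upsets of the three flip-flops within one correction interval; an upset probability p per flip-flop then leaves a majority error of 3p²(1−p) + p³.

```python
def majority(a, b, c):
    """Majority encoder of a triple redundant flip-flop (bits 0/1)."""
    return (a & b) | (a & c) | (b & c)

def tmr_error_prob(p):
    """Probability that the majority output is wrong when each of the three
    flip-flops is upset independently with probability p."""
    return 3 * p ** 2 * (1 - p) + p ** 3
```

For p = 1%, the voted output is wrong only with probability 2.98 × 10⁻⁴, a suppression by more than a factor of 30.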
V. Future Plans

Beetle1.1 chips will be irradiated up to 10 Mrad with the X-ray irradiation facility of the CERN microelectronics group. They will also be used in a test beam with prototype detectors of the LHCb inner tracker in October; studies with the chip bonded to a detector are under way.

The submission of the final version of the Beetle chip is planned for spring 2002. This chip will implement:

1. A modified frontend with a faster shaping and a higher maximum input charge rate, chosen from the front ends on the BeetleFE1.1 and BeetleFE1.2 chips.
[Figure 6 labels: input, Vth, Vss, Vdd, Vfp, output]
Figure 6: Schematic of the NMOS front ends implemented on BeetleFE1.2. The threshold voltages Vth of the NMOS input transistor and NMOS feedback transistor are indicated by arrows.
2. Two single event upset (SEU) detection and correction mechanisms:
   I. triple redundant flip-flops with majority encoding in state machines and other frequently changed registers, and
   II. an error correction circuit (ECC) based on Hamming encoding for the more static registers.

Status reports and further test results will be available at [6].
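The Hamming-based mechanism can be illustrated with a minimal single-error-correcting code. The (12,8) layout below (four parity bits protecting one byte) is an assumption for the sketch; the chip's actual code parameters are not given here.

```python
_DATA_POS = (3, 5, 6, 7, 9, 10, 11, 12)   # 1-indexed non-power-of-two slots
_PARITY_POS = (1, 2, 4, 8)

def hamming_encode(byte):
    """Encode an 8-bit value into a 12-bit Hamming codeword (list of bits)."""
    bits = [0] * 13                        # index 0 unused
    for i, pos in enumerate(_DATA_POS):
        bits[pos] = (byte >> i) & 1
    for p in _PARITY_POS:
        parity = 0
        for pos in range(1, 13):
            if pos & p and pos != p:
                parity ^= bits[pos]
        bits[p] = parity
    return bits[1:]

def hamming_correct(code):
    """Correct up to one flipped bit and return the decoded 8-bit value."""
    bits = [0] + list(code)
    syndrome = 0
    for p in _PARITY_POS:
        parity = 0
        for pos in range(1, 13):
            if pos & p:
                parity ^= bits[pos]
        if parity:
            syndrome |= p
    if syndrome:                           # syndrome equals the flip position
        bits[syndrome] ^= 1
    byte = 0
    for i, pos in enumerate(_DATA_POS):
        byte |= bits[pos] << i
    return byte
```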
[PRELIMINARY]
Figure 8: Pulse shapes of the BeetleFE1.1 test chip. The left graph shows pulse shapes from different modifications of the Set 1 front end; the right one shows the response for different input charges.
[Figure 9 block diagram: I2C-bus into a STANDARD I2C-Interface and I2C-Decoder driving one Register Block (Reg, 9 bits, Parity), and into an SEU ROBUST I2C-Interface and I2C-Decoder driving the other Register Block (Reg, 9 bits, Parity)]
Figure 9: Block schematic of the BeetleSR1.0 test chip. Two register blocks with parity encoding are controlled via a standard and an SEU robust I2C interface, respectively.

References
[1] D. Baumeister et al., "Design of a Readout Chip for LHCb", Proceedings of the 5th Workshop on Electronics for LHC Experiments, CERN/LHCC/99-33.
[2] D. Baumeister et al., "Performance of the Beetle Readout Chip for LHCb", to be published in: Proceedings of the 7th Workshop on Electronics for LHC Experiments.
[3] N. van Bakel et al., "The Beetle Reference Manual", LHCb note.
[4] R. Brenner et al., Nucl. Instr. and Meth. A.
[5] U. Trunk, "Development and Characterisation of the Radiation Tolerant HELIX128 Readout Chip for the HERA-B Microstrip Detectors", PhD thesis, Heidelberg.
[6] http://wwwasic.kip.uni-heidelberg.de/lhcb/
Figure 10: Layout of the BeetleSR1.0 test chip. The two register blocks are located on the right and left hand side of the chip. The I2C interfaces are the blocks in the centre, the SEU robust one being the larger block.
[Figure 11: three D flip-flops (D/Q, common CK) feeding a majority encoder]
Figure 11: Triple redundant flip-flop with majority encoder, as used in the SEU robust I2C interface of the BeetleSR1.0 test chip.
Development of a High Density Pixel Multichip Module at Fermilab<br />
S. Zimmermann, G. Cardoso, J. Andresen, J.A. Appel, G. Chiodini,<br />
D.C. Christian, B.K. Hall, J. Hoff, S.W. Kwan, A. Mekkaoui, R. Yarema<br />
Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 USA<br />
zimmer@fnal.gov<br />
Abstract<br />
At Fermilab, a pixel detector multichip module is being<br />
developed for the BTeV experiment. The module is composed<br />
of three layers. The lowest layer is formed by the readout<br />
integrated circuits (ICs). The back of the ICs is in thermal<br />
contact with the supporting structure, while the top is flip-chip<br />
bump-bonded to the pixel sensor. A low mass flex-circuit<br />
interconnect is glued on the top of this assembly, and the<br />
readout IC pads are wire-bonded to the circuit. This paper<br />
presents recent results on the development of a multichip<br />
module prototype and summarizes its performance<br />
characteristics.<br />
I. INTRODUCTION<br />
At Fermilab, the BTeV experiment has been approved for<br />
the C-Zero interaction region of the Tevatron [1]. One of the<br />
tracker detectors for this experiment will be a pixel detector<br />
composed of 62 pixel planes of approximately 100×100 mm²<br />
each, assembled perpendicular to the colliding beam and<br />
installed a few millimeters from the beam.<br />
Carbon Fiber Shelves<br />
Beam<br />
Horizontal Shingles<br />
Vertical Shingles<br />
Figure 1: Pixel Station<br />
Cooling Pipes<br />
Work supported by the U.S. Department of Energy under<br />
contract No. DE-AC02-76CH03000. Fermilab Conf-01/247-E<br />
The planes in the pixel detector are formed by sets of<br />
different lengths of pixel-hybrid modules, each composed of a<br />
single active-area sensor tile and of one row of readout ICs.<br />
The modules on opposite faces of the same pixel station are<br />
assembled perpendicularly in relation to each other (see<br />
Figure 1).<br />
The BTeV pixel detector module is based on a design<br />
relying on a hybrid approach. With this approach, the readout<br />
chip and the sensor array are developed separately and the<br />
detector is constructed by flip-chip mating the two together.<br />
This approach offers maximum flexibility in the development<br />
process, the choice of fabrication technologies, and the choice<br />
of sensor material.<br />
The multichip modules must conform to special<br />
requirements dictated by BTeV. The pixel detector will be<br />
inside a strong magnetic field (1.6 Tesla in the central field),<br />
so the flex circuit and the adhesives cannot be ferromagnetic.<br />
The pixel detector will also be placed inside a high vacuum<br />
environment, so the multichip module components cannot<br />
outgas. The radiation rates (around 3 Mrad per year) and the<br />
operating temperature (−5 °C) also impose severe constraints<br />
on the pixel multichip module packaging design.<br />
The pixel detector will be employed for on-line track<br />
finding for the lowest level trigger system and, therefore, the<br />
pixel readout ICs will have to read out all detected hits. This<br />
requirement imposes a severe constraint on the design of the<br />
readout IC, the hybridized module, and the data transmission<br />
to the data acquisition system.<br />
Several factors impact the amount of data that each IC<br />
needs to transfer: the readout array size, the distance from the<br />
beam, the number of bits of pulse-height analog-to-digital<br />
converter (ADC) information in the data format, etc. Presently,<br />
the most likely dimensions of the pixel chip array are 128 rows<br />
by 22 columns, with 3 bits of ADC information.<br />
II. PIXEL MODULE READOUT<br />
The pixel module readout must allow the pixel detector to<br />
be used in the lowest level experiment trigger. Our present<br />
assumptions are based on simulations that describe the data<br />
pattern inside the pixel detector [3]. The parameters used for
the simulations are: luminosity of 2×10³² cm⁻²s⁻¹ (corresponding<br />
to an average of 2 interactions per bunch crossing), pixel size<br />
of 400×50 µm², threshold of 2000 e⁻ and a magnetic field of<br />
1.6 Tesla.<br />
Module 1 → 11 13 17 17 20 18 16 13 8<br />
Module 2 → 11 18 26 31 39 33 25 18 12<br />
Module 3 → 16 20 37 61 76 59 39 26 18<br />
Module 4 → 17 35 63 141 234 130 65 36 16<br />
Module 5 → 23 35 74 234 •<br />
Beam<br />
Figure 2: Average Bit Data Rate at Middle Station, in Mbit/s<br />
Figure 2 shows a sketch of the 40 chips that may compose<br />
a pixel half plane and the data rate for the station in the middle<br />
of 31 stations. The beam passes at the place represented by the<br />
black dot. These numbers assume the 23-bit data format<br />
shown in Figure 3. Table 1 presents the required bandwidth<br />
per module. From this table we see that each half-pixel plane<br />
requires a bandwidth of approximately 1.8 Gbit/s.<br />
22 0<br />
ADC Beam Crossing Number Column Row<br />
Figure 3: Pixel Module Data Format (23 bits)<br />
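The 23-bit word can be packed and unpacked in a few lines. Only the field order (ADC, beam crossing number, column, row) and the 23-bit total come from Figure 3; the individual widths used below (3, 8, 5 and 7 bits) are inferred from the 3-bit ADC, the 128-row by 22-column array and the remaining bits, and are an assumption.

```python
ADC_BITS, BCO_BITS, COL_BITS, ROW_BITS = 3, 8, 5, 7   # 3+8+5+7 = 23 bits

def pack_hit(adc, bco, col, row):
    """Pack one pixel hit into a 23-bit word (ADC | crossing | column | row)."""
    assert 0 <= adc < 8 and 0 <= bco < 256 and 0 <= col < 22 and 0 <= row < 128
    word = adc
    word = (word << BCO_BITS) | bco
    word = (word << COL_BITS) | col
    word = (word << ROW_BITS) | row
    return word

def unpack_hit(word):
    """Split a 23-bit word back into its (adc, bco, col, row) fields."""
    row = word & ((1 << ROW_BITS) - 1)
    word >>= ROW_BITS
    col = word & ((1 << COL_BITS) - 1)
    word >>= COL_BITS
    bco = word & ((1 << BCO_BITS) - 1)
    adc = word >> BCO_BITS
    return adc, bco, col, row
```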
Table 1: Half Plane Required Bandwidth, in Mbit/s<br />
Req. Bandwidth<br />
Module 1 133<br />
Module 2 213<br />
Module 3 352<br />
Module 4 737<br />
Module 5 366<br />
Total 1801<br />
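Table 1 can be cross-checked in a few lines; the hit rate derived from the 23-bit word is an illustration computed here, not a number quoted in the paper.

```python
# Required bandwidth per module (Table 1), in Mbit/s.
module_bandwidth = {1: 133, 2: 213, 3: 352, 4: 737, 5: 366}

total_mbit_s = sum(module_bandwidth.values())   # 1801 Mbit/s per half plane

# With 23 bits per hit this corresponds to roughly 78 million hits per second.
hits_per_second = total_mbit_s * 1e6 / 23
```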
We have used simulations of the readout architecture with a<br />
clock of 35 MHz. This frequency can support a readout<br />
efficiency of approximately 98% when considering three times<br />
the nominal hit rate for the readout ICs closest to the beam.<br />
Efficiency is lost either due to a pixel being hit more than once<br />
before the first hit can be read out, or due to bottlenecks in the<br />
core circuitry.<br />
A. Proposed Readout Architecture<br />
The readout architecture is a direct consequence of the<br />
BTeV detector layout. The BTeV detector covers the forward<br />
direction, 10-300 mrad, with respect to both colliding beams.<br />
Hence, the volume outside this angular range is outside the<br />
active area and can be used to house heavy readout and<br />
control cables without interfering with the experiment. The<br />
architecture takes advantage of this consideration.<br />
The Data Combiner Board (DCB) located approximately<br />
10 meters away from the detector remotely controls the pixel<br />
modules. All the controls, clocks and data are transmitted<br />
between the pixel module and the DCB by differential signals<br />
employing the Low-Voltage Differential Signaling (LVDS)<br />
standard. Common clocks and control signals are sent to each<br />
module and then bussed to each readout IC. All data signals<br />
are point to point connected to the DCB. Figure 4 shows a<br />
sketch of the proposed readout architecture. For more details<br />
refer to [6].<br />
This readout technique requires the design of just one rad-hard<br />
chip: the pixel readout IC. The point-to-point data links<br />
minimize the risk of an entire module failure due to a single<br />
chip failure and eliminate the need for a chip ID to be<br />
embedded in the data stream. Simulations have shown that<br />
this readout scheme results in readout efficiencies that are<br />
sufficient for the BTeV experiment.<br />
Figure 4: Pixel Module Point-to-Point Connection<br />
III. PIXEL MODULE PROTOTYPE<br />
Figure 5 shows a sketch of the pixel module prototype.<br />
This design uses the FPIX1 version of the Fermilab pixel<br />
readout IC [3].<br />
Figure 5: Sketch of the Pixel Multichip Module<br />
The pixel module is composed of three layers, as depicted<br />
in Figure 6. The pixel readout chips form the bottom layer.<br />
The back of the chips is in thermal contact with the station<br />
supporting structure, while the other side is flip-chip<br />
bump-bonded to the silicon pixel sensor. The clock, control, and<br />
power pad interfaces of FPIX1 extend beyond the edge of the<br />
sensor [2].<br />
Figure 6: Sketch of the Pixel Multichip Module “Stack”<br />
The interconnect circuitry (flex circuit) is placed on the<br />
top of this assembly and the FPIX1 pad interface is<br />
wire-bonded to the flex circuit. The circuit then extends to one end<br />
of the module, where low-profile connectors interface the<br />
module to the data acquisition system. The large number of<br />
signals in this design imposes space constraints and requires<br />
aggressive design rules, such as a 35 µm trace width and a<br />
trace-to-trace clearance of 35 µm.<br />
This packaging requires a flex circuit with four layers of<br />
copper traces (as sketched in Figure 7). The data, control and<br />
clock signals use the two top layers, power uses the third layer<br />
and ground and sensor high voltage bias use the bottom layer.<br />
The flex circuit has two power traces, one analog and one<br />
digital. These traces are wide enough to guarantee that the<br />
voltage drop from chip to chip is within the FPIX1 ±5%<br />
tolerance. The decoupling capacitors in the flex circuit are<br />
close to the pixel chips. The trace lengths and vias that<br />
connect the capacitors to the chips are minimized to reduce<br />
the interconnection inductance. A picture of the flex circuit<br />
made by CERN is shown in Figure 8.<br />
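The trace-width requirement can be sketched as a worst-case voltage-drop calculation at the farthest chip on a daisy-chained power trace. All numbers below (chip current, trace geometry, supply voltage) are hypothetical placeholders for illustration, not the FPIX1 design values:<br />

```python
RHO_CU = 1.7e-8  # ohm·m, resistivity of copper

def bus_voltage_drop(n_chips, i_chip, seg_len, width, thickness):
    """Worst-case (farthest-chip) drop along a power trace feeding
    n_chips in a row, each drawing i_chip amperes, with seg_len metres
    of trace between neighbouring chips."""
    r_seg = RHO_CU * seg_len / (width * thickness)  # resistance per segment
    # segment k (counted from the connector) carries the current of all
    # chips beyond it, so the drops accumulate toward the last chip
    return sum(r_seg * (n_chips - k) * i_chip for k in range(n_chips))

# Hypothetical 5-chip module: 100 mA per chip, 8 mm pitch,
# 2 mm wide x 35 um thick copper trace, assumed 2.5 V supply.
drop = bus_voltage_drop(n_chips=5, i_chip=0.1, seg_len=0.008,
                        width=2e-3, thickness=35e-6)
ok = drop < 0.05 * 2.5   # within the ±5% tolerance of the assumed supply
```

With these placeholder numbers the drop is a few millivolts, comfortably inside a ±5% budget; the real design check uses the actual FPIX1 currents and flex geometry.<br />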
Figure 7: Sketch of Flex Circuit Cross Section<br />
Figure 8: Flex Circuit Picture<br />
To minimize coupling between digital and analog<br />
elements, signals are grouped together into two different sets.<br />
The digital and analog traces are laid out on top of the digital<br />
and analog power supply traces, respectively. Furthermore, a<br />
ground trace runs between the analog set and the digital set of<br />
traces.<br />
A. High Voltage Bias<br />
The pixel sensor is biased with up to 1000 VDC through<br />
the flex circuit. The coupling between the digital traces and<br />
the bias trace has to be minimized to improve the sensor noise<br />
performance. To achieve this, the high voltage trace runs in<br />
the fourth metal layer (ground plane, see Figure 7) and below<br />
the analog power supply trace. The high voltage is electrically<br />
connected to the sensor bias window through gold epoxy. An<br />
insulator layer on the bottom of the flex circuit isolates the<br />
ground in the fourth metal layer of the flex circuit from the<br />
high voltage of the pixel sensor.<br />
B. Assembly<br />
The interface adhesive between the flex circuit and the<br />
pixel sensor has to compensate for mechanical stress due to<br />
the coefficient of thermal expansion mismatches between the<br />
flex circuit and the silicon pixel sensor. Two alternatives are<br />
being pursued. One is the 3M thermally conductive tape [7].<br />
The other is the silicone-based adhesive used in [8].<br />
The present pixel module prototypes were assembled using<br />
the 3M tape with a thickness of 0.05 mm. Before mounting the<br />
flex circuit onto the sensor, a set of dummies with bump-bond<br />
structures was used to evaluate the assembly process. This<br />
assembly process led to no noticeable change in the resistance<br />
of the bumps. Figure 9 shows a picture of the dummy.<br />
Figure 9: Dummy Bump Bond Structure<br />
IV. PIXEL MODULE EXPERIMENTAL RESULTS<br />
Two pixel module prototypes were characterized. One of<br />
these modules is a single readout IC (FPIX1) bump bonded to<br />
a SINTEF sensor (Figure 10) using Indium bumps. In the<br />
second pixel module the readout IC is not bump bonded to a<br />
sensor (Figure 11). In this prototype the flex interconnect is<br />
located on the top of the sensor (as in the baseline design).<br />
The pixel modules have been characterized for noise and<br />
threshold dispersion. These characteristics were measured by<br />
injecting charge in the analog front end of the readout chip<br />
with a pulse generator and reading out the hit data through a<br />
PCI based test stand. The results for different thresholds are<br />
summarized in Table 2.<br />
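Threshold and noise figures such as those in Table 2 are typically extracted from charge-injection scans by locating the "S-curve" of hit efficiency versus injected charge. The following sketch, with assumed threshold and noise values, illustrates the idea; the actual test-stand procedure may differ:<br />

```python
import math, random

def s_curve(q, threshold, noise):
    """Probability that an injected charge q fires the discriminator,
    for Gaussian noise of RMS `noise` electrons."""
    return 0.5 * (1.0 + math.erf((q - threshold) / (math.sqrt(2) * noise)))

def scan_pixel(threshold, noise, charges, n_inj=200, seed=7):
    """Simulated threshold scan: n_inj test pulses per charge point."""
    rng = random.Random(seed)
    return [sum(rng.random() < s_curve(q, threshold, noise)
                for _ in range(n_inj)) / n_inj for q in charges]

def extract(charges, effs):
    """Threshold = 50% crossing; noise from half the 16%-84% width."""
    def crossing(level):
        for (q0, e0), (q1, e1) in zip(zip(charges, effs),
                                      zip(charges[1:], effs[1:])):
            if e0 <= level < e1:
                return q0 + (level - e0) * (q1 - q0) / (e1 - e0)
    q16, q50, q84 = crossing(0.16), crossing(0.50), crossing(0.84)
    return q50, (q84 - q16) / 2.0

charges = list(range(6500, 9200, 50))        # injected charge, electrons
effs = scan_pixel(threshold=7820, noise=94, charges=charges)
th, sigma = extract(charges, effs)           # recovers ~7820 e- and ~94 e-
```

Repeating the extraction over all pixels gives the µ and σ of the threshold and noise distributions reported in Table 2.<br />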
Figure 10: Pixel Module with SINTEF Sensor<br />
Figure 11: Pixel Module without Sensor<br />
Table 2: Performance of the Pixel Prototype Modules (in e⁻)<br />
            Without Sensor          |            With Sensor<br />
 µ_Th   σ_Th   µ_Noise   σ_Noise   |   µ_Th   σ_Th   µ_Noise   σ_Noise<br />
 7365    356      75        7      |   7820    408      94       7.5<br />
 6394    332      78       12      |   6529    386     111      11<br />
 5455    388      79       11      |   5500    377     113      13<br />
 4448    378      78       11      |   4410    380     107      15<br />
 3513    384      79       12      |   3338    390     116      20<br />
 2556    375      77       13      |   2289    391     117      21<br />
The comparison of these results with previous results<br />
(single readout IC without the flex circuit on top) shows no<br />
noticeable degradation in the electrical performance of the<br />
pixel module [4]. Figure 12 shows the hit map of the pixel<br />
module with sensor using a radioactive source (⁹⁰Sr),<br />
confirming that the bump bonds remain functional.<br />
Figure 12: Pixel Module Hit Map<br />
V. RESULTS OF THE HYBRIDIZATION TO<br />
PIXEL SENSORS<br />
The hybridization approach pursued offers maximum<br />
flexibility. However, it requires the availability of highly<br />
reliable, reasonably low cost fine-pitch flip-chip mating<br />
technology. We have tested three bump bonding technologies:<br />
indium, fluxed solder, and fluxless solder. Real sensors and<br />
readout chips were indium bumped at both the single-chip and<br />
the wafer level by Boeing N.A. Inc. (Anaheim, CA) and<br />
Advanced Interconnect Technology Ltd. (Hong Kong) with<br />
satisfactory yield and performance. For more details refer to<br />
[5].<br />
VI. CONCLUSIONS<br />
We have described the baseline pixel multichip module<br />
designed to handle the data rate required for the BTeV<br />
experiment at Fermilab. The assembly process of a single chip<br />
pixel module prototype was successful. A 5-chip pixel module<br />
prototype (Figure 13) will be assembled using the same<br />
process. The characterization of the two single-chip modules<br />
showed that there is no degradation in the electrical<br />
performance of the pixel module when compared with<br />
previous prototypes.
Figure 13: 5-chip Pixel Module with Indium Bumps<br />
VII. ACKNOWLEDGEMENTS<br />
The authors would like to thank CERN, and in particular<br />
Rui de Oliveira, for producing the flex circuit for this<br />
prototype.<br />
VIII. REFERENCES<br />
1. Kulyavtsev, A., et al., BTeV Proposal, Fermilab, May<br />
2000.<br />
2. Cardoso, G., et al., “Development of a high density pixel<br />
multichip module at Fermilab”, 51st ECTC, Orlando,<br />
Florida, May 28-31, 2001.<br />
3. Christian, D.C., et al., “Development of a pixel readout<br />
chip for BTeV,” Nucl. Instr. and Meth. A 435, pp.144-152,<br />
1999.<br />
4. Mekkaoui, A., et al., “FPIX2: an advanced pixel readout<br />
chip,” 5th Workshop on Elect. LHC Exp., Snowmass, pp.<br />
98-102, Sept. 1999.<br />
5. Cihangir, S., et al., “Study of thermal cycling and radiation<br />
effects on Indium and fluxless solder bump-bonding<br />
devices”, to be presented at the 7th Workshop on Elect.<br />
LHC Exp., Stockholm, September 2001.<br />
6. Hall, B., et al., “Development of a Readout Technique for<br />
the High Data Rate BTeV Pixel Detector at Fermilab”, to<br />
be presented at the 2001 Nuclear Science Symposium and<br />
Medical Imaging Conference, San Diego, November 2001.<br />
7. Thermally Conductive Adhesive Transfer Tapes, Technical<br />
datasheet, 3M. April 1999.<br />
8. Abt, I., et al., “Gluing Silicon with Silicone”, Nucl. Instr.<br />
and Meth. A 411, pp. 191-196, 1998.
Radiation tolerance studies of BTeV pixel readout chip prototypes.<br />
G. Chiodini, J.A. Appel, G. Cardoso, D.C. Christian, M.R. Coluccia, J. Hoff, S.W. Kwan,<br />
A. Mekkaoui, R. Yarema, and S. Zimmermann<br />
Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510, USA¹<br />
email address of the corresponding author: chiodini@fnal.gov<br />
Abstract<br />
We report on several irradiation studies performed on<br />
BTeV preFPIX2 pixel readout chip prototypes exposed to a<br />
200 MeV proton beam at the Indiana University Cyclotron<br />
Facility. The preFPIX2 pixel readout chip has been<br />
implemented in standard 0.25 micron CMOS technology<br />
following radiation tolerant design rules. The tests confirmed<br />
the radiation tolerance of the chip design to a proton total dose<br />
of 26 Mrad. In addition, non-destructive radiation-induced<br />
single event upsets have been observed in on-chip static<br />
registers and the single-bit upset cross section has been<br />
measured.<br />
I. INTRODUCTION<br />
The BTeV experiment plans to run at the Tevatron collider<br />
in 2006 [1]. It is designed to cover the “forward” region of the<br />
proton-antiproton interaction point at a luminosity of<br />
2·10³² cm⁻²s⁻¹. The experiment will employ a silicon pixel<br />
vertex detector to provide high precision space points for an<br />
on-line lowest level trigger based on track impact parameters.<br />
The “hottest” chips, located at 6 mm from the beam, will<br />
experience a fluence of about 10¹⁴ cm⁻²y⁻¹. This is similar to<br />
the high radiation environments of ATLAS and CMS at the LHC.<br />
A pixel detector readout chip (FPIX) has been developed<br />
at Fermilab to meet the requirements of future Tevatron<br />
collider experiments. The preFPIX2 represents the most<br />
advanced iteration of very successful chip prototypes [2] and<br />
has been realized in standard deep-submicron CMOS<br />
technology. As demonstrated by the RD49 collaboration at<br />
CERN, the above process can be made very radiation tolerant<br />
following specific design rules [3]. The final FPIX will be<br />
fabricated using radiation tolerant 0.25 micron CMOS process<br />
with enclosed geometry NMOS transistors and guard rings.<br />
We show results of radiation tests performed with<br />
preFPIX2 chip prototypes including both total dose and single<br />
event effects. The tests have been performed exposing the<br />
chip to 200 MeV protons at the IUCF. The comparison of the<br />
chip performance before and after exposure shows the high<br />
radiation tolerance of the design to protons up to about 26<br />
Mrad total dose. Last year, exposures of preFPIXT chips to<br />
radiation from a Cobalt-60 source at Argonne National<br />
Laboratory verified the high tolerance to gamma radiation up<br />
to about 33 Mrad total dose [4].<br />
Total dose effects are not the only concern for reliable<br />
operation of the detector. Ionising radiation can induce single<br />
event upset (SEU) effects, such as unwanted logic state transitions<br />
in digital devices, corrupting stored data.<br />
The single event upsets just described do not permanently<br />
alter the chip behaviour, but they could result in data loss,<br />
shifts of the nominal operating conditions, and loss of chip<br />
control. If the single event upset rate is particularly high, it<br />
could be mitigated by circuit hardening techniques. If it is not<br />
high, the upset rate could be tolerated simply by slow<br />
periodic downloading of data and, in the worst case, full<br />
system resetting. During the irradiation, we set up tests in order to<br />
observe the occurrence of single event upsets in the preFPIX2<br />
registers and we measured the corresponding single bit upset<br />
cross section.<br />
II. THE RADIATION TOLERANT FPIX CHIP<br />
In order to satisfy the needs of BTeV, the FPIX pixel<br />
readout chip must provide “very clean” track crossing points<br />
near the interaction region for every 132 ns beam crossing.<br />
This requires a low noise front-end, an unusually high output<br />
bandwidth, and radiation-tolerant technology.<br />
A. The preFPIX2I and preFPIXTb chip<br />
prototypes<br />
The road to the desired performances has been paved by<br />
fabricating preFPIX2 chip prototypes in deep-submicron<br />
technology from two vendors. The preFPIX2I chip,<br />
containing 16 columns with 32 rows of pixel cells and<br />
complete core readout architecture, has been manufactured<br />
through CERN. The preFPIX2Tb chip contains, in addition to<br />
the preFPIX2I chip features, a new programming interface<br />
and digital-to-analog converters. It has been manufactured by<br />
the Taiwan Semiconductor Manufacturing Company. Based on<br />
test results, some of them reported here, we intend to submit a<br />
full-size BTeV pixel readout chip before the end of the year<br />
2001. That chip will include the final 50 micron by 400<br />
micron pixel cells and a high-speed output data serializer.<br />
The analog-front end [4] and the core architecture [5] of<br />
the pixel readout chips fabricated in deep-submicron CMOS<br />
technology have been described elsewhere. In this paper we<br />
briefly describe the additional features of the preFPIX2Tb<br />
chip because of their relevance in the single event upset tests<br />
reported.<br />
¹ Work supported by the U.S. Department of Energy under contract No. DE-AC02-76CH03000. Fermilab Conf-01/214-E.
B. Registers in preFPIX2Tb readout chip<br />
The programming interface permits download of mask and<br />
charge-injection registers and digital-to-analog (DAC)<br />
registers. These registers control features of the chip and<br />
minimize the number of connections between the chip and the<br />
outside world.<br />
The mask and charge-injection registers consist of small-size,<br />
daisy-chained flip-flops (FF’s) and are implemented in<br />
each pixel cell. A high logic level stored in one of the mask<br />
FF’s disables the corresponding cell. This is meant to turn off<br />
noisy cells. Analogously, a high logic level stored in one of<br />
the charge-injection FF’s enables the cell to receive at the<br />
input an analogue pulse for calibration purposes. Thus, there<br />
are two independent long registers, which run in a serpentine<br />
path through the chip. In the preFPIX2Tb periphery, there are 14<br />
DAC registers, each one 8 bits long. The stored<br />
digital value is translated into an analogue voltage or<br />
current to set bias voltages, bias currents and<br />
discriminator thresholds.<br />
The FF’s for the DAC registers are larger than the FF’s<br />
for the shift-registers. In fact, the DAC FF’s are more<br />
complex and use larger NFET devices. The reason for<br />
this choice is the high reliability required of the DAC<br />
registers, which regulate the operational point of the cells.<br />
III. EXPERIMENTAL SETUP<br />
C. Irradiation facility at IUCF<br />
The proton irradiation tests took place at the Indiana<br />
University Cyclotron Facility where a proton beam line of 200<br />
MeV kinetic energy is delivered to users. The beam profile<br />
has been measured by exposing a sensitive film. The beam<br />
spot, defined by the circular area where the flux is not less<br />
than 90% of the central value, had a diameter of about 1.5 cm,<br />
comfortably larger than the chip size (the largest chip is the<br />
preFPIX2Tb, which is 4.3 mm wide and 7.2 mm long). Before<br />
the exposure, the absolute fluence was measured by a Faraday<br />
cup; during the exposure, by a Secondary Electron Emission<br />
Monitor. The cyclotron has a duty cycle factor of 0.7% with a<br />
repetition rate of about 17 MHz, and most of the tests were<br />
done with a flux of about 2·10¹⁰ protons cm⁻²s⁻¹.<br />
The irradiation was done in air at room temperature, and<br />
no low energy particle or neutron filters were used. The<br />
exposures with multiple boards were done placing the boards<br />
about 2 cm behind each other and with the chips facing the<br />
beam. Mechanically, the boards were kept in position by an<br />
open aluminium frame. The beam was centred on the chips.<br />
The physical position of the frame was monitored constantly<br />
by a video camera to ensure that no movements occurred<br />
during exposure.<br />
We irradiated 4 boards with preFPIX2I chips to 26 Mrad<br />
(December 2000), one board with preFPIX2Tb to 14 Mrad<br />
(April 2001), and recently 4 boards with preFPIX2Tb to 29<br />
Mrad (August 2001). One of the boards with preFPIX2Tb<br />
chips on it was irradiated twice, collecting a 43 Mrad total dose.<br />
Due to the alignment precision and measurement technique<br />
employed, the systematic error on the integrated fluence is<br />
believed to be less than 10%.<br />
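The quoted doses can be roughly cross-checked against the integrated fluences, assuming a stopping power of about 4.8 MeV·cm²/g for 200 MeV protons in silicon (an assumed value for illustration, not taken from this paper):<br />

```python
MEV_PER_G_TO_RAD = 1.602e-8   # 1 MeV/g deposited = 1.602e-8 rad

def dose_mrad(fluence_cm2, dedx_mev_cm2_g=4.8):
    """Total ionising dose in Mrad for a given proton fluence, using an
    assumed mass stopping power dE/dx (MeV cm^2/g) in silicon."""
    return fluence_cm2 * dedx_mev_cm2_g * MEV_PER_G_TO_RAD / 1e6

# The August 2001 integrated fluence of 3.65e14 cm^-2 (Table 1) gives
# roughly 28 Mrad with this assumed dE/dx, the scale of the quoted 29 Mrad.
d = dose_mrad(3.65e14)
```

The agreement is at the 10% level, consistent with the quoted systematic uncertainty on the fluence.<br />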
D. Hardware and software<br />
Each chip under test was wire-bonded to a printed circuit<br />
board in such a way that it could be properly biased,<br />
controlled and read out by a DAQ system. The DAQ system<br />
was based on a PCI card designed at Fermilab (PCI Test<br />
Adapter card) plugged in a PCIbus extender and controlled by<br />
a laptop PC. The PTA card generated digital signals to control<br />
and read back the readout chips. The custom software controlling the<br />
PCI card I/O busses was written in C. The<br />
PCI card I/O busses were buffered by LVDS differential<br />
driver-receiver cards near the PCI bus extender, located in<br />
the counting room. The differential card drove a 100-foot<br />
twisted-pair cable followed by another LVDS differential<br />
driver-receiver card, which was finally connected by a<br />
10-foot flat cable to the devices under test. All the DAQ<br />
electronics were well behind thick concrete walls, protecting<br />
the apparatus from being influenced by the radiation<br />
background from the cyclotron and from activated material.<br />
IV. ANALYSIS AND RESULTS<br />
E. Performed tests<br />
1) Bias currents monitor<br />
During the irradiation tests, the analogue and digital<br />
currents were continuously monitored by a GPIB card. The<br />
analogue current decreased slightly and the digital currents<br />
increased slightly during the proton exposure.<br />
2) Noise and threshold dispersion<br />
The noise and the discriminator threshold of each<br />
individual cell were measured before and after the irradiation<br />
in exactly the same bias conditions for the four preFPIX2I<br />
chips². Every cell works after irradiation, with a noise about<br />
10% lower and a decrease of about 20% in the threshold<br />
dispersion among cells. Figures 1 and 2 show the noise and<br />
threshold distributions of a preFPIX2I chip irradiated with a<br />
proton dose of 26 Mrad.<br />
3) Single Event Upsets (SEU)<br />
In our tests, a great deal of attention was focused on<br />
measuring radiation induced digital soft errors. We<br />
concentrated our effort on the preFPIX2Tb registers storing<br />
the initialisation parameters, because they have a large<br />
number of bits and the testing procedure is easy to prepare.<br />
The results obtained allow prediction of the performance of<br />
other parts of the chip potentially affected by the same<br />
phenomena.<br />
² The results for the four preFPIX2Tb chips will be<br />
available in early October ’01.
Figure 1: Measured amplifier noise in the 576 cells of preFPIX2I<br />
before and after 26 Mrad of 200 MeV proton irradiation.<br />
Figure 2: Measured discriminator threshold in the 576 cells of<br />
preFPIX2I before and after 26 Mrad of 200 MeV proton irradiation.<br />
The single event upset tests performed are very similar to<br />
the ones reported in reference [6]. The SEU measurements<br />
consisted of detecting single bit errors in the values stored in<br />
the registers. The testing procedure consisted of repeatedly<br />
downloading all the registers and reading back the stored<br />
values after one minute. The download and read-back phases<br />
took about 3 seconds. The download of the parameters was<br />
done with a pattern with half of the stored bits having a<br />
logical value 0 and the other half having a logical value 1<br />
(except in one case, see Footnote 3). For the shift-registers,<br />
the patterns were randomly generated at every iteration loop.<br />
For the DAC registers, the patterns were kept constant. A<br />
mismatch between the read-back value and the download<br />
value is interpreted as a single event upset due to the proton<br />
irradiation. No errors were observed in the system with the<br />
beam off and running for 10 hours.<br />
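The mismatch counting can be sketched as follows (the actual DAQ software was custom C code; this is only an illustration of the comparison logic, with artificially injected bit flips in place of real upsets):<br />

```python
import random

def count_upsets(written, read_back):
    """Classify bit mismatches between the downloaded and read-back
    register contents as 0->1 or 1->0 single event upsets."""
    up01 = sum(1 for w, r in zip(written, read_back) if w == 0 and r == 1)
    up10 = sum(1 for w, r in zip(written, read_back) if w == 1 and r == 0)
    return up01, up10

# Illustrative run: a random pattern of the 1152 shift-register bits
# with three artificially injected flips standing in for SEUs.
rng = random.Random(0)
pattern = [rng.randint(0, 1) for _ in range(1152)]
readback = list(pattern)
for pos in (10, 500, 900):       # injected upsets at arbitrary positions
    readback[pos] ^= 1
u01, u10 = count_upsets(pattern, readback)
total = u01 + u10                # == 3 injected flips recovered
```

Tallying the two categories separately over all iterations yields the `[0]` and `[1]` breakdown reported in Table 1.<br />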
In a specific test, the mask register of one board was<br />
operated in clocked mode with a clock frequency of 380 kHz.<br />
The low clock frequency value was due to our DAQ<br />
limitation. In this test, the mask register was downloaded with<br />
a logical level 1 in each flip-flop, in order to increase the<br />
statistics in view of the fact that a stored logical level 1 is<br />
easier to upset with respect to a logical level 0 (see results).<br />
After the initialisation, a continuous read cycle was performed<br />
and stopped every time a logical level 0 was detected.<br />
We collected 14 errors for an effective integrated fluence of<br />
5.8·10¹³ protons cm⁻².<br />
A summary of the total single bit errors detected in the<br />
preFPIX2Tb readout chips, together with other relevant<br />
quantities, is shown in Table 1. The value in square brackets<br />
represents the initial stored logical level of the upset bit. One<br />
of the boards (indicated as board 4 in Table 1) was placed not<br />
orthogonal to the beam like the others, but at 45 degrees, to<br />
explore a possible dependence of the error rate on the beam<br />
incident angle. The number of single bit upsets, for an equal<br />
amount of total dose, is statistically consistent among the<br />
various chips. In addition, the data do not show any<br />
statistically significant difference in the error rate between the<br />
tilted board and the other ones.<br />
Table 1: Total single bit errors in preFPIX2Tb registers.<br />
Board | Integrated Fluence (cm⁻²) | Errors in shift-regs (1152 bit) | Errors in DAC regs (112 bit)<br />
  1   |        2.33·10¹⁴          |       53 = 18[0] + 35[1]        |   10 = 8[0] + 2[1]³<br />
  1   |        3.65·10¹⁴          |       80 = 23[0] + 57[1]        |   20 = 8[0] + 12[1]<br />
  2   |        3.65·10¹⁴          |       74 = 22[0] + 52[1]        |   19 = 9[0] + 10[1]<br />
  3   |        3.65·10¹⁴          |       86 = 27[0] + 59[1]        |   19 = 8[0] + 11[1]<br />
  4   |        3.65·10¹⁴          |       77 = 14[0] + 63[1]        |   31 = 19[0] + 12[1]<br />
Table 2: Single bit upset cross section in preFPIX2Tb registers.<br />
Flip-flop         | Mode              | Cross section (10⁻¹⁶ cm²)<br />
Shift-regs 0 → 1  | Un-clocked        | 1.0 ± 0.1<br />
Shift-regs 1 → 0  | Un-clocked        | 2.7 ± 0.2<br />
Shift-regs 1 → 0  | Clocked (380 kHz) | 4.2 ± 1.2<br />
DAC regs 1 → 0    | Un-clocked        | 5.5 ± 0.6<br />
It is common practice to express the error rate of a register as<br />
a single-bit upset cross section, defined as the number of<br />
errors per bit per unit of integrated fluence. The single-bit<br />
upset cross section has been computed for the shift-registers<br />
and for the DAC registers. The results are shown in Table 2.<br />
Only the statistical error on the cross section has been<br />
considered. For the shift-registers, the cross section has been<br />
computed separately for the radiation induced transition from<br />
0 to 1 and from 1 to 0 because the data have enough precision<br />
to show the existence of an asymmetry.<br />
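As a worked example of this definition, the clocked-mode measurement above (14 errors for 5.8·10¹³ protons cm⁻², assuming the mask register holds 576 bits, half of the 1152 shift-register bits of Table 1) comes out close to the clocked-mode entry of Table 2:<br />

```python
import math

def seu_cross_section(n_errors, n_bits, fluence_cm2):
    """Single-bit upset cross section (cm^2) with its Poisson
    statistical error: sigma = N / (bits * fluence)."""
    sigma = n_errors / (n_bits * fluence_cm2)
    return sigma, sigma / math.sqrt(n_errors)

# 14 upsets, 576 mask-register bits, 5.8e13 protons/cm^2
# -> roughly (4.2 +/- 1.1)e-16 cm^2, matching Table 2 within rounding.
xs, err = seu_cross_section(14, 576, 5.8e13)
```

The simple Poisson estimate gives a statistical error marginally smaller than the quoted ±1.2, consistent within rounding.<br />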
³ The observed asymmetry in this case is due to the unequal<br />
numbers of zeros (82) and ones (30) downloaded into the<br />
DAC registers.
The high beam flux used during the irradiation raised<br />
some concern about a possible saturation effect in the error rate.<br />
To study this, we collected some data at a flux of about<br />
4·10⁹ protons cm⁻²s⁻¹, about 5 times less than the nominal<br />
flux. In this short test, only one board was irradiated (Apr.<br />
’01 test) and the single-bit cross section was measured to be<br />
(1.4±1)·10⁻¹⁶ cm² and (3.5±1.6)·10⁻¹⁶ cm² for the shift-registers<br />
and (7±5)·10⁻¹⁶ cm² for the DAC registers in un-clocked<br />
mode, statistically compatible with the results at<br />
higher flux.<br />
F. Discussion of the results<br />
No power supply trip-offs or large increases in the bias<br />
currents were observed during the irradiation. There is no<br />
evidence of single event latch-up or of significant radiation<br />
induced leakage currents. Moreover, the absence of noisy<br />
cells and of large differences in individual thresholds after<br />
irradiation strongly suggests that single event gate rupture is<br />
not a concern.<br />
The prediction of the single-bit upset cross section is very<br />
difficult because many parameters come into play [7].<br />
Nevertheless, some gross features of the data can be<br />
understood from some general considerations.<br />
The disparity in the cross section between the shift<br />
registers and the DAC registers is likely caused by the<br />
different size of the active area of the NFET transistor, which<br />
is larger for the DAC register FF’s. Besides that, the DAC<br />
register FF’s have a more complicated design and an increase<br />
in complexity, as a rule of thumb, translates to a larger<br />
number of sensitive nodes that can be upset.<br />
The SEU asymmetry for the transition from 0 to 1 with<br />
respect to 1 to 0 can be explained in terms of the FF design.<br />
The FF’s of the shift-registers are D-FF’s implemented as<br />
cross-coupled nor-not gates. Such a configuration has<br />
different sensitive nodes for 0 to 1 and 1 to 0 upsets. No such<br />
asymmetry is expected for the DAC registers because<br />
the FF’s are D-FF’s implemented as cross-coupled nor-nor<br />
gates. In this symmetric configuration, the distribution of<br />
sensitive nodes is the same whether a low or a high<br />
logical level is stored.<br />
A decrease of the energy threshold for single bit upset has<br />
been reported (in reference [6]) for a static register in clocked<br />
mode with respect to unclocked mode. Our data, taken with a<br />
clock frequency of 380 kHz, do not show a statistically<br />
significant difference from the data taken in the unclocked<br />
mode.<br />
In reference [8], a beam angular dependence is expected<br />
for devices with very thin sensitive volumes that have a Linear<br />
Energy Transfer (LET) threshold over 1 MeV cm²/mg and are<br />
tested with 200 MeV protons. We did not observe any<br />
dependence of the upset rate on the beam incident angle. In<br />
fact, due to the smaller device size of the deep submicron<br />
elements, the sensitive volumes are more nearly cubic than<br />
slab-shaped.<br />
V. CONCLUSIONS<br />
The results of the total dose test validate the deep<br />
submicron CMOS process as radiation tolerant, particularly<br />
suitable for pixel readout chips and other electronics exposed<br />
to large integrated total dose. The single event upset cross<br />
sections of static registers are relatively small, but measurable<br />
(10⁻¹⁶ to 5·10⁻¹⁶ cm²). The experience gained from the<br />
gamma and proton irradiation of pre-prototype chips has been<br />
of importance in allowing us to proceed with the submission<br />
of a full-size BTeV pixel readout chip and developing an<br />
approach to handle SEU.<br />
VI. ACKNOWLEDGEMENTS<br />
We thank Chuck Foster and Ken Murray for the generous<br />
technical and scientific assistance they provided us during the<br />
irradiation tests at IUCF.<br />
VII. REFERENCES<br />
[1] A. Kulyavtsev et al., “Proposal for an Experiment to<br />
measure Mixing, CP Violation, and Rare Decays in Charm<br />
and Beauty Particle Decays at Fermilab Collider,” (2000),<br />
http://www-btev.fnal.gov/public_documents/btev_proposal/.<br />
[2] D.C. Christian, et al., “Development of a pixel readout<br />
chip for BTeV,” Nucl. Instrum. Meth. A 435, 144-152, 1999.<br />
[3] L. Adams, et al., “2nd RD49 Status Report: Study of<br />
the Radiation Tolerance of ICs for LHC,” CERN/LHCC 99-8,<br />
LEB Status Report/RD49, 8 March 1999, available at<br />
http://rd49.web.cern.ch/RD49/Welcome.html#rd49docs.<br />
[4] A. Mekkaoui, J. Hoff, “30 Mrad(SiO2) radiation tolerant<br />
pixel front end for the BTeV experiment”, Nucl. Instr. and<br />
Meth. A 465, 166 (2001).<br />
[5] J. Hoff, et al., “PreFPIX2: Core Architecture and<br />
Results”, IEEE Trans. Nucl. Sci. 48, 485 (2001).<br />
[6] P. Jarron, et al., “Deep submicron CMOS technologies<br />
for the LHC experiments”. Nucl. Phys. B (Proc. Suppl.) 78,<br />
625 (1999).<br />
[7] M. Huhtinen, F. Faccio, et al., “Computational method<br />
to estimate Single Event Upset rates in accelerator<br />
environment”, Nucl. Instr. and Meth. A 450, 155 (2000).<br />
[8] P.M. O’Neill, et al., “Internuclear Cascade –<br />
Evaporation Model for LET Spectra of 200 MeV Protons<br />
Used for Parts Testing”, IEEE Trans. Nucl. Sci. 45, 2467<br />
(1998).
The ALICE Pixel Detector Readout Chip Test System.<br />
F. Antinori (1,2), M. Burns (1), M. Campbell (1), M. Caselle (3), P. Chochula (1,4),<br />
R. Dinapoli (1), F. Formenti (1), J.J. Van Hunen (1), A. Kluge (1), F. Meddi (1,5), M. Morel (1),<br />
P. Riedler (1), W. Snoeys (1), G. Stefanini (1), K. Wyllie (1).<br />
(For the ALICE Collaboration)<br />
(1) CERN, 1211 Geneva 23, Switzerland<br />
(2) Università degli Studi di Padova, I-35131 Padova, Italy<br />
(3) Università degli Studi di Bari, I-70126 Bari, Italy<br />
(4) Comenius University, 84215 Bratislava, Slovakia<br />
(5) Università di Roma “La Sapienza”, I-00185 Roma, Italy<br />
Abstract<br />
The ALICE experiment will require some 1200<br />
readout chips for the construction of the Silicon Pixel<br />
Detector [1], and it has been estimated that approximately<br />
3000 units will require testing.<br />
This paper describes the system that was developed<br />
for this task.<br />
I. INTRODUCTION<br />
The Pixel Readout chip [2] is a mixed-signal device containing both analogue and digital circuits. It is organised as a matrix of 32 columns by 256 rows of pixel cells. Each cell comprises a pre-amplifier, a pulse shaper, a discriminator, two digital delay units and a 4-event derandomising buffer. The cells are connected as 32 parallel shift registers, each of 256 bits, read out sequentially at 10 MHz. Each cell contains five configuration bits: three for the threshold fine control and two to enable testing and masking of the cell.<br />
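The readout organisation described above can be modelled in a few lines. This is an illustrative sketch only (the names and structure are ours, not the chip's internals), showing why a full frame takes 256 clock cycles at the 10 MHz readout clock:

```python
# Illustrative model of the readout organisation described in the text:
# 32 columns x 256 rows, read out as 32 parallel 256-bit shift registers.
N_COLS, N_ROWS = 32, 256
READOUT_CLOCK_HZ = 10_000_000  # 10 MHz serial readout per column

def readout_frame(hit_matrix):
    """Shift each 256-bit column out sequentially; the 32 columns run in
    parallel, so a full frame takes N_ROWS clock cycles."""
    assert len(hit_matrix) == N_COLS and all(len(c) == N_ROWS for c in hit_matrix)
    frame = []
    for row in range(N_ROWS):            # one bit per column per clock cycle
        frame.append([hit_matrix[col][row] for col in range(N_COLS)])
    return frame

# Frame readout time implied by the figures in the text: 256 / 10 MHz.
frame_time_us = N_ROWS / READOUT_CLOCK_HZ * 1e6   # 25.6 us per frame
```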
The test system used for the prototype version of the readout chip was made up of several modules, each performing a specific function, but these were invariably of different formats and often required adaptation between them. The computer used to control the system could not be programmed flexibly enough to give a rapid online indication of the quality of the device under test, or to present the results of a completed measurement graphically. With so many devices to be characterised and tested, the opportunity was taken to develop a simple yet flexible test system incorporating the required functionality, which could be used at all stages of testing and qualifying the readout chips and sub-assemblies.<br />
The test system must be capable of supplying the signals necessary to verify the correct functionality of the internal circuitry of the device, as well as finding use in other areas of the qualification process.<br />
II. THE TEST PROCEDURE<br />
Initially, several devices were tested on an Integrated Circuit (IC) tester capable of performing simple low-level tests. These tests ensured that the various sections within the devices functioned correctly and could be accessed for more complete tests 1 . Once the devices had been proved to function correctly on the IC tester, they were transferred to the Test System for higher-level tests.<br />
Using the test system, each device was fully characterised for its DC operating conditions, power consumption and the functioning of its various internal sections, including dynamic testing of the sensitivity and spread of the thresholds, the noise, and the calibration of the internal configuration digital-to-analogue converters.<br />
At each step the system is capable of displaying the progress of the measurement, and once finished the results are displayed in graphical form. The data are stored in a database for future reference.<br />
The same hardware was used for beam tests and tests<br />
of radiation tolerance.<br />
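A threshold scan of the kind performed during characterisation can be sketched as follows. The `fires` callback is a hypothetical stand-in for injecting a test pulse at a given DAC setting and reading back whether the discriminator responded; the PTS's actual procedure is not detailed in the text:

```python
# Hedged sketch of a threshold scan: for each DAC setting, inject n_pulses
# test pulses and count discriminator responses; the threshold is taken
# where the response falls to 50% or below.
def threshold_scan(fires, dac_settings, n_pulses=100):
    """`fires(dac)` is a stand-in for one pulse injection + readback."""
    counts = {dac: sum(fires(dac) for _ in range(n_pulses)) for dac in dac_settings}
    for dac in sorted(dac_settings):
        if counts[dac] <= n_pulses // 2:   # first setting at/below 50% response
            return dac, counts
    return None, counts
```

With noise present the response forms an S-curve rather than a step, and the width of the transition region gives a measure of the noise; the sketch above only extracts the 50% point.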
III. THE SYSTEM DESIGN<br />
The test system was designed taking the following<br />
requirements into account:<br />
• Hardware Requirements<br />
• supply the necessary voltages and biases required by the device,<br />
1 We shall not discuss these tests in detail, as they are beyond the scope of this paper.
• provide an interface between the DAQ<br />
environment and the Pixel Chip environment,<br />
• be flexible enough to perform a variety of DAQ<br />
functions such as Wafer Probe testing, testing<br />
subassemblies of the Pixel Detector, radiation<br />
and beam tests,<br />
• be simple, compact and easy to reproduce,<br />
• use off-the-shelf and industry-standard<br />
components whenever possible,<br />
• Software Requirements<br />
• have powerful graphic capabilities,<br />
• be flexible and adaptable in the choice of hardware,<br />
• provide a comprehensive Graphical User Interface (GUI),<br />
• provide full monitoring facilities,<br />
• generate and maintain a database,<br />
• be adaptable to different test scenarios.<br />
The test system has been designed around a PC<br />
connected to a VME crate, as shown in Figure 1. The<br />
connection is made by a National Instruments MXI<br />
connection.<br />
Figure 1: The layout of the basic test system<br />
A Readout Controller (PILOT Module) [3] has been<br />
developed in the VME standard to control the readout of<br />
the Pixel Readout Chip. As the Pixel Readout Chip is configured and controlled via JTAG [4], JTAG is also used to control a DAQ Adapter Board situated close to the Pixel Readout Chip under test. There are various choices of JTAG controller: models which use a PC parallel port, models which may be installed in the PC, or, as in our case, a module installed in the VME crate.<br />
Differential connections between the modules installed in<br />
the VME crate and the DAQ Adapter board allow the use<br />
of long interconnecting cables making the system suitable<br />
for use where the readout/test system must be sited away<br />
from the actual Pixel Readout chip.<br />
• The PC<br />
As the functioning of the system relies heavily on software and large amounts of data must be manipulated, a fast PC with a large amount of memory and storage is preferable.<br />
• MXI Controller<br />
The MXI Controller [5] provides the connection<br />
between the PCI bus of the PC and the DAQ VME crate.<br />
• Readout Controller<br />
The readout controller (Figure 2) is a VME<br />
module that, on receipt of commands from either the<br />
VME bus or an external source of trigger signals,<br />
generates all signals necessary for the readout of the Pixel<br />
Chip. Zero skipping and hit encoding are performed to<br />
reduce the amount of data to be stored. A test path has<br />
been included to allow testing the system with known<br />
data.<br />
• JTAG Controller<br />
The JTAG controller requires two channels, one<br />
for controlling and configuring the Pixel Readout Chip,<br />
the other to control the DAQ Adapter Board. Using two<br />
channels reduces the risk of a faulty Pixel Readout Chip<br />
impeding the correct functioning of the DAQ Adapter<br />
Board.<br />
• DAQ Adapter Board<br />
The DAQ Adapter Board is situated between the<br />
Pilot Module and Pixel Chip under test and serves as an<br />
interface between the two environments. It houses the line<br />
drivers and receivers necessary for the DAQ connection<br />
and Gunning transceiver Logic (GTL) drivers necessary<br />
for the Pixel Chip bus connection.<br />
It also houses the circuitry to derive the<br />
necessary power and bias supplies. Monitoring of both<br />
applied voltage and consumed current are possible.<br />
Figure 2: The block diagram of the Readout Controller.<br />
Additional circuitry has been included to allow a<br />
multiplicity measurement to make a rapid evaluation of<br />
the total number of hit Pixels within the chip.<br />
The control of the Adapter Board is by means of a<br />
simple JTAG protocol consisting of an 8 bit IR Scan to<br />
address a device and a 16 bit DR Scan to write or read a<br />
data value.<br />
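The simple JTAG protocol just described (an 8-bit IR scan to address a device, then a 16-bit DR scan to write or read a value) can be modelled in software. The register map below is invented for illustration, and the model follows the usual JTAG convention that a DR scan shifts the old value out while the new one shifts in:

```python
# Software model of the Adapter Board control protocol described in the
# text. Device addresses (0x01, 0x02) are hypothetical.
class AdapterBoardModel:
    def __init__(self):
        self.registers = {0x01: 0, 0x02: 0}   # invented on-board devices
        self.ir = None

    def ir_scan(self, instruction):
        """8-bit IR scan: select which on-board device is addressed."""
        assert 0 <= instruction < 0x100
        self.ir = instruction

    def dr_scan(self, value):
        """16-bit DR scan: shift a new value in, old value out."""
        assert 0 <= value < 0x10000
        previous = self.registers[self.ir]    # value shifted out of the chain
        self.registers[self.ir] = value       # value shifted into the register
        return previous
```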
Figure 3: The block diagram of the DAQ Adapter Board.<br />
Figure 4: The block diagram of the Pixel Carrier Board.<br />
• Pixel Readout Chip Carrier Board<br />
The individual Pixel Readout chips are wire bonded to the carrier board (Figure 4), which may be connected to either the IC tester or the DAQ Adapter Board. Test points have been included on all bus signals, and facilities have been provided to allow observation of various internal nodes of the device.<br />
Variations of the carrier board have lent themselves to both wafer-probing tests and the evaluation of the prototype bus structure currently being designed for use in the Pixel Detector.<br />
IV. TEST SYSTEM SOFTWARE<br />
The final testing and production of the ALICE Pixel Detector will take place in several laboratories. For practical reasons the number of operating systems and software environments must be kept to a minimum.<br />
The test software architecture reflects the flexibility of the hardware. Its modularity guarantees that the system can be used with different hardware<br />
Figure 5: The PTS software architecture<br />
configurations without the need to rewrite the software core.<br />
Windows 9x/NT/2K and National Instruments' LabVIEW were chosen as the main software environments. In addition, CERN's ROOT package [6] is used for offline analysis. The system takes advantage of several industrial standards, such as VISA [7] and ADO [8], to simplify hardware and database access.<br />
To enhance flexibility, the software modules are logically grouped into three basic layers: drivers, services and applications, which together form the full Pixel Test System (PTS). The relationship between the architectural layers is shown in Figure 5.<br />
The driver layer handles the communication<br />
between the software system and hardware components.<br />
Its main role is configuration of connected devices and<br />
basic I/O operations (e.g. JTAG data scan). Drivers<br />
communicate with the service layer by means of a fixed<br />
protocol, which simplifies system adaptation to hardware<br />
modifications. A hardware modification requires only a<br />
minimum set of software modules to be changed.
Figure 6: A snapshot of DAQ Monitor display with an example of plugins<br />
On the lowest level, the drivers rely on the Virtual Instrument Software Architecture (VISA). This industrial standard provides a unified interface to different buses (e.g. VME, VXI, GPIB, RS-232 or Ethernet). For non-VISA compliant devices, Windows-compatible DLL libraries are used.<br />
The service layer acts mainly on the data level.<br />
Its role is data formatting and integrity checking. For<br />
example, pixel data are checked for consistency at this<br />
level. Corrupted frames, buffer overflows or missing<br />
triggers can be immediately signalled to the applications,<br />
which can then take the proper action. Another important<br />
role of the service layer is data flow control. For example,<br />
two applications are not allowed to access the same<br />
hardware at the same time. Device execution timeouts are<br />
also handled at this level so that an accidental hardware<br />
failure will not lock the software system. Inter-module<br />
communication is simplified by using shared memory<br />
also provided by this layer.<br />
Programs belonging to the highest layer are<br />
divided into two basic categories: the Control Panels and<br />
the Applications. Each hardware component of the system<br />
has its own associated Control Panel, which enables low<br />
level register access, device reconfiguration and macro<br />
command execution. Programs belonging to this category<br />
are mainly used for debugging or low-level device<br />
commissioning. On the other hand, the Applications<br />
perform the most complicated tasks including JTAG<br />
integrity checks, DAC calibrations and threshold scans.<br />
V. THE TESTBEAM SOFTWARE SYSTEMS<br />
The test beam DAQ and monitoring program is also part of the highest PTS layer, and is the most powerful part of the PTS so far. A single application (called DAQ Monitor) enables the acquisition of data in a variety of conditions, ranging from free-running modes with either software or external triggering up to synchronous operation with an external triggering system. The DAQ Monitor can service a variable number of connected detectors.<br />
In order to keep the testbeam control system at<br />
the level of a single application, a concept of plugins has<br />
been introduced. These plugins are software modules<br />
used to perform specialized tasks, such as checking of<br />
trigger efficiencies, monitoring of system performance<br />
and calculation of cluster size distributions. The online<br />
system loads the plugins on demand. For example they<br />
are completely omitted if the highest system speed is<br />
required. The offline analysis can re-use the same plugins<br />
for data reconstruction. An example of the DAQ displays<br />
is shown in Figure 6.<br />
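The plugin concept can be sketched as a registry from which only the requested modules are instantiated; nothing is loaded when the highest system speed is required, and the same plugin objects can be reused offline. The class and hook names below are ours, not those of the PTS:

```python
# Illustrative plugin host (invented names): plugins are optional analysis
# modules loaded on demand and invoked once per event.
class PluginHost:
    def __init__(self):
        self._available = {}   # name -> factory
        self._loaded = []

    def register(self, name, factory):
        self._available[name] = factory

    def load(self, names):
        """Instantiate only the requested plugins (none for maximum speed)."""
        self._loaded = [self._available[n]() for n in names]

    def process(self, event):
        results = {}
        for plugin in self._loaded:
            results.update(plugin.process(event))
        return results

class ClusterSizePlugin:
    """Toy analysis plugin: reports the number of hit pixels in the event."""
    def process(self, event):
        return {"cluster_size": len(event["hits"])}
```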
The DAQ software is capable of operating at a<br />
sustained trigger rate of 65 kHz, which is adequate for<br />
testbeam purposes requiring a trigger rate of ~10 kHz.<br />
The detector control system monitors voltages<br />
and temperatures in the different parts of the system. A LabVIEW-controlled multiplexer connects the input channels to an external GPIB-based multimeter. The position of the irradiated chips can be changed using a remotely controlled x-y table.<br />
VI. OFFLINE ANALYSIS AND DATABASE<br />
ACCESS<br />
Despite its flexibility, LabVIEW is not commonly used by the physics community for data analysis. Instead, the ROOT system is usually employed for the offline processing, and the PTS includes an interface for this task. By using ROOT, the PTS can take advantage of its wide variety of classes for efficient analysis. The choice of ROOT was also motivated by economic factors, since it is free and does not require the purchase of expensive compilers. A non-negligible advantage is also the portability of the code to different operating system platforms. However, our mainstream system remains Windows.<br />
To perform the characterization of the full<br />
production of about 3000 chips, several Terabytes of data<br />
must be processed. The production data and measured<br />
optimal settings for each chip must be preserved in an<br />
accessible way for the actual PTS as well as for future<br />
final DAQ and Control systems. Our choice has been the<br />
MySQL [9] database running under the Linux operating<br />
system. To interface the database to LabView we use<br />
Microsoft’s Active Data Objects (ADO). This technology,<br />
based on ActiveX [10], provides a unique way of<br />
accessing different data sources either locally or over a<br />
network. One of the main advantages of ADO is that, being an industry standard, it keeps our system compatible with any common database which we might choose in the future. In a manner similar to the hardware drivers, only the software driver (data provider) needs to be changed to make the PTS compatible with other data sources.<br />
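The "swap only the data provider" idea can be illustrated with a minimal provider abstraction. An ADO/MySQL provider would issue SQL where this in-memory stand-in appends to a list; all names here are illustrative, not the PTS API:

```python
# Sketch of a swappable data-provider layer (invented names): the rest of
# the system calls store()/fetch() and never sees the backend.
class MemoryProvider:
    """Stand-in provider; a MySQL/ADO provider would implement the same
    two methods with SQL statements instead of a Python list."""
    def __init__(self):
        self._rows = []

    def store(self, chip_id, settings):
        self._rows.append({"chip_id": chip_id, **settings})

    def fetch(self, chip_id):
        return [r for r in self._rows if r["chip_id"] == chip_id]

def record_chip(provider, chip_id, threshold_dac, noise_e):
    """Persist the measured optimal settings for one tested chip."""
    provider.store(chip_id, {"threshold_dac": threshold_dac, "noise_e": noise_e})
```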
VII. CONCLUSIONS<br />
The test system described has proved flexible enough to satisfy all the requirements, ranging from wafer probing and readout chip testing to radiation and beam testing.<br />
The software has greatly reduced the time necessary to characterize the devices. The PTS has proved invaluable not only for rapidly assessing whether the device under test is functioning, but also for assisting with the debugging of the overall system.<br />
VIII. REFERENCES<br />
[1] ALICE – Technical Design Report of the Inner<br />
Tracking System (ITS), CERN/LHCC 99-12, June<br />
1999.<br />
[2] W. Snoeys et al., “Pixel readout electronics development for the ALICE pixel vertex and LHCb RICH detector”, Nucl. Instrum. Meth. A 465, 176-189 (2000).<br />
[3] P. Chochula, F. Formenti and F. Meddi, “User's Guide to the ALICE VME Pilot System”, ALICE-INT-2000-32, CERN, 2000.<br />
[4] IEEE Std 1149.1. Test Access Port and Boundary<br />
Scan Architecture.<br />
[5] See http://www.ni.com<br />
[6] Rene Brun and Fons Rademakers, ROOT - An Object<br />
Oriented Data Analysis Framework, Proceedings<br />
AIHENP'96 Workshop, Lausanne, Sep. 1996, Nucl.<br />
Inst. & Meth. in Phys. Res. A 389 (1997) 81-86. See<br />
also http://root.cern.ch/.<br />
[7] National Instruments VISA pages, see for example<br />
http://www.ni.com/Visa<br />
[8] J. T. Roff, ADO: ActiveX Data Objects, O'Reilly & Associates, ISBN 1565924150.<br />
[9] M. Kofler, MySQL, Apress, ISBN 1893115577.<br />
[10] D. Chappell, Understanding ActiveX and OLE, Microsoft Press, ISBN 1572312165.<br />
The HAL25 Front-End Chip<br />
for the ALICE Silicon Strip Detectors<br />
D. Bonnet, A. Brogna, J.P. Coffin, G. Deptuch, C. Gojak, C. Hu-Guo * , J.R. Lutz, A. Tarchini<br />
IReS (IN2P3-ULP), Strasbourg, France<br />
*Corresponding author Christine.hu@ires.in2p3.fr<br />
Abstract<br />
The HAL25 is a mixed-signal, low-noise, low-power and radiation-hardened ASIC intended to read out the Silicon Strip Detectors (SSD) in the ALICE tracker. It is designed in a 0.25 µm CMOS process and is conceptually similar to a previous chip, ALICE128C. It contains 128 analogue channels, each consisting of a preamplifier, a shaper and a storage capacitor. The analogue data is sampled by an external logic signal and then read out serially through an analogue multiplexer. This voltage signal is converted into a differential current signal by a differential linearised transconductance output buffer. A slow control interface complying with the JTAG protocol has been implemented to set up the circuit.<br />
Introduction<br />
The HAL25 is a mixed analogue-digital ASIC designed for the readout of the Silicon Strip Detectors (SSD) in the ALICE tracker. It is based on the first-generation ALICE chip, ALICE128C [1], whose performance was successfully maintained up to an ionising dose of 50 krad. ALICE128C is now used in the SSD front-end electronics of the STAR tracker.<br />
In order to maintain a safety margin in the radiation environment, a new circuit has been designed. It has been demonstrated that commercial deep sub-micron CMOS processes exhibit intrinsic radiation tolerance [2], and HAL25 has therefore been designed in a 0.25 µm CMOS process. In addition, special design techniques have been used to meet the low-noise, low-power and radiation-hardness requirements of the ALICE experiment.<br />
For the SSD layer, the ALICE experiment needs a readout device with a very large dynamic range (±13 MIPs), good linearity and a shaping time adjustable from 1.4 µs to 2.2 µs. This is a challenge for a circuit designed in a deep sub-micron process operated at only 2.5 V, which is at the limit of standard analogue design techniques.<br />
This paper explains the design of the chip. Since HAL25<br />
is still under evaluation, only <strong>preliminary</strong> results will be<br />
given.<br />
J.D. Berst, G. Claus, C. Colledani<br />
LEPSI (IN2P3-ULP), Strasbourg, France<br />
HAL25 Block Diagram<br />
Figure 1 shows the circuit block diagram. HAL25<br />
contains 128 channels, each consisting of a preamplifier,<br />
a shaper and a capacitor to store the voltage signal<br />
proportional to the collected charge on a strip of a silicon<br />
detector. The data is sampled by an external logic signal<br />
and is read out at 10 MHz through an analogue<br />
multiplexer. This voltage signal is converted to a<br />
differential current signal by a differential linearised<br />
transconductance output buffer. The chip is<br />
programmable via the JTAG protocol, which allows:<br />
• setting up a programmable bias generator which tunes the parameters of the analogue chains;<br />
• checking the analogue behaviour of the chip by injecting adjustable charges into the inputs of selected channels with a programmable pulse generator;<br />
• performing the boundary scan.<br />
Fig. 1. HAL25 block diagram<br />
Table 1 shows the main specifications.<br />
Table 1: Main specifications<br />
Input range: ±13 MIPs<br />
ENC: ≤ 400 e-<br />
Readout rate: 10 MHz<br />
Power: ≤ 1 mW/channel<br />
Tuneable shaping time: 1.4 µs to 2.2 µs<br />
Single power supply: 0 – 2.5 V<br />
I. Preamplifier<br />
The preamplifier is a charge amplifier made from a<br />
single-ended cascode amplifier with an NMOS input<br />
transistor dimensioned to meet the noise specification.<br />
The additional bias branch is used to increase the current<br />
in the input transistor [3]. Figure 2 shows the preamplifier<br />
schematic.<br />
Fig. 2. Preamplifier schematic<br />
The advantages of the circuit compared to a conventional folded cascode structure are as follows:<br />
• A 2.5 V single power supply voltage, which simplifies the power supply circuitry. This is a strong requirement from the ALICE experiment.<br />
• The drain current of the input transistor is the sum of the currents flowing in both bias branches, which improves power efficiency.<br />
Simulation shows that the preamplifier has a gain of 10 mV/MIP (24000 e - ) and a power consumption of 225 µW.<br />
II. Shaper<br />
The ALICE experiment needs a front-end circuit with a very large dynamic range (±13 MIPs), good linearity and a shaping time adjustable from 1.4 µs to 2.2 µs. A conventional shaper using a transistor as a feedback resistor (Fig. 3a) cannot satisfy the required dynamic range and linearity, so a linearised source-degenerated differential pair (Fig. 3b) is used as an active feedback resistor. The details of the shaper are shown in Fig. 4.<br />
Fig. 3a. Shaper A; Fig. 3b. Shaper B<br />
The active feedback circuit is a linearized<br />
transconductance amplifier with a very low<br />
transconductance value built with the differential pair M1,<br />
M2. Transistors M3, M4 are used for source degeneration of the differential pair, linearising its transconductance. Transistors M5, M6 are current sources, while M7 and M8 constitute the non-symmetrical active load of the differential pair. The transconductance of the feedback circuit depends on the transconductance of M1, M2 and on the conductance gDS of the transistors M3, M4 connected in parallel [4]. Because the transconductance of M1 and M2 depends on the bias current, the total transconductance of the circuit is easy to change. This means that the equivalent resistance in the feedback path can be varied by changing the bias current. The output DC level of the shaper is fixed by Vdc.<br />
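The bias dependence described above follows from the standard analysis of a source-degenerated pair (a textbook sketch, not the exact expression of [4]): with the degeneration set by M3 and M4 in parallel,<br />
Gm_eff ≈ gm1,2 / (1 + gm1,2 · Rdeg), with Rdeg = 1 / (gDS3 + gDS4),<br />
so the equivalent feedback resistance Req = 1/Gm_eff = 1/gm1,2 + Rdeg grows as the bias current, and hence gm1,2, is reduced.<br />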
Fig. 4. Shaper schematic<br />
The noise contribution from the active feedback depends<br />
mainly on the bias current and the value of gm of M7. In<br />
order to reduce the noise of the circuit, the following<br />
points are taken into account:<br />
• minimum bias current ;<br />
• minimum gm of M7.<br />
A PMOS inverter is chosen as the amplifier stage in the shaper because it is the simplest way to meet the specification of the very long peaking time. This approach is a trade-off between the values of the capacitors and the gm of the amplifier. The coupling and feedback capacitors are 3 pF and 0.6 pF, respectively, and the storage capacitor is 10 pF.<br />
Another advantage of this shaper is that its DC output level is adjustable via M1. Different voltage values can be tuned for positive and negative inputs in order to increase the dynamic range.<br />
The shaper has a power consumption of 65 µW and 130 µW for shaping times of 2.2 µs and 1.4 µs, respectively. The total front-end gain is 35 mV/MIP over the ±13 MIPs range, and simulation shows a non-linearity of less than 3% within the ±10 MIPs range. The simulated ENC of the front-end circuit is:<br />
ENC = 207 e - + 10 e - /pF for τs = 1.4 µs<br />
ENC = 158 e - + 10 e - /pF for τs = 2.2 µs<br />
where τs is the peaking time.
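As a worked example of the ENC parametrisation above (a constant term from the simulation plus a 10 e-/pF slope in the input capacitance):

```python
# Worked example of the simulated ENC parametrisation quoted in the text.
def enc_electrons(c_det_pf, peaking_time_us):
    """ENC in electrons for a given detector capacitance (pF) at one of the
    two quoted shaping-time settings."""
    base = {1.4: 207, 2.2: 158}[peaking_time_us]   # constant term, e-
    return base + 10 * c_det_pf                     # 10 e-/pF slope

# e.g. a (hypothetical) 20 pF strip at the slow shaping setting:
enc_20pf = enc_electrons(20, 2.2)   # 158 + 200 = 358 e-, within the 400 e- spec
```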
The signal is sampled on the storage capacitor by an external LVDS HOLD signal activated at the peaking time τs.<br />
III. Analogue Multiplexer<br />
The analogue multiplexer is made of 128 intermediate buffers, two 128-bit shift registers (POWERON, READOUT) and two extra flip-flop cells (TEMPO) (Fig. 1). The readout is performed by injecting a token which is shifted through READOUT. TEMPO delays the injected token (TK_IN) by two clock pulses in order to switch on the first two intermediate buffers before the readout begins. POWERON controls the powering on and off of the intermediate buffers, while READOUT controls the serial data transfer.<br />
Only 3 of the 128 buffers are powered on at any given time during the readout cycle: buffer N, corresponding to the channel being read, and the two adjacent buffers for channels N-1 and N+1. As channel N is read, buffer N-2 is switched off while buffer N+2 is switched on, so that at most 4 buffers dissipate power during the readout.<br />
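The sliding power window described above can be sketched as follows. This is a simplification of the scheme in the text (steady-state buffers N-1, N and N+1, with N+2 turning on while N-2 turns off), not a description of the actual control logic:

```python
# Simplified model of the HAL25 buffer power window while channel n is read.
def powered_buffers(n, n_channels=128):
    """Return the buffer indices drawing power while channel n is read:
    the steady-state trio n-1, n, n+1 plus n+2, which is switching on as
    n-2 switches off (so at most four buffers at once)."""
    window = {n - 1, n, n + 1, n + 2}
    return sorted(b for b in window if 0 <= b < n_channels)
```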
The outgoing token (TK_OUT signal) is picked up two channels before the end of the readout of a chip, in order to allow daisy-chaining of several HAL25 circuits without extra clock cycles. An asynchronous FAST CLEAR signal can reset the token and abort the cycle at any moment of the readout.<br />
The multiplexer can also be set by JTAG to test a single channel; this is called the “transparent mode”. With the HOLD signal inactive, the analogue response to an injected signal can be observed at the output of the chip.<br />
IV. Differential Current Buffer<br />
Figure 5 shows the three main parts of the analogue<br />
output buffer:<br />
• Single ended to differential voltage converter;<br />
• Linearised transconductor;<br />
• Output voltage reference controller.<br />
Fig. 5. Output buffer block diagram<br />
The current gain and the output DC level are adjustable by the programmable bias generator. The analogue outputs of several chips can be connected in parallel: only the chip selected for readout drives the output lines, while the outputs of the remaining chips are in a high-impedance state. Non-selected chips have their output buffers powered off in order to reduce the total power consumption. The voltage reference controller sets the quiescent output level, which allows a simple floating-input differential buffer outside the chip to pick up the signal.<br />
a. Single ended to differential voltage converter<br />
The single ended to differential voltage converter has a<br />
unity gain. It is built with three operational amplifiers<br />
and a resistive ladder. The common mode output signal of<br />
this stage is clamped to the internal reference DC level,<br />
Vdc, common to the shaper and the multiplexer.<br />
b. Linearised transconductor<br />
The linear transconductor is based on the differential pair<br />
linearised by the cross-coupled quad cell [5].<br />
Fig. 6. Linearised transconductance<br />
The schematic of the transconductance element is shown in Fig. 6. Transistors M1-M4 form the cross-coupled quad cell, while M6 and M7 constitute the differential pair. The bias current of the differential pair is delivered by the cross-coupled quad cell through transistor M5 and the current source MC1, and has a quadratic dependence on the differential input voltage. With a careful choice of the weighting factor n, the output current depends linearly on the input signal. Additional current gain is achieved in the cascade of current mirrors MN1-MN2 with MP1-MP2 and MN3-MN4 with MP3-MP4. The output gain is 175 µA/MIP at nominal bias.<br />
c. Output voltage reference controller<br />
The classical control of the quiescent output levels by common-mode feedback circuitry with low-pass signal filtering was not possible, because it must be off when the chip is not selected. Instead, a solution using a dummy transconductance stage and feed-forward circuitry to reference the output of the buffer was preferred (Fig. 7). Both inputs of the dummy transconductor are fed with the common-mode signal, and the resulting unbalanced output signal is assumed to be the same as for the main circuitry. The output of the dummy element is balanced by comparing its voltage level to the reference signal in an error amplifier. As a result, a balanced voltage equal to the reference value Vref is established at the output of the dummy transconductor, and both outputs of the main transconductor are forced to the same value through current mirrors. This design is potentially sensitive to mismatches, so special care was taken at the layout level; any residual offset in the quiescent output level can be adjusted by changing Vref.<br />
Fig. 7. Quiescent output level controller<br />
V. Bias Generators and Current Reference<br />
The bias generators provide the DC currents and voltages<br />
needed to bias all the analogue parts accurately. They<br />
consist of nine 8-bit DACs. Each DAC is made of a JTAG<br />
register, current sources and, where necessary, a<br />
current-to-voltage converter. The JTAG register consists<br />
of a shift register and a shadow register designed with a<br />
majority-voting logic approach in order to prevent Single<br />
Event Upset (SEU) corruption.<br />
In the HAL25 chip, an internal current source provides<br />
the current reference for the bias generators. It is designed<br />
to be insensitive to a ±8% power supply variation. To<br />
compensate for the process-induced variation of the<br />
polysilicon resistor value (±20%), the reference is<br />
adjustable by JTAG in 5 steps from -15% to +15% around<br />
the nominal value.<br />
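The majority-voting protection of the shadow register can be sketched as<br />
follows; the model below is illustrative, not the actual HAL25 logic:<br />

```python
# Illustrative model of the majority-voting idea: each register bit is
# stored in three copies and read back through a 2-of-3 vote, so a
# single upset copy cannot corrupt the stored value.

def majority(a: int, b: int, c: int) -> int:
    """2-of-3 majority vote on single bits."""
    return (a & b) | (a & c) | (b & c)

class TmrRegister:
    """Triple-redundant register with majority-voted readout."""
    def __init__(self, width: int = 8):
        self.width = width
        self.copies = [0, 0, 0]              # three redundant copies

    def write(self, value: int) -> None:
        self.copies = [value, value, value]

    def read(self) -> int:
        out = 0
        for bit in range(self.width):
            b0, b1, b2 = ((c >> bit) & 1 for c in self.copies)
            out |= majority(b0, b1, b2) << bit
        return out

reg = TmrRegister()
reg.write(0xA5)
reg.copies[1] ^= 0x10        # simulate an SEU flipping one bit of one copy
assert reg.read() == 0xA5    # the vote masks the upset
```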
VI. Test Pulse Generator<br />
This feature allows the analogue channels to be tested by<br />
injecting adjustable charge pulses. The dynamic range of<br />
the test pulse is more than ±15 MIPs, enough to cover the<br />
full dynamic range of the analogue channels. A<br />
programmable number of channels can be tested together.<br />
VII. JTAG Controller<br />
The control interface of HAL25 complies with the JTAG<br />
IEEE 1149.1 standard. It allows access to the registers<br />
in the chip, in particular for setting the bias and switching<br />
between the running and the test modes. The test modes are<br />
the pulse test, the transparent test and the boundary scan<br />
of the pads involved in the readout.<br />
After reset of the controller, the circuit is in the bypass<br />
state. For JTAG this means that serial data can skip the<br />
circuit with one extra JTAG clock cycle per bypassed<br />
circuit. For the readout part the token passes directly from<br />
the previous to the next HAL25. An ID number can be<br />
read from the chip by setting 4 input pads.<br />
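The cost of the bypass path can be sketched as follows; each bypassed<br />
chip contributes a 1-bit bypass register, i.e. one extra TCK cycle, to the<br />
serial path (per IEEE 1149.1). The numbers below are illustrative:<br />

```python
# Rough cost model of a JTAG daisy chain with some chips in bypass.

def shift_cycles(payload_bits: int, bypassed_chips: int) -> int:
    """Clock cycles to shift a payload through a chain of bypassed chips."""
    return payload_bits + bypassed_chips

# e.g. an 8-bit word reaching a target chip behind 9 bypassed HAL25s:
assert shift_cycles(8, 9) == 17
```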
HAL25 Power Consumption<br />
Three power consumption figures can be quoted:<br />
• Power consumption per channel during acquisition:<br />
P (no readout) = 355 µW (τs = 1.4 µs)<br />
P (no readout) = 290 µW (τs = 2.2 µs)<br />
• Power consumption per channel during readout:<br />
P (readout) = 750 µW (τs = 1.4 µs)<br />
P (readout) = 680 µW (τs = 2.2 µs)<br />
• Mean power consumption per channel for a readout<br />
cycle of 1 ms:<br />
P (mean) = 360 µW (τs = 1.4 µs)<br />
P (mean) = 265 µW (τs = 2.2 µs)<br />
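The τs = 1.4 µs mean is consistent with a duty-cycle-weighted average if<br />
reading the 128 channels takes about 12.8 µs of each 1 ms cycle; the<br />
readout time and clock are not quoted here, so they are assumptions:<br />

```python
def mean_power(p_acq_uW, p_ro_uW, t_ro_us, cycle_us=1000.0):
    """Duty-cycle-weighted mean power per channel over one readout cycle."""
    return (p_acq_uW * (cycle_us - t_ro_us) + p_ro_uW * t_ro_us) / cycle_us

# assumed readout time: 128 channels at a 10 MHz readout clock -> 12.8 us
t_ro = 128 / 10.0
assert round(mean_power(355.0, 750.0, t_ro)) == 360   # quoted mean for tau_s = 1.4 us
```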
HAL25 Layout<br />
Fig. 8. HAL25<br />
The HAL25 has an area of 3.65 ×<br />
11.90 mm² (Fig. 8). The process<br />
has 1 poly and 3 metal layers. The<br />
enclosed-gate geometry with guard-ring<br />
technique was used in the<br />
layout to prevent post-irradiation<br />
leakage currents in NMOS<br />
transistors.<br />
The I/O pad sizes and placement<br />
pitches were designed to be<br />
directly compatible with the<br />
existing Tape Automated Bonding<br />
(TAB) technique. All the pads<br />
except the power supply pads are<br />
protected with diodes. Pads used<br />
by the JTAG protocol are CMOS<br />
while the readout control pads<br />
comply with the LVDS standard.<br />
Circuit Evaluation<br />
Evaluation of HAL25 is underway. Several chips have<br />
been tested on a probe station. Correct functionality of the<br />
chip has been verified.<br />
Figure 9 shows the output stream from channel 1 to channel<br />
128. A 1 MIP signal injected on one channel shows up<br />
clearly after the average channel pedestals have been<br />
subtracted.<br />
Fig. 9 Output stream<br />
Figure 10 shows the analogue pulse shapes at the output<br />
buffer as a function of the injected charge.<br />
Fig. 10 Output shapes: differential HAL25 output (V) vs<br />
time (µs), for injected charges from ±2 to ±14 MIPs<br />
A good linearity (
Abstract<br />
The front−end system of the Silicon Drift Detectors<br />
(SDDs) of the ALICE experiment is made of two ASICs. The<br />
first chip performs the preamplification, temporary analogue<br />
storage and analogue−to−digital conversion of the detector<br />
signals. The second chip is a digital buffer that allows for a<br />
significant reduction of the connections from the front−end<br />
module to the outside world.<br />
In this paper the results achieved on the first complete<br />
prototype of the front−end system for the SDDs of ALICE<br />
are presented.<br />
I. Introduction<br />
Silicon drift detectors provide the xy coordinates of the<br />
crossing particles with a spatial precision of the order of 30<br />
µm, as well as a charge resolution such that the dE/dx<br />
measurement is dominated by Landau fluctuations. The<br />
detector is hexagon-shaped with a total area of 72.5×87.6 mm²<br />
and an active area of 70.2×75.3 mm². It is divided into two<br />
35 mm drift regions. At the end of each drift region 256<br />
anodes collect the charge. The pitch of the anodes is<br />
294 µm [1][2].<br />
Each anode has to be read out with a sampling frequency<br />
of 40 MS/s. The number of samples taken per event must<br />
cover the whole drift time, which is around 6 µs; therefore<br />
256 samples have been chosen. The total amount of data for<br />
each half-detector (corresponding to one drift region) is<br />
64 kSamples.<br />
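The sample count follows from the drift time and sampling rate; a quick<br />
check with the numbers quoted above:<br />

```python
SAMPLING_RATE = 40e6     # 40 MS/s per anode
DRIFT_TIME = 6e-6        # maximum drift time, ~6 us
ANODES = 256             # anodes per drift region (half-detector)

min_samples = SAMPLING_RATE * DRIFT_TIME      # 240 samples cover the drift
samples_per_anode = 256                       # chosen value (a power of two)
assert samples_per_anode >= min_samples
assert ANODES * samples_per_anode == 64 * 1024   # 64 kSamples per half-detector
```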
The basic idea behind our readout scheme is to convert<br />
the samples into a digital format as soon as possible. Given<br />
the tight requirements in terms of material and the small<br />
space available on the support structures (ladders), it would<br />
be extremely difficult to transmit analogue data outside the<br />
detector at the required speed.<br />
Owing to the low power budget (5 mW/channel) it is<br />
not possible to have a 40 MS/s A/D converter for each<br />
channel; therefore a different approach has been adopted.<br />
Test results of the front−end system<br />
for the silicon drift detectors of ALICE<br />
G. Mazza 1 , A. Rivetti 2 , G. Anelli 3 , M.I. Martinez 1,4 , F. Rotondo 1 ,<br />
F. Tosello 1 , R. Wheadon 1<br />
for the ALICE Collaboration<br />
1 INFN Sezione di Torino, via P.Giuria 1, 10125 Torino, Italy<br />
2 Dip. di Fisica Sperimentale dell’Universita’ di Torino, via P.Giuria 1, 10125 Torino, Italy<br />
3 CERN, 1211 Geneva 23, Switzerland<br />
4 CINVESTAV, Mexico City, Mexico<br />
mazza@to.infn.it, rivetti@to.infn.it<br />
The signal from the detector is continuously amplified and<br />
sampled on a 256-cell analogue memory at 40 MS/s.<br />
When the trigger signal is received the analogue memory<br />
stops the write phase and moves to the read phase where its<br />
samples are converted by a slower A/D converter. This of<br />
course introduces some dead time since during the<br />
conversion the system is not sampling the detector signal.<br />
The maximum allowed dead time is 1 ms/event. A<br />
reasonable value for the settling time of the analogue<br />
memory is 500 ns, and a 2 MS/s ADC for every two channels<br />
is also acceptable in terms of area and power consumption.<br />
With those values the dead time is 512 µs, a factor of two<br />
below the requirement.<br />
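One way to reproduce the quoted 512 µs is to assume that memory<br />
settling and conversion happen sequentially for each of the 256 samples<br />
of the two channels sharing an ADC; a sketch under that assumption:<br />

```python
SETTLING = 500e-9            # analogue memory settling time per sample
CONVERSION = 1 / 2e6         # 2 MS/s ADC -> 500 ns per conversion
CELLS = 256                  # memory cells per channel
CHANNELS_PER_ADC = 2         # one ADC shared between two channels

# assuming settling and conversion happen sequentially for each sample:
dead_time = CELLS * CHANNELS_PER_ADC * (SETTLING + CONVERSION)
assert abs(dead_time - 512e-6) < 1e-12    # 512 us, half of the 1 ms budget
```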
A schematic view of the readout architecture is shown in<br />
Figure 1.<br />
Figure 1 : SDD readout scheme (blocks: Silicon Drift<br />
Detectors, PASCAL, AMBRA, CARLOS, GOL, laser,<br />
voltage regulators & LVDS tx/rx, DCS; front-end boards<br />
and end ladder board)<br />
In order to further decrease the number of cables from the<br />
front-end system, a digital multi-event buffer is placed close<br />
to the front-end chip. The data from the A/D converter are<br />
first quickly stored into a digital event buffer over a wide bus<br />
and then sent outside the detector area over a single 8-bit<br />
bus for each front-end board.<br />
The event buffers introduce an additional dead time;<br />
however, it has been calculated that with 4 buffers the dead<br />
time due to event buffer overflow is only 0.04%.<br />
II. The front−end ASICs<br />
The front−end system is based on two ASICs, named<br />
PASCAL and AMBRA.<br />
PASCAL can be divided into three parts: 64<br />
preamplification and fast analogue storage channels, 32<br />
successive-approximation A/D converters (each ADC is<br />
shared between two analogue memory channels) and a logic<br />
control unit that provides the basic control signals for the<br />
analogue memory and the converters.<br />
The present prototype is half-sized: 32 input channels<br />
with preamplifier and 256-cell analogue memory are<br />
connected to 16 A/D converters. A scheme of the prototype<br />
is shown in Figure 2.<br />
Figure 2 : PASCAL scheme (preamplifier + buffer, analogue<br />
memory, A/D converter and control unit)<br />
The preamplifier is based on the standard charge<br />
amplifier plus shaper configuration and provides a gain of<br />
around 35 mV/fC with a peaking time of 40 ns. The<br />
preamplifier is DC-coupled to the detector; baseline<br />
variations and detector leakage currents are compensated via<br />
a low-frequency feedback around the second stage. The<br />
amplifier is buffered with a class-AB output stage in order<br />
to drive the analogue memory at low power<br />
consumption [3].<br />
The analogue memory is an array of 256×32 switched<br />
capacitor cells controlled via a shift register. The cells can be<br />
written at 40 MHz and read out at 2 MHz. The architecture is<br />
such that the voltage across the capacitor (and not the<br />
charge) is written and read; therefore the sensitivity to the<br />
absolute value of the capacitors and to the timing is greatly<br />
reduced [5].<br />
The 10−bit A/D converter is based on the successive<br />
approximation principle; a scaled array of switched<br />
capacitors provides both the DAC and the subtraction<br />
functions, and a three−stage offset compensated comparator<br />
is used to check if the switched capacitor array output is<br />
positive or negative and to drive the successive<br />
approximation register that, in turn, controls the DAC. The<br />
successive approximation architecture is a good compromise<br />
between speed and low power consumption; it requires no<br />
operational amplifiers and only one zero crossing<br />
comparator. The conversion speed is one clock cycle per<br />
bit.[6]<br />
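The successive-approximation loop can be sketched as follows; this is<br />
an idealised model, not the actual switched-capacitor implementation:<br />

```python
# Idealised SAR conversion: starting from the MSB, one bit is resolved
# per clock cycle by comparing an ideal binary-weighted DAC output
# against the input.

def sar_convert(vin: float, vref: float, bits: int = 10) -> int:
    code = 0
    for i in reversed(range(bits)):             # one bit per clock cycle
        trial = code | (1 << i)
        if trial * vref / (1 << bits) <= vin:   # comparator decision
            code = trial                        # keep the bit
    return code

# a 10-bit conversion of 0.25*Vref resolves to code 256 in 10 cycles:
assert sar_convert(0.25, 1.0) == 256
```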
Two calibration lines, connected to the even and odd<br />
channel inputs via a 180 fF capacitor, provide the capability<br />
of testing the circuit without the detector.<br />
The prototype has been designed in a commercial 0.25<br />
µm CMOS technology with radiation-tolerant layout<br />
techniques. The chip size is 7×6 mm².<br />
AMBRA provides 4 levels of event buffering. Each buffer<br />
has a size of 16 kbytes and is based on static RAM in order<br />
to increase the SEU resistance and to avoid refresh<br />
circuitry. The control unit provides buffer management,<br />
controls the operations of PASCAL and provides the<br />
front-end interface to the rest of the system. The present<br />
prototype has only 2 event buffers and can work up to<br />
50 MHz [4].<br />
The prototype has been designed in Alcatel 0.35 µm<br />
technology with a standard layout. The chip size is 4.4×3.8<br />
mm²; 89% of the core area is occupied by the two memory<br />
buffers.<br />
The PASCAL-AMBRA system operates as follows:<br />
when AMBRA receives the trigger signal it sends an SoP<br />
(Start of oPeration) command to PASCAL. PASCAL stops<br />
the sampling of the detector outputs and starts the readout of<br />
the analogue memory and the A/D conversion. When the first<br />
sample of the 32 channels has been converted, a write_req<br />
(write request) signal is sent to AMBRA, which in turn, if a<br />
buffer is available, replies with a write_ack (write<br />
acknowledge) and starts the data acquisition. The sequence<br />
continues for the other samples of the analogue memory until<br />
AMBRA sends an EoP (End of oPeration) command; then<br />
PASCAL returns to the acquisition phase. The EoP<br />
command can be issued for two reasons: all the 256 cells<br />
have been read out, or an abort signal has been received. The<br />
latter indicates that the trigger signal, which is generated<br />
from the level-0 ALICE trigger, has not been confirmed by<br />
the higher-level triggers. Since the reject probability can be<br />
very high (it can reach 99.8% in Pb-Pb interactions) it is<br />
very important to stop the conversion as soon as possible and<br />
restart the acquisition.<br />
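The sequence above can be sketched as a toy model; the control flow is<br />
an illustrative reading, only the signal names come from the text:<br />

```python
# Toy model of the PASCAL/AMBRA handshake: SoP starts the conversion of
# the 256 memory cells, each converted sample is transferred under
# write_req/write_ack, and EoP (all cells done, or a trigger abort)
# returns PASCAL to the acquisition phase.

def readout_cycle(buffer_available, n_cells=256, abort_at=None):
    """Return the number of samples transferred before EoP."""
    transferred = 0
    for cell in range(n_cells):          # SoP received: convert cell by cell
        if abort_at is not None and cell == abort_at:
            return transferred           # trigger abort -> immediate EoP
        if buffer_available():           # write_req answered by write_ack
            transferred += 1             # (no buffer free: transfer stalls,
                                         #  modelled here as a skipped slot)
    return transferred                   # all 256 cells read out -> EoP

# full readout with a free buffer, and a level-0 reject after 10 samples:
assert readout_cycle(lambda: True) == 256
assert readout_cycle(lambda: True, abort_at=10) == 10
```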
Figure 3 : AMBRA scheme (event buffers A and B, write/read<br />
counters and units, control unit; signals: write_req, write_ack,<br />
data_in, data_out, data_write, data_stop, trig_busy, trig_abort)<br />
As soon as an event buffer is full, AMBRA starts the<br />
transmission of the data. The data_write command is used to<br />
indicate that there are valid data on the output bus. This<br />
signal remains high as long as the data are transmitted. A<br />
data_end signal goes high for one clock cycle in<br />
correspondence with the last byte. It is possible to suspend<br />
the data transmission via the data_stop command.<br />
III. Test results of the front−end circuit<br />
The two prototypes have been evaluated together on a<br />
test board. The inputs have been provided via the calibration<br />
lines, while the outputs have been read out with a logic state<br />
analyzer. A data pattern generator has been used to provide<br />
the clock and the other digital control signals to the two<br />
chips.<br />
The two prototypes have also been tested connected to a<br />
detector in a test beam at the CERN PS. Data analysis is at a<br />
preliminary stage; therefore in this paper we discuss only the<br />
lab measurements performed on the system.<br />
Figure 4 shows a typical output for a 4 fC charge signal<br />
injected via the calibration lines. Even with a δ-like pulse at<br />
least 4 samples are significantly above the noise in the time<br />
direction. For a particle crossing the detector far from the<br />
anodes a slower signal is obtained and more samples can be<br />
above the noise floor; on the other hand, since the total<br />
charge does not change, the signal-to-noise ratio on the<br />
individual samples is worse.<br />
Figure 5 shows the output code against the input charge<br />
for the 32 channels of a chip. It can be seen that the dynamic<br />
range is well above the required 32 fC.<br />
Figure 4 : Typical output (ADC counts vs cell number) from<br />
a 4 fC input signal<br />
Figure 5 : Dynamic range (output code vs. input charge,<br />
channels 0-31)<br />
Figure 6 shows the deviation of the curve of Figure 5<br />
from a linear fit. The non-linearity is less than 0.8% over the<br />
whole dynamic range and is mainly related to the saturation<br />
of the preamplifier in the highest part of the range.<br />
Another source of non-linearity is the voltage dependence of<br />
the memory capacitors, again in the highest part of the range.<br />
Figure 6 : Linearity (deviation from linear fit vs input charge)<br />
Figure 7 shows the gain variation across the channels of 5<br />
different chips. The number of tested chips is too small for<br />
significant statistics; however, from these results the<br />
variation between channels of the same chip is of the order<br />
of a few percent, while the chip-to-chip variation is of the<br />
order of 10-15%.<br />
A small slope of the gain distribution across the channels<br />
of the same chip can be identified. This slope is due to a<br />
voltage drop on the power and reference lines across the<br />
chip. With a proper sizing of these critical lines it will<br />
probably be possible to recover some of the gain variation in<br />
the final version of the chip.<br />
Figure 7 : Gain (counts/fC) vs channel number for chips 0, 1,<br />
3, 4 and 5<br />
Noise measurements give an rms noise below 2 counts,<br />
which corresponds to around 400 e − . This number is slightly<br />
above the requirements and comes essentially from the<br />
coupling between the analogue and digital parts at the<br />
substrate level and/or at the board level (the measured noise<br />
of the preamplifier alone is less than 180 e − ). A better<br />
grounding scheme is under study for the final version of the<br />
chip and for the final board design.<br />
The measurements give less than 5 and around 10 mW per<br />
channel for average and peak power consumption,<br />
respectively. These numbers fulfil the ALICE SDD<br />
requirements.<br />
Another important aspect is the amplifier recovery time<br />
from saturation. In the ALICE environment very large signals<br />
(up to 400 fC) are possible. Although these signals are of no<br />
concern for the data analysis, it is important that the<br />
preamplifier does not remain blocked for milliseconds.<br />
Tests show a recovery time of 350 ns for a 100 fC signal.<br />
This time rises with larger signals and saturates at 500 ns for<br />
signals above 200 fC. Such very large signals are rare;<br />
therefore this recovery time is acceptable.<br />
IV. Radiation tolerance and technology issues<br />
The drift detectors and the front-end electronics in the<br />
ALICE environment will have to survive a low, but<br />
not negligible, level of radiation. The foreseen dose for 10<br />
years of operation is around 20 krad. This value is at the<br />
limit of what a modern standard technology can tolerate;<br />
therefore the choice between a standard and a radiation-hard<br />
technology was not straightforward.<br />
Figure 8 : AMBRA irradiation test results: supply current<br />
(mA) vs dose (krad)<br />
While radiation-hard technologies remain two or three<br />
generations behind standard technologies, a new approach<br />
based on specific layout techniques and deep submicron<br />
standard technologies has been developed by the RD49<br />
research programme at CERN. The effectiveness of this<br />
approach has been demonstrated up to 30 Mrad. Its main<br />
disadvantage is some area penalty, especially in digital<br />
design, compared with a standard deep submicron<br />
technology. However, the digital radiation-tolerant cells are<br />
still smaller than the standard cells of radiation-hard<br />
processes.<br />
Owing to the low radiation level in ALICE, the first<br />
PASCAL prototype has been designed in a commercial 0.25<br />
µm technology with radiation-tolerant techniques, while the<br />
first AMBRA has been designed in Alcatel 0.35 µm<br />
technology with the commercial standard cell library.<br />
The reason for this choice was that, on one side, analogue<br />
circuits are more sensitive to leakage currents and threshold<br />
variations (especially the analogue memory, where leakage<br />
currents would destroy the cell contents) while, on the other<br />
hand, digital circuits are more robust but the area penalty due<br />
to the radiation-tolerant techniques is much more significant.<br />
Radiation tests for total dose effects have shown very<br />
small variations in the parameters of the PASCAL chip up to<br />
30 Mrad. The irradiated AMBRA chips show full<br />
functionality up to 1 Mrad; unfortunately the leakage current<br />
shows a dramatic increase at 50-60 krad (Figure 8) and,<br />
although this effect does not affect the chip functionality, it<br />
would lead to an unacceptable power consumption.<br />
For this and other reasons (cost, phasing out of the 0.35<br />
µm process by Alcatel) the final version of the AMBRA chip<br />
will use the 0.25 µm technology radiation-tolerant standard<br />
cell library.<br />
V. Conclusions<br />
A 32-channel prototype and a 2-event-buffer prototype<br />
of the two ASICs for the readout of the ALICE SDDs have<br />
been designed and tested. The 2-chip system shows an<br />
excellent linearity and a good gain uniformity. The system<br />
fulfils the ALICE requirements and shows the effectiveness<br />
of the chosen architecture.<br />
A minor problem related to voltage drops in the internal<br />
power supply and reference lines has been identified and will<br />
be corrected in the final version. Since the threshold for the<br />
leakage current increase in the 0.35 µm process used for<br />
AMBRA is too close to the foreseen radiation level, the<br />
final version of both chips will be designed using the<br />
radiation-tolerant approach.<br />
VI. References<br />
[1] ALICE Technical Design Report, CERN/LHCC 99-12<br />
[2] ALICE Technical Proposal, CERN/LHCC 95-71<br />
[3] A. Rivetti et al., "A mixed-signal ASIC for the silicon<br />
drift detectors of the ALICE experiment in a 0.25 µm<br />
CMOS", CERN-2000-010, CERN-LHCC-2000-041, pp.<br />
142-146<br />
[4] G. Mazza et al., "Test Results of the ALICE SDD<br />
Electronic Readout Prototypes", CERN-2000-010,<br />
CERN-LHCC-2000-041, pp. 147-151<br />
[5] G. Anelli et al., "A Large Dynamic Range Radiation-<br />
Tolerant Analog Memory in a Quarter-Micron CMOS<br />
Technology", IEEE Trans. on Nucl. Sci., vol. 48, pp.<br />
435-439, June 2001<br />
[6] A. Rivetti et al., "A Low-Power 10-bit ADC in a 0.25<br />
µm CMOS: Design Considerations and Test Results",<br />
presented at the Nuclear Science Symposium and Medical<br />
Imaging Conference, Lyon, Oct. 2000<br />
Irradiation and SPS Beam Tests of the Alice1LHCb Pixel Chip<br />
J.J. van Hunen, G. Anelli, M. Burns, K. Banicz, M. Campbell, P. Chochula, R. Dinapoli, S. Easo, F.<br />
Formenti, M. Girone, T. Gys, A. Kluge, M. Morel, P. Riedler, W. Snoeys, G. Stefanini, K. Wyllie<br />
European Organization for Nuclear Research (CERN), 1211 Geneva 23, Switzerland<br />
jeroen.van.hunen@cern.ch<br />
A. Jusko, M. Krivda, M. Luptak<br />
Slovak Academy of Sciences, Watsonova 47, SK-043 53, Kosice<br />
M. Caselle, R. Caliandro, D. Elia,V. Manzari, V. Lenti<br />
Università degli Studi di Bari, Via Amendola, 173, I-70126, Bari<br />
F. Riggi<br />
Università di Catania, Corso Italia, 57, I-95129, Catania<br />
F. Antinori<br />
Università degli Studi di Padova , Via F. Marzolo, 8, I-35131, Padova<br />
F. Meddi<br />
Università di Roma I, La Sapienza, Piazzale Aldo Moro, 2, I-00185, Roma<br />
Abstract<br />
The Alice1LHCb front-end chip [1,2] has been designed<br />
for the ALICE pixel and the LHCb RICH detectors. It is<br />
fabricated in a commercial 0.25 µm CMOS technology, with<br />
special design techniques to obtain radiation tolerance. The<br />
chip has been irradiated with low energy protons and heavy<br />
ions, to determine the cross-section for Single Event Upsets<br />
(SEU), and with X-rays to evaluate the sensitivity to total<br />
ionising dose. We report the results of those measurements.<br />
We also report preliminary results of measurements done<br />
with 150 GeV pions at the CERN SPS.<br />
(For the ALICE collaboration)<br />
I. INTRODUCTION<br />
The aim of ALICE (A Large Ion Collider Experiment) is<br />
to study strongly interacting matter that is created by heavy<br />
ion collisions at LHC. The experiment is designed to handle<br />
particle multiplicities as high as 8000 per unit of rapidity at<br />
central rapidity. Therefore the innermost cylindrical layers<br />
need to be two-dimensional tracking detectors with a high<br />
granularity. The two innermost layers are made of ladders of<br />
hybrid silicon pixel assemblies (readout chips bump-bonded<br />
to silicon sensors) at radii 3.9 and 7.6 cm, respectively. A<br />
large part of the momentum region of interest for ALICE<br />
consists of hadrons with an energy of several hundred MeV,<br />
and the momentum and vertex resolution are dominated by<br />
multiple scattering. Therefore the amount of material in the<br />
acceptance must be minimised. A development is under way
to thin the pixel chip wafers, after bump deposition, to less<br />
than 300 µm. Sensor wafers of thickness 150 to 200 µm can be<br />
obtained in production. For the RICH detector of LHCb, the<br />
assembly will be encapsulated in a vacuum tube for the<br />
detection of the photoelectrons, and the thickness of the<br />
assembly is not relevant.<br />
Since the pixel layers are very close to the interaction<br />
region, they need to be resistant to Total Ionising Dose (TID)<br />
effects and to Single Event Effects (SEE). The expected TID<br />
after 10 years of operation at LHC is 500 krad. The effects of<br />
TID are well known: threshold shifts, leakage currents,<br />
and charge mobility reduction as a result of trapped charge<br />
and interface states [3]. SEEs originate from a large<br />
amount of charge being deposited by a single event, usually<br />
through an interaction of the incident hadron with the silicon<br />
atoms. This may lead to [3]:<br />
• Single Event Upset (SEU): a change of a logic level<br />
(0→1 or 1→0)<br />
• Single Event Latch-up (SEL): a high power supply<br />
current<br />
• Single Event Gate Rupture (SEGR): a breakdown of a<br />
transistor gate.<br />
In this paper we focus on the application in ALICE, and<br />
describe tests of the Alice1LHCb pixel chip and of single<br />
pixel assembly prototypes, consisting of 300 µm p+n sensors<br />
bump-bonded to pixel chips from wafers of 750 µm thickness.<br />
The pixel front-end chip (Alice1LHCb) contains 8192<br />
readout cells, each of 425×50 µm², and measures 13.5×15.8<br />
mm². It is designed in a 0.25 µm CMOS technology, where<br />
the radiation tolerance has been enhanced by the<br />
implementation of guard-rings and by using NMOS<br />
transistors in an enclosed geometry [4]. The measures that<br />
have been taken to reduce the effects of the TID also reduce<br />
the probability for Single Event Latch-up. Implementing<br />
redundancy for the memory cells in the chip reduces the<br />
effects of Single Event Upsets.<br />
In this document, we first outline the performance of the<br />
chip, particularly concerning the minimum threshold and<br />
noise, as derived from laboratory measurements. We then<br />
report the results of irradiation with X-rays (TID) as well as<br />
with heavy ions and 60 MeV protons (SEE). Preliminary<br />
results of the measurements with single assemblies in a 150<br />
GeV pion beam at the CERN SPS are presented.<br />
II. MINIMUM THRESHOLD AND NOISE<br />
Each pixel contains a capacitor for testing purposes. A<br />
voltage step over this capacitor injects a certain amount of<br />
charge into each pixel. For different settings of the global<br />
threshold the efficiency is measured as a function of the size<br />
of the applied voltage step. This gives a calibration of the<br />
global threshold in mV, as shown in figure 1. Using the<br />
known value of the test capacitor, and additionally the<br />
X-rays of a 55 Fe source (which give rise to around 1600<br />
electrons in 300 µm of silicon), it is found that a threshold<br />
of 20 mV corresponds to about 1000 electrons.<br />
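As a consistency check of this calibration: the 20 mV per 1000 electrons<br />
equivalence implies a test capacitance of about 8 fF, a figure derived<br />
here for illustration and not quoted in the text:<br />

```python
# Consistency check of the quoted threshold calibration.
E = 1.602e-19                        # electron charge (C)

c_test = 1000 * E / 20e-3            # "20 mV = ~1000 electrons"
assert 7.9e-15 < c_test < 8.1e-15    # ~8 fF implied test capacitance

# Fe-55 cross-check: a 5.9 keV X-ray creates ~1600 e-h pairs in silicon
pairs = 5900 / 3.6                   # 3.6 eV per electron-hole pair
assert 1600 <= pairs < 1700
```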
Figure 1: The measured average threshold of the pixel chip as a<br />
function of the setting of the DAC that is used to set the global<br />
threshold. A threshold of 20 mV is equivalent to about 1000<br />
electrons.<br />
The minimum threshold at which the chip can be operated<br />
is around 800 electrons. From the measurement of the<br />
efficiency as a function of the size of the voltage step, the<br />
noise (σ noise ) in each pixel was determined to be less than<br />
110 electrons Equivalent Noise Charge (4σ noise ≈ the<br />
difference in threshold between 2% and 98% efficiency). A<br />
pixel assembly has only a slightly higher noise (< 120 ENC),<br />
while its minimum threshold is around 1000 electrons.<br />
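The 4σ relation quoted above follows from the Gaussian cumulative<br />
distribution: the 2% and 98% efficiency points sit about 2.05σ on either<br />
side of the threshold. A quick check:<br />

```python
from statistics import NormalDist

# the 98% point of a Gaussian lies ~2.05 sigma above the mean, so the
# 2%-to-98% spread of an efficiency curve is ~4.1 sigma
z98 = NormalDist().inv_cdf(0.98)
spread_in_sigma = 2 * z98
assert abs(spread_in_sigma - 4.0) < 0.15
```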
III. SINGLE EVENT EFFECTS<br />
A hadron with an energy of several GeV does not deposit<br />
enough charge through direct ionisation to create a Single<br />
Event Effect. However, it may interact elastically and inelastically<br />
with the silicon atoms in the pixel chip. The recoils<br />
and fragments will deposit a large amount of charge in the<br />
chip, and may therefore lead to single event effects. Both<br />
SEGR and SEL are generally not observed for circuits<br />
designed in 0.25 µm technology with special layout<br />
techniques: for SEGR the electric fields are not high enough,<br />
while the implementation of guard rings prevents SEL from<br />
occurring. During measurements on the Alice1LHCb chip<br />
neither SEGR nor SEL was observed. SEUs do, however,<br />
occur, and we have to measure the cross-section for a SEU in<br />
the memory cells of the pixel chip. The cross-section is<br />
determined in two<br />
different ways. Firstly, heavy ions with an LET (Linear<br />
Energy Transfer) between 6 and 120 MeV mg −1 cm² were used,<br />
since these deposit a large amount of charge and the<br />
probability for SEU is large. From these results the SEU<br />
cross-section for other hadrons can be calculated. Secondly<br />
the measurements were repeated with 60 MeV protons.
A. Measurements with Ions<br />
In order to vary the value of the LET, different ions can be<br />
chosen (Xe 26+ , Ar 8+ , Ne 4+ , Kr 17+ ). Additionally the chip can be<br />
tilted with respect to the propagation direction of the ions to<br />
increase the path length of the ion through the sensitive part of<br />
the memory cells and therefore increase the amount of<br />
deposited charge in this region. To determine the number of<br />
SEUs the chip is loaded with a test pattern. After irradiation<br />
for a certain amount of time (several seconds or minutes) the<br />
memory cells are read out and compared with the loaded test<br />
pattern. All differences are attributed to SEUs. It was verified<br />
that there are no SEUs without irradiation. The results of the<br />
measurements are shown in figure 2. For two pixel chips the<br />
SEU cross-section was measured. The results of these two<br />
chips are in good agreement. When the LET is larger than<br />
6.3 MeV mg⁻¹ cm², enough charge is deposited to create a<br />
SEU. Increasing the LET increases the SEU cross-section. At<br />
high values of the LET (> 20 MeV mg⁻¹ cm²) the cross-section<br />
increases only slowly with LET: enough charge is available for<br />
a SEU, and what remains is the probability of depositing that<br />
charge in the sensitive region of a memory cell. The curve in<br />
the figure represents the<br />
Weibull equation [5] that is used to estimate the cross-section<br />
for SEU in the case when the chip is irradiated with hadrons<br />
instead of ions. The data presented in figure 2 were used to<br />
determine the SEU cross-section for protons with an energy of<br />
60 MeV, leading to 9×10⁻¹⁶ cm² [5].<br />
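The Weibull form used for this extrapolation can be sketched as follows; the parameter values in the example are illustrative assumptions, not the fit results of figure 2.<br />

```python
import math

# Four-parameter Weibull form commonly used for SEU cross-sections
# (cf. [5]): the cross-section rises above an onset LET and
# saturates at sigma_sat. Parameter values passed in below are
# illustrative assumptions, not the fit values behind figure 2.
def weibull_xsec(let, sigma_sat, let_onset, width, shape):
    """SEU cross-section (cm^2) as a function of LET (MeV mg^-1 cm^2)."""
    if let <= let_onset:
        return 0.0  # below the onset LET no SEU is induced
    return sigma_sat * (1.0 - math.exp(-((let - let_onset) / width) ** shape))
```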
Figure 2: The SEU cross-section as a function of the LET (Linear<br />
Energy Transfer) for two different pixel chips (chip 43 and chip 72).<br />
The curve represents the Weibull equation used in the interpretation<br />
of the data.<br />
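The counting procedure described above (load a test pattern, irradiate, read back and compare) amounts to a bitwise comparison; a minimal sketch, with illustrative word size and patterns:<br />

```python
# Sketch of the SEU counting procedure: a known pattern is loaded
# into the pixel memory cells, the chip is irradiated, the cells are
# read back, and every differing bit counts as one SEU. The 8-bit
# words and pattern values here are illustrative assumptions.
def count_seus(loaded, read_back):
    """Count bit flips between the loaded and read-back patterns."""
    assert len(loaded) == len(read_back)
    upsets = 0
    for before, after in zip(loaded, read_back):
        upsets += bin(before ^ after).count("1")  # number of differing bits
    return upsets

# Example: two single-bit flips in a small memory.
loaded    = [0b10101010, 0b11110000, 0b00001111]
read_back = [0b10101011, 0b11110000, 0b00101111]
```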
B. Measurements with 60 MeV Protons<br />
The SEU cross-section measurements were repeated with<br />
60 MeV protons in order to confirm the heavy ion results. The<br />
chip was irradiated for 7 hours, leading to a fluence of<br />
6.4×10¹² cm⁻². The results of the measurements are summarised<br />
in table 1. In total 84 SEUs were found, while 41296 memory<br />
cells were irradiated. The SEU cross-section for 60 MeV<br />
protons is thus 3.2×10⁻¹⁶ cm². This result is in good<br />
agreement with the value for 60 MeV protons as calculated<br />
from the heavy ion data (9×10⁻¹⁶ cm²).<br />
Table 1: Number of SEUs and the SEU cross-section per memory<br />
cell when irradiating with 60 MeV protons.<br />
Fluence (cm⁻²) | # SEUs | # irradiated cells | Cross-section (cm²)<br />
6.4×10¹² | 84 | 41296 | 3.2×10⁻¹⁶<br />
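The cross-section in table 1 follows directly from the quoted counts, as sigma = N_SEU / (fluence × number of irradiated cells):<br />

```python
# SEU cross-section per memory cell from the numbers in table 1.
n_seu = 84               # observed upsets
fluence = 6.4e12         # 60 MeV protons per cm^2
n_cells = 41296          # irradiated memory cells
sigma = n_seu / (fluence * n_cells)   # ~3.2e-16 cm^2, as quoted
```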
In order to obtain an estimate of the number of SEUs in the<br />
entire pixel detector of ALICE, the calculations performed for<br />
the CMS experiment [5] are scaled with the particle flux<br />
expected for ALICE and with the SEU cross-section as<br />
measured with 60 MeV protons. The neutron flux in central<br />
Pb-Pb collisions at ALICE equals 6.4×10⁴ cm⁻² s⁻¹ for the<br />
first pixel detector layer [6]. The hadron flux originating from<br />
the Pb-Pb interactions was simulated using GEANT [7] and<br />
equals 2×10⁵ cm⁻² s⁻¹. For the entire ALICE pixel detector<br />
these particle fluxes would result in an upset rate of less than<br />
1 bit, in one digital to analogue converter, every 10 hours. It<br />
can therefore be concluded that SEUs do not pose a threat to<br />
continuous operation of the ALICE pixel detector. As<br />
mentioned earlier, SEGR and SEL have not been observed<br />
during the measurements.<br />
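The upset-rate estimate scales the measured cross-section with the particle flux and the number of susceptible bits. A rough sketch follows; the bit count, and the use of the first-layer flux for the whole detector, are illustrative assumptions rather than values from the paper, whose detailed estimate follows the method of [5].<br />

```python
# Expected SEUs = flux * cross-section * susceptible bits * time.
flux = 6.4e4 + 2.0e5      # neutrons + hadrons, cm^-2 s^-1 (first layer)
sigma = 3.2e-16           # cm^2 per bit, 60 MeV proton measurement
n_bits = 3.0e5            # ASSUMED total number of DAC configuration bits
hours = 10
expected_upsets = flux * sigma * n_bits * hours * 3600.0  # order 1 per 10 h
```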
IV. TOTAL IONIZING DOSE EFFECTS<br />
The effect of the TID was studied by irradiating the pixel<br />
chip with 10 keV X-rays from a SEIFERT X-ray generator,<br />
which is available at CERN. Due to the large size of the chip<br />
the irradiation had to be performed on two different positions<br />
on the chip in order to cover all the digital to analogue<br />
converters (DAC) located at the bottom of the chip, as well as<br />
a significant part of the pixel arrays. The bottom left and right<br />
(see figure 3) were irradiated to 12 Mrad at a rate of 0.6<br />
Mrad/hour. Due to some overlap of the two irradiated regions,<br />
some parts received as much as 24 Mrad. The top part of the<br />
chip, i.e. rows 0-50, was not irradiated. Figure 3 shows the<br />
threshold map of the chip after irradiation. The determination<br />
of the threshold of each pixel is explained in section I.<br />
Figure 3: The threshold distribution of the chip after irradiation of<br />
the bottom right and left part of the chip to 12 Mrad.<br />
After the irradiation the output voltages of the DACs of the<br />
chip were up to 40 mV lower than before, due to the threshold<br />
shift in the PMOS transistors [8]. For this reason the chip was<br />
equipped with a reference DAC, which serves as a reference<br />
for the other DACs in the chip. After the irradiation the voltage<br />
supplied by this reference DAC needed to be increased by<br />
40 mV in order to obtain results as good as those of the pixel<br />
chip before irradiation. There is no significant difference<br />
between the threshold maps determined before and after the<br />
irradiation. The threshold for rows 0-50 is slightly higher,<br />
because the reference DAC, as well as two other DACs that<br />
control the digital part of the chip, were optimised for the<br />
irradiated pixels.<br />
After the irradiation the minimum threshold at which the chip<br />
can operate is unchanged (800 electrons), while the noise per<br />
pixel is still below 110 ENC. The power consumption of the<br />
chip is unaffected by the irradiation and is shown as a<br />
function of the total dose in figure 4.<br />
Figure 4: The current of the digital and analogue power supplies as a<br />
function of the TID.<br />
V. TEST WITH 150 GEV PIONS AT CERN<br />
A number of assemblies were tested in a beam of 150 GeV<br />
pions at the CERN SPS. Two different configurations were<br />
used. First, one pixel assembly was tested together with<br />
scintillators for triggering and efficiency determination (with<br />
an uncertainty of about 1%). At a later stage more pixel<br />
assemblies were added to be able to perform tracking and<br />
therefore improve the efficiency determination. The results<br />
given in this paper concern mainly the first configuration. The<br />
results from the extended configuration will be presented at a<br />
later stage. The efficiency was determined using four<br />
scintillators which select a small area, several mm 2 , of the<br />
chip. An efficiency of 100% means that for each trigger there<br />
was a hit in our pixel assembly. First the efficiency was<br />
studied as a function of the strobe delay.<br />
Figure 5: Efficiency as a function of the strobe delay, for threshold<br />
settings of 200 and 215 DAC units.<br />
The strobe is generated by the scintillator trigger signal<br />
and its duration could be varied for testing purposes from 100<br />
to 200 ns. At the CERN SPS the particle bunches arrive<br />
randomly with respect to the 10 MHz clock of the chip. The<br />
strobe width was set at 120 ns, and the trigger delay was<br />
changed in steps of 8 ns. The results are shown in figure 5.<br />
Two different threshold settings were used, namely 200 and<br />
215 which correspond to 2000 and 1000 electrons,<br />
respectively. The shape of the curve is as expected, and<br />
indicates a plateau of 20 ns. With a strobe width of 100 ns<br />
there would be no plateau. There is no significant difference<br />
between the results for the different thresholds.<br />
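The 20 ns plateau follows from simple timing arithmetic: with particles arriving randomly relative to the 10 MHz chip clock (100 ns period), the efficiency plateau is the strobe width minus one clock period. A minimal sketch:<br />

```python
# Plateau width of the efficiency-vs-strobe-delay curve: the strobe
# must cover a full clock period for every possible arrival phase,
# so the plateau is strobe_width - clock_period (zero if shorter).
def plateau_width(strobe_ns, clock_mhz=10):
    period_ns = 1000.0 / clock_mhz   # 100 ns for the 10 MHz clock
    return max(0.0, strobe_ns - period_ns)
```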
Furthermore the efficiency was studied as a function of the<br />
detector bias voltage. The results are shown in figure 6. These<br />
results show that the detector can be operated with full<br />
efficiency, over a large voltage range.<br />
Figure 6: The efficiency as a function of detector bias voltage. The<br />
curve for a threshold setting of 175 (approximately 4000 electrons)<br />
is shown for illustration purposes only; the chip will be operated at a<br />
threshold setting around 215, corresponding to 1000 electrons.<br />
Additionally the efficiency and cluster size were studied as<br />
a function of the incident angle of the particles. For this<br />
purpose the assembly could be tilted using a remote controlled
stepping motor. The results are shown in figures 7 and 8 for<br />
different thresholds. As mentioned before the chip will be<br />
operated with threshold settings in the range of 200-215,<br />
corresponding to 2000 to 1000 electrons, respectively.<br />
Figure 7: Cluster size as a function of the angle of the assembly. At<br />
zero degrees the substrate is perpendicular to the particle beam.<br />
The data in figure 7 show an increase of the cluster size with<br />
increasing assembly angle. For very high thresholds the<br />
cluster size decreases as a result of the decreasing charge<br />
deposition per cell when more cells are traversed. There is<br />
again no significant difference between the results for<br />
threshold settings of 2000 and 1000 electrons.<br />
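Geometrically, the cluster growth with tilt angle follows from the number of cells a track crosses. In the sketch below the sensor thickness and pixel pitch are illustrative assumptions, not the actual cell dimensions:<br />

```python
import math

# A tilted track crosses roughly 1 + t*tan(theta)/p cells, where t
# is the sensor thickness and p the pixel pitch along the tilt
# direction. The 200 um thickness and 50 um pitch are ASSUMED
# round numbers for illustration only.
def expected_cluster_size(angle_deg, thickness_um=200.0, pitch_um=50.0):
    return 1 + thickness_um * math.tan(math.radians(angle_deg)) / pitch_um
```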
Figure 8: The efficiency as a function of the threshold setting for<br />
zero and 45 degrees.<br />
The data in figure 8 show that the efficiency in the operation<br />
range (threshold setting between 200 and 215) is high, even at<br />
a substrate angle of 45 degrees.<br />
VI. CONCLUSIONS<br />
The Alice1LHCb pixel chip was successfully tested for<br />
total ionising dose and single event effects. It has been shown<br />
that the performance of the chip does not degrade after a total<br />
ionising dose of 12 Mrad. Neither SEGR nor SEL was<br />
detected, while the SEU cross-section was determined to be<br />
small (3.2×10⁻¹⁶ cm²). This cross-section would lead to the<br />
upset of 1 bit, in one digital to analogue converter of one chip,<br />
per 10 hours for the complete ALICE pixel detector. The tests<br />
with 150<br />
GeV pions show that the pixel assemblies perform well<br />
concerning the timing resolution, the efficiency and the<br />
cluster size.<br />
VII. REFERENCES<br />
1. K. Wyllie et al., “A pixel readout chip for tracking at<br />
ALICE and particle identification at LHCb”, Proceedings of<br />
the Fifth Workshop on Electronics for LHC Experiments,<br />
Snowmass, Colorado, September 1999.<br />
2. R. Dinapoli et al., “An analog front-end in standard 0.25µm<br />
CMOS for silicon pixel detectors in ALICE and LHCb",<br />
Proceedings of the Sixth Workshop on Electronics for LHC<br />
Experiments, Krakow, Poland, September 2000.<br />
3. F. Faccio, “COTS for the LHC radiation environment: the<br />
rules of the game", Proceedings of the Sixth Workshop on<br />
Electronics for LHC Experiments, Krakow, Poland,<br />
September 2000.<br />
4. G. Anelli et al., “Radiation Tolerant VLSI Circuits in<br />
Standard Deep Submicron CMOS technologies for the LHC<br />
Experiments: Practical Design Aspects”, IEEE Transactions<br />
on Nuclear Science, Vol. 46, No. 6 (1999) 1690-1696.<br />
5. M. Huhtinen and F. Faccio, “Computational method to<br />
estimate Single Event Upset rates in an accelerator<br />
environment”, NIM A 450 (2000) 155-172. F. Faccio: private<br />
communication.<br />
6. A. Morsch: private communication.<br />
http://AliSoft.cern.ch/offline/<br />
7. A. Badalà et al., "Geant Simulation of the Radiation Dose<br />
for the Inner Tracking System of the ALICE Detector",<br />
ALICE Internal Note, ALICE/ITS 99-01.<br />
8. F. Faccio et al., “Total Dose and Single Event Effects<br />
(SEE) in a 0.25 µm CMOS Technology”, Proceedings of the<br />
Fourth Workshop on Electronics for LHC Experiments,<br />
Rome, September 1998.<br />
VIII. ACKNOWLEDGEMENT<br />
We gratefully acknowledge two CERN summer students,<br />
S. Kapusta and J. Mercado-Perez, who were involved in<br />
different aspects of the testing. Furthermore we would like to<br />
thank F. Faccio for his advice and assistance with the<br />
measurements of the SEU cross-section.
Progress in Development of the Analogue Read-Out Chip for Silicon Strip Detector<br />
Modules for LHC Experiments.<br />
J. Kaplon 1 (e-mail: Jan.Kaplon@cern.ch), E. Chesi 1 , J.A. Clark 2 , W. Dabrowski 3 , D. Ferrere 2 , C. Lacasta 4 ,<br />
J. Lozano J. 1 , S. Roe 1 , A. Rudge 1 , R. Szczygiel 1,5 , P. Weilhammer 1 , A. Zsenei 2<br />
¹ CERN, 1211 Geneva 23, Switzerland<br />
² University of Geneva, Switzerland<br />
³ Faculty of Physics and Nuclear Techniques, UMM, Krakow, Poland<br />
⁴ IFIC, Valencia, Spain<br />
⁵ INP, Krakow, Poland<br />
Abstract<br />
We present a new version of the 128-channel analogue<br />
front-end chip SCTA128VG for readout of silicon strip<br />
detectors. Following the early prototype developed in DMILL<br />
technology we have elaborated a design with the main goal of<br />
improving its robustness and radiation hardness. The<br />
improvements implemented in the new design are based on<br />
experience gained in DMILL technology while developing the<br />
binary readout chip for the ATLAS Semiconductor Tracker.<br />
The architecture of the chip and critical design issues are<br />
discussed. The analogue performance of the chip before and<br />
after the gamma irradiation is presented. The performance of<br />
modules built of ATLAS baseline detectors read out by six<br />
SCTA chips is briefly demonstrated. The performance of a<br />
test system for wafer screening of the SCTA chips is<br />
presented including some preliminary results.<br />
I. INTRODUCTION<br />
The SCTA chip has been developed from the beginning as<br />
a backup option to the binary read-out chip ABCD [1] for the<br />
ATLAS SCT, using the DMILL technology. Currently, SCTA<br />
chips have found the following applications:<br />
• Read-out of silicon strip detectors for the NA60<br />
experiment.<br />
• Production quality assurance testing of silicon strip<br />
detectors for ATLAS SCT.<br />
• Fast read-out chip for diamond strip detectors.<br />
• Read-out of silicon pad detectors for HPD applications.<br />
A first prototype of the SCTA chip [2] was designed and<br />
manufactured in the early stages of stabilisation of the<br />
DMILL process. In the meantime the DMILL process has<br />
been improved and stabilised. The development of the ABCD<br />
binary readout chip helped us to understand better and<br />
quantify various aspects of the process like matching,<br />
parasitic couplings through the substrate and radiation effects.<br />
The conclusions from the work on the ABCD chip have been<br />
implemented in the new design of the SCTA128VG chip with<br />
the main goal of improving robustness and radiation hardness<br />
of the new chip.<br />
II. CHIP ARCHITECTURE<br />
Figure 1 shows the block diagram of the SCTA128VG<br />
chip. The SCTA128VG chip is designed to meet all basic<br />
requirements of a silicon strip tracker for LHC experiments. It<br />
comprises five basic blocks: front-end amplifiers, analogue<br />
pipeline (ADB), control logic including derandomizing FIFO,<br />
command decoder and output multiplexer. The detailed<br />
architecture of the front-end amplifier based on a bipolar input<br />
device has been discussed already in [3]. An advantage of this<br />
solution, compared to a pure CMOS version being developed<br />
for the CMS tracker [4], is significantly lower current in the<br />
input transistors required for achieving comparable noise<br />
levels.<br />
Figure 1: Block diagram of the SCTA128VG chip.<br />
The front-end circuit is a fast transimpedance amplifier<br />
followed by an integrator, providing semi-gaussian shaping<br />
with a peaking time of 16 to 24ns. This dispersion of peaking<br />
times is for the full range of expected process variations. The<br />
design peaking time for nominal values of resistors and<br />
capacitors is 20ns. The peak values are sampled at 40 MHz<br />
rate and stored in the 128-cell deep analogue pipeline (ADB).<br />
Upon arrival of the trigger the analogue data from the<br />
corresponding time slot in the ADB are sampled in the buffer<br />
and sent out through the analogue multiplexer. The gain of the<br />
front-end amplifier is about 50mV/fC. The gain of the output<br />
buffer of the analogue multiplexer is in the range of 0.8 V/V.<br />
Therefore the final gain of the whole read-out chain is roughly<br />
40mV/fC. All figures in the paper showing the gain and<br />
linearity refer to the full processing chain (front-end amplifier,<br />
ADB and output multiplexer). The front-end circuit is<br />
designed in such a way that it can be used with either polarity<br />
of the input signal, however the full read-out chain (NMOS<br />
switches in the analogue pipeline, output multiplexer) is<br />
optimised for p-side strips. The dynamic range of the<br />
amplifier is designed for 12fC input, which together with the<br />
gain of 40mV/fC gives a full swing at the output of the chip in<br />
the range of 500mV. The current in the input transistor is<br />
controlled by an internal DAC and can be set within the range<br />
0 to 320 µA. This allows one to optimise the noise according<br />
to the actual detector capacitance.<br />
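The gain figures quoted above follow from simple chain arithmetic:<br />

```python
# Read-out chain gain and output swing, from the values in the text.
front_end_gain = 50.0   # mV/fC, front-end amplifier
buffer_gain = 0.8       # V/V, output buffer of the analogue multiplexer
chain_gain = front_end_gain * buffer_gain   # ~40 mV/fC overall
full_swing = chain_gain * 12.0              # mV, for the 12 fC design range
```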
III. RESULTS FROM THE EVALUATION OF A<br />
SINGLE CHIP<br />
The basic parameters of the chip have been evaluated<br />
using internal calibration circuitry. The internal calibration<br />
circuitry provides a well-defined voltage step at the input of<br />
the calibration capacitors connected to every channel. Since<br />
the characteristic of the calibration DAC can be measured, the<br />
inaccuracy of the electronic calibration is related only to the<br />
deviation of calibration capacitors from the nominal value and<br />
the mismatch of the resistors used for scaling of the<br />
calibration voltage. For comparison with the results obtained<br />
with the electronic calibration, the absolute calibration of the<br />
chip in the set-up with a silicon pad detector and beta source<br />
is also presented.<br />
A. Basic parameters of the Front-End<br />
amplifier.<br />
The basic parameters of the amplifier are speed, gain,<br />
linearity and noise performance. The pulse shape at the output<br />
of the front-end amplifier has been evaluated by scanning the<br />
delay of the calibration signal with respect to the 40 MHz<br />
sampling clock for the analogue pipeline. In order to<br />
normalise the results to the absolute time scale the<br />
measurement has been repeated for two consecutive values of<br />
the trigger delay.<br />
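This normalisation works because two consecutive trigger delays are separated by exactly one 40 MHz clock period (25 ns): the shift between the two scans, measured in delay steps, calibrates the step size. The measured shift below is an illustrative assumption consistent with the ~1.17 ns step of figure 2:<br />

```python
# Absolute-time normalisation of the delay scan: one trigger-delay
# step corresponds to one 40 MHz clock period, i.e. 25 ns.
clock_period_ns = 1000.0 / 40.0   # 25 ns between consecutive triggers
measured_shift_steps = 21.4       # ASSUMED shift between the two scans
ns_per_step = clock_period_ns / measured_shift_steps   # ~1.17 ns/step
```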
Figure 2: Pulse shapes at the output of the multiplexer obtained from<br />
the delay scan for two consecutive trigger delays (25 ns apart).<br />
Figure 2 shows the example of the measurement done for<br />
one typical channel of the SCTA128VG chip. The injected<br />
charge was 3.5fC. The obtained 18ns peaking time is in the<br />
expected range given by the technology process variation. The<br />
distribution of the peaking times in one SCTA128VG chip is<br />
shown in Figure 3. The RMS spread of the peaking times is in<br />
the range of 0.6%.<br />
Figure 3: Distribution of the channel peaking times in one<br />
SCTA128VG chip (128 entries, mean 18.44 ns, RMS 0.12 ns). The<br />
RMS spread is about 0.6%.<br />
Figure 4 shows the gain linearity for one channel in the<br />
chip. The gain is 43mV/fC and a good linearity is kept up to<br />
16fC, which is the maximum range of the calibration DAC.<br />
The overall distribution of the gain in one SCTA128VG chip<br />
is presented in Figure 5. The RMS spread of the gains is about<br />
2%, which is very good for tracking applications.<br />
Figure 4: Gain linearity for one channel of the SCTA128VG chip<br />
(offset = -3.1 mV, gain = 43.3 mV/fC).<br />
The noise measurements have been done for the whole<br />
chip working with a 40 MHz clock sampling data into the<br />
analogue memory and with random readout of ADB cells. In this way<br />
any pedestal variation between ADB cells will contribute to<br />
the overall noise performance of the chip. The distributions of<br />
the ENC in one particular SCTA128VG chip for various input<br />
transistor biases are shown in Figure 6. For input transistor<br />
current ranging from 120µA up to 300µA the equivalent noise<br />
charge varies between 480 and 630e-. The RMS spreads of<br />
the ENC are in the range of 2 to 2.5%.<br />
In order to verify the measurements with the internal<br />
calibration signal a set-up with a detector and beta source has<br />
been built. The SCTA128VG chip was connected to a<br />
SINTEF Silicon pad detector of thickness of 530µm. The<br />
detector bias voltage was set to 400V, 265V above the<br />
depletion voltage, providing sufficiently fast charge collection<br />
from the pads. The SCTA128VG chip was operating under<br />
nominal bias condition with input transistor current set to<br />
200µA. Figure 7 shows the signal distribution from the<br />
detector exposed to beta particles. The gain extracted from
this signal distribution, assuming the Landau peak<br />
corresponding to a charge of 5.7fC, is in the order of<br />
44.2mV/fC. This has to be compared with the gain of<br />
45.6mV/fC measured with internal calibration circuitry for the<br />
channel connected to a detector pad. A minor difference of<br />
3% between the results of two measurements could be<br />
explained not only by the tolerance of the calibration<br />
capacitors and inaccuracy of the band-gap reference but also<br />
by a ballistic deficit for charge collected from the detector.<br />
The difference between charge collection time from the<br />
detector and charge injected from the calibration circuitry is in<br />
the range of 5ns, which is not negligible for a front-end<br />
amplifier with 18ns peaking time.<br />
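The absolute-calibration arithmetic quoted above can be checked directly:<br />

```python
# Gain from the beta-source data: the Landau peak at 252 mV is
# attributed to a most probable charge of 5.7 fC in the 530 um
# pad detector, and compared to the internal-calibration gain.
landau_peak_mv = 252.0
mp_charge_fc = 5.7
beta_gain = landau_peak_mv / mp_charge_fc      # ~44.2 mV/fC
cal_gain = 45.6                                # mV/fC, internal calibration
rel_diff = (cal_gain - beta_gain) / cal_gain   # ~3%, as stated in the text
```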
Figure 5: Distribution of channel gains in one SCTA128VG chip<br />
(128 entries, mean 44.9 mV/fC, RMS 0.91 mV/fC). The RMS spread<br />
is in the range of 2%.<br />
Figure 6: Distribution of ENC in a single SCTA128VG chip for<br />
different input transistor bias currents (120 µA: 485 e-; 200 µA:<br />
560 e-; 300 µA: 630 e-). The spread of the equivalent noise charge<br />
is in the range of 2 to 2.5%.<br />
Figure 7: Histogram of data taken with a silicon pad detector and a<br />
¹⁰⁶Ru beta source, showing the Landau peak at 252 mV.<br />
B. Performance of the analogue memory<br />
(ADB).<br />
One of the most important parameters of the analogue<br />
memory, which will define its contribution to the overall<br />
noise performance of the chip, is the uniformity of the DC<br />
offsets (pedestals) between ADB cells.<br />
Figure 8: ADB pedestal map in one SCTA128VG chip.<br />
Figure 9: Distribution of the ADB pedestal spread per channel in<br />
one SCTA128VG chip (128 entries, mean 1.11 mV, RMS 0.18 mV).<br />
The 1.1 mV mean value of the distribution is equivalent to<br />
150 e- ENC.<br />
Figure 8 shows the pedestal map of 128x128 ADB cells in<br />
one chip. From the presented figure one can extract the ADB<br />
cell-to-cell variation for all channels of the chip. The<br />
distribution of the ADB pedestal spreads for all channels in<br />
one particular chip is shown in Figure 9. The 1.1 mV mean<br />
value of the distribution is equivalent to an extra, uncorrelated<br />
noise contribution of 150 e- ENC on top of the noise generated<br />
by the front-end. For a low value of the input current and a low<br />
detector capacitance this additional contribution is about 4%.<br />
For higher detector capacitance this contribution becomes<br />
negligible. One can notice the high channel-to-channel<br />
uniformity of the analogue memory confirmed by a narrow<br />
(RMS ~ 10%) distribution of the pedestal spreads (Figure 9).<br />
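Since the cell-to-cell pedestal variation is uncorrelated with the front-end noise, the two contributions add in quadrature; at the 200 µA operating point (560 e-) the 150 e- ADB term costs only a few percent:<br />

```python
import math

# Quadrature sum of the front-end ENC and the uncorrelated ADB
# pedestal-spread contribution, at the 200 uA operating point.
front_end_enc = 560.0   # e-
adb_enc = 150.0         # e- equivalent of the 1.1 mV pedestal spread
total = math.hypot(front_end_enc, adb_enc)   # ~580 e-
increase = total / front_end_enc - 1         # a few percent
```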
IV. PERFORMANCE OF THE SCTA128VG CHIP<br />
CONNECTED TO A SILICON STRIP DETECTOR.<br />
To demonstrate the performance of the SCTA128VG chip<br />
reading out long silicon strip detectors, several modules<br />
equipped with 12.8cm ATLAS SCT type sensors have been<br />
built. A six-chip ceramic hybrid holding two silicon detectors<br />
of size 6.3 × 6.4 cm is shown in Figure 10.
The noise performance of the SCTA128VG chip may be<br />
optimised according to the detector capacitance by adjustment<br />
of the current in the input transistor. Figure 11 shows the<br />
results of noise measurements of one module with SCTA<br />
chips connected to 6.4 and 12.8cm long silicon strip detectors.<br />
Figure 10: Photograph of a six-chip module with 12.8 cm strip<br />
silicon detectors (ATLAS type).<br />
The measurement has been done for various bias<br />
conditions of the input transistor. It should be noted that the<br />
noise performance of the chip connected to 12.8cm strips<br />
could be improved by increasing the bias current of the input<br />
transistor. The reduction of ENC is relatively smaller for high<br />
current since the noise of the base spread resistance and the<br />
noise of the strip resistance become limiting factors.<br />
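The module noise figures are consistent with the linear model ENC = a + b·C of figure 11; in the sketch below the strip capacitance per unit length is an illustrative assumption:<br />

```python
# Linear ENC model from figure 11: ENC = a + b * C_input.
def enc(c_pf, a, b):
    return a + b * c_pf   # e-

cap_per_cm = 1.3   # ASSUMED strip capacitance, pF/cm
# 200 uA bias point (a = 560 e-, b = 62 e-/pF) for 12.8 cm strips:
enc_12p8 = enc(12.8 * cap_per_cm, a=560.0, b=62.0)
```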
Figure 11: ENC for SCTA128VG chips connected to silicon strip<br />
detectors of various lengths (6.4 cm and 12.8 cm strips). The fitted<br />
dependences are: Ibias = 120 µA: ENC ~ 485 + 72 e-/pF;<br />
Ibias = 200 µA: ENC ~ 560 + 62 e-/pF; Ibias = 300 µA:<br />
ENC ~ 630 + 55 e-/pF.<br />
Figure 12: Signal-over-noise histogram of data taken with a<br />
100 GeV pion beam for the 12.8 cm detector module (peak =<br />
11.4 ± 0.01, sigma = 1.30). Measurement done at 200 µA input<br />
transistor current.<br />
Figure 12 shows a Signal-over-Noise distribution of data<br />
taken with 100GeV pion beam for the SCTA chip connected<br />
to a 12.8cm long and 280µm thick silicon strip detector. The<br />
SCTA128VG chip was operated under nominal bias<br />
conditions with input transistor current set to 200µA. The<br />
ENC of 1850e- extracted from Figure 12 has to be compared<br />
with an ENC of 1700e- measured with the internal calibration<br />
circuit. The difference may be explained by the ballistic<br />
deficit and charge loss due to the inter-strip capacitances of<br />
the detector.<br />
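The ENC extracted from the S/N peak follows from the most probable MIP charge in the sensor; the ionisation density assumed below is an illustrative round number:<br />

```python
# ENC from the signal-over-noise peak of figure 12.
ionisation_e_per_um = 75    # ASSUMED most-probable e-/um for a MIP
thickness_um = 280          # detector thickness from the text
q_mip = ionisation_e_per_um * thickness_um   # ~21000 e-
s_over_n = 11.4             # Landau peak of the S/N distribution
enc_from_sn = q_mip / s_over_n               # ~1850 e-, as quoted
```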
V. RESULTS OF THE X-RAY IRRADIATION<br />
Although the SCTA128VG chip is realised in DMILL<br />
radiation hard technology the radiation effects in the devices<br />
cannot be ignored. The critical issue is the noise in the front-end<br />
amplifier. A second order effect is a possible degradation of<br />
matching which may affect the uniformity of the channels in<br />
terms of gain, speed and the ADB performance.<br />
Figure 13: Distribution of ENC before and after 10 Mrad (before:<br />
ENC = 580 e-, RMS = 24 e-; after: ENC = 620 e-, RMS = 25 e-).<br />
After irradiation the ENC increases by about 6%. The measurements<br />
were performed at 200 µA input bias current.<br />
The irradiations have been performed at CERN using a<br />
facility providing 10keV energy X-Rays at two dose rates: 8<br />
and 33kRad/min. No annealing has been applied. During the<br />
irradiation we have evaluated the analogue parameters such as<br />
gain, noise, peaking time, and ADB uniformity as well as<br />
power consumption in the analogue and digital parts of the<br />
circuit.<br />
Figure 14: Distribution of channel gains before and after 10 Mrad<br />
(before: gain = 44 mV/fC, RMS = 0.9 mV/fC; after: gain =<br />
41 mV/fC, RMS = 1.2 mV/fC). After irradiation there is a<br />
noticeable drop of gain in the order of 7%, and an increase of the<br />
gain spread from 2 to 3%.<br />
Figure 13 shows the distribution of ENC in one SCTA<br />
chip before and after irradiation. The increase of parallel noise<br />
due to the BJT beta degradation is as expected and can be<br />
neglected for a chip operating on a detector module, where<br />
the serial noise due to the capacitive load is dominant.<br />
Figure 14 shows the distribution of channel gains in the
SCTA128VG chip before and after irradiation. After<br />
irradiation one can observe a 7% decrease of gain and an<br />
increase of gain spread from 2 to 3%. The evolution of power<br />
consumption during the irradiation is shown in Figure 15.<br />
The small (8%) decrease in analogue power consumption<br />
is due to the drift of the resistors in the internal band-gap<br />
reference and could be compensated by a change of the bias<br />
DAC setting. The peaking time and the uniformity of the<br />
ADB pedestals were unaffected by the X-Ray irradiation.<br />
[Figure 15 plot: current in mA versus dose in MRad, with curves for<br />
the analogue and the digital power consumption.]<br />
Figure 15: Evolution of analogue and digital power consumption of<br />
SCTA128VG chip during the X-Ray irradiation.<br />
VI. WAFER SCREENING SYSTEM FOR<br />
SCTA128VG CHIP<br />
In order to qualify good dies, a wafer screening<br />
system has been developed. The system is based on an<br />
automatic probe station with all movements programmed.<br />
Using a standard probe card it was possible to test the SCTA<br />
chips under nominal bias conditions and at the full 40 MHz<br />
readout speed. The results of all tests, together with the chip<br />
coordinates, are saved to a file for off-line analysis. The system<br />
provides the capability of evaluating all analogue parameters,<br />
such as gain, noise, peaking time and ADB uniformity, as well<br />
as the chip power consumption. It also enables defect<br />
analysis, which is part of the design validation.<br />
Figure 16: Example of a wafer map (wafer 8) with chips classified<br />
according to the number of defects: chips accepted, chips rejected,<br />
and chips with one defective channel.<br />
Figure 16 shows an example of a typical wafer map with<br />
SCTA128VG chips classified according to the number of<br />
defects detected during analysis. The presented wafer shows a<br />
yield of roughly 30% for perfect chips. The fraction of chips<br />
with a single defect (a channel gain outside the specified 20%<br />
range or a single ADB pedestal outside the normal distribution)<br />
was around 15%. The SCTA chips with single defects are<br />
typically used for the evaluation of hybrids, where 100% good<br />
channels are not required as they are for the final detector<br />
modules. Figure 17 shows the distribution of the chip gains on<br />
one typical wafer. One can see good uniformity (about 5%<br />
RMS) of the mean value of the gains for SCTA128VG chips<br />
over the whole wafer.<br />
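The accept/reject binning behind the wafer map can be sketched as follows; the category names and the single-defect threshold mirror the rule described above, while the wafer coordinates are made up.<br />

```python
def classify(defect_counts, max_single=1):
    """Bin chips by defect count, following the acceptance rule in the
    text: 0 defects = accepted, 1 defect = usable for hybrid
    evaluation, more = rejected. `defect_counts` maps (col, row) to
    the number of defective channels found during analysis."""
    categories = {}
    for coord, n in defect_counts.items():
        if n == 0:
            categories[coord] = "accepted"
        elif n <= max_single:
            categories[coord] = "single-defect"
        else:
            categories[coord] = "rejected"
    return categories

# Illustrative 2x2 corner of a wafer map:
wafer = {(0, 0): 0, (0, 1): 1, (1, 0): 3, (1, 1): 0}
print(classify(wafer))
```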
Figure 17: Distribution of the chip gains (in mV/fC) over wafer 8.<br />
VII. CONCLUSIONS<br />
The SCTA128VG chip is an implementation of a full<br />
analogue architecture satisfying the requirements of LHC<br />
experiments. The analogue performance of the SCTA128VG<br />
chip is adequate for the readout of LHC type Si strip detector<br />
modules. Excellent uniformity of the analogue parameters on<br />
the chip level as well as on the wafer level has been shown.<br />
The results of the X-Ray irradiation show radiation hardness<br />
of the SCTA128VG chip up to the ionising doses required by<br />
LHC experiments. A system for wafer screening of the<br />
SCTA128VG chip has been presented. It allows design<br />
validation in terms of defect analysis as well as the selection<br />
of good dies for the users.<br />
VIII. REFERENCES<br />
[1] W. Dabrowski et al., Design and Performance of the<br />
ABCD Chip for the Binary Readout of Silicon Strip<br />
Detectors in the ATLAS Semiconductor Tracker. IEEE<br />
Transactions on Nuclear Science, vol.47, no.6, pt.1, Dec.<br />
2000, pp.1843-50.<br />
[2] W. Dabrowski et al., Performance of a 128 Channel<br />
Analogue Front-End Chip for Read-Out of Si Strip<br />
Detector Modules for LHC Experiments. IEEE<br />
Transactions on Nuclear Science, vol.47, no.4, pt.1, Aug.<br />
2000, pp.1434-41.<br />
[3] J. Kaplon et al., Analogue Readout Chip for Si Strip<br />
Detector Modules for LHC Experiments. Sixth Workshop<br />
on Electronics for LHC Experiments, Cracow, September<br />
11-15, 2000, CERN/LHCC/2000-041.<br />
[4] M. Raymond et al., The CMS Tracker APV25 0.25µm<br />
CMOS Readout Chip. Sixth Workshop on Electronics for<br />
LHC Experiments, Cracow, September 11-15, 2000,<br />
CERN/LHCC/2000-041.<br />
The ALICE on-detector pixel PILOT system - OPS<br />
Kluge, A. 1 , Anelli, G. 1 , Antinori, F. 2 , Ban, J. 3 , Burns, M. 1 , Campbell, M. 1 , Chochula, P. 1, 4 ,<br />
Dinapoli, R. 1 , Formenti, F. 1 ,van Hunen, J.J. 1 , Krivda, M. 3 , Luptak, M. 3 , Meddi, F. 5 ,<br />
Morel, M. 1 , Riedler, P. 1 , Snoeys, W. 6 , Stefanini, G. 1 , Wyllie K. 1<br />
1 CERN, 1211 Geneva 23, Switzerland<br />
2 Istituto Nazionale di Fisica Nucleare, Sezione di Padova, I-35131 Padova, Italy<br />
3 Institute of Experimental Physics, 04353 Kosice, Slovakia<br />
4 Comenius University, 84215 Bratislava, Slovakia<br />
5 Università di Roma “La Sapienza”, I-00185 Roma, Italy<br />
6 on leave of absence from CERN, 1211 Geneva 23, Switzerland<br />
Abstract<br />
The on-detector electronics of the ALICE silicon pixel<br />
detector (nearly 10 million pixels) consists of 1,200 readout<br />
chips, bump-bonded to silicon sensors and mounted on the<br />
front-end bus, and of 120 control (PILOT) chips, mounted on<br />
a multi chip module (MCM) together with opto-electronic<br />
transceivers. The environment of the pixel detector is such<br />
that radiation tolerant components are required. The front-end<br />
chips are all ASICs designed in a commercial 0.25-micron<br />
CMOS technology using radiation hardening layout<br />
techniques. An 800 Mbit/s Glink-compatible serializer and<br />
laser diode driver, also designed in the same 0.25 micron<br />
process, is used to transmit data over an optical fibre to the<br />
control room where the actual data processing and event<br />
building are performed. We describe the system and report on<br />
the status of the PILOT system.<br />
I. INTRODUCTION<br />
A. Detector<br />
Figure 1: ALICE Silicon Pixel Detector (read-out electronics, pixel<br />
chips; two ladders form a half-stave)<br />
Two ladders (5 pixel chips each), mounted on a front-end<br />
bus, constitute a half-stave. The complete detector consists of<br />
120 half-staves on two layers, 40 half staves in the inner layer,<br />
80 in the outer layer. The detector is divided into 10 sectors<br />
(in φ-direction). Each sector comprises two staves in the inner<br />
layer and four staves in the outer layer. Thus one detector sector<br />
contains six staves. Fig. 1 illustrates the ALICE silicon pixel<br />
detector. [1, 2]<br />
B. Design considerations<br />
Table 1 summarizes the main design parameters of the<br />
readout system.<br />
Table 1: System parameters<br />
L1 latency 5.5 µs<br />
L2 latency 100 µs<br />
Max. L1 rate 1 kHz<br />
Max. L2 rate 800 Hz<br />
Radiation dose in 10 years < 500 krad<br />
Neutron flux in 10 years 3 × 10^11 cm^-2<br />
Total number of pixels 9.8184 × 10^6<br />
Occupancy < 2%<br />
Although the L1 trigger rate and the L2 trigger rate are low<br />
compared to other LHC experiments, the raw data flow yields<br />
almost 1 GB/s.<br />
The expected radiation dose and the neutron flux are at least<br />
one order of magnitude lower than in the ATLAS or CMS<br />
experiments. However, commercial off-the-shelf components<br />
still cannot be used. Therefore, the ASICs have been<br />
developed in a commercial 0.25-micron CMOS technology<br />
using radiation hardening layout techniques [3]. Precautions<br />
have been taken to reduce malfunctions due to single event<br />
upsets. A minimum of data processing is performed on the<br />
detector, which in turn simplifies the ASIC developments.<br />
II. SYSTEM ARCHITECTURE<br />
A. System overview<br />
Fig. 2 shows a block diagram of the system electronics.<br />
The 10 pixel chips of one half stave are controlled and read
out by one PILOT multi chip module (MCM). The PILOT<br />
MCM transfers the data to the control room. In the control<br />
room 20 9U-VME-based router cards, two for each detector<br />
sector, receive the data. One router card contains six data<br />
converter daughter boards, one for each half stave. The data<br />
converters process the data and store the information in an<br />
event memory. The router merges the hit data from 6 half<br />
staves into one data block, processes the data and stores them<br />
into a memory where the data wait to be transferred to the<br />
ALICE data acquisition (DAQ) over the detector data link<br />
DDL [4].<br />
Figure 2: System block diagram (on the detector, each of half-staves<br />
0-5 in sectors 0-19 is read out by a pilot MCM; in the control room,<br />
data converters 0-5 on routers 0-19 send the data over the DDL to the<br />
ALICE DAQ)<br />
Figure 3: Read-out chain (pixel chips and pixel bus, pilot MCM with<br />
pixel transmit G-link and pixelcontrol receive; optical links to the<br />
link receiver, data processing, event memory, converter and control<br />
daughter card, pixelcontrol transmit and router in the control room;<br />
L1, L2y, L2n, testpulse and jtag signals)<br />
B. PILOT logic and optical pixel transmitter<br />
Fig. 3 illustrates a block diagram of the read-out chain.<br />
When the ALICE DAQ issues a L1 trigger signal, the pixel<br />
router forwards the signal via the pixel control transmitter and<br />
the pixel control receiver to the PILOT logic. The PILOT chip<br />
asserts a strobe signal to all pixel chips [5], which store the<br />
delayed hit information into multi event buffers in the pixel<br />
chips. Once a L2 accept signal (L2y) is asserted and<br />
transmitted to the detector, the PILOT chip initiates the<br />
readout procedure of the 10 pixel chips one after the other.<br />
The 256 rows of 32 pixels of a pixel chip are presented<br />
sequentially on a 32-bit bus. The read-out clock frequency is<br />
10 MHz. As a result, the read-out of 10 chips takes about 256<br />
µs.<br />
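The quoted read-out time follows directly from these numbers; a minimal check:<br />

```python
# Readout-time arithmetic from the text: 256 rows per chip presented
# on a 32-bit bus at 10 MHz, with the 10 chips of a half-stave read
# out one after the other.
ROWS_PER_CHIP = 256
CHIPS_PER_HALF_STAVE = 10
CLOCK_HZ = 10e6  # 10 MHz read-out clock

cycle_ns = 1e9 / CLOCK_HZ                  # 100 ns per 32-bit word
chip_us = ROWS_PER_CHIP * cycle_ns / 1e3   # 25.6 us per chip
total_us = CHIPS_PER_HALF_STAVE * chip_us  # 256 us per half-stave

print(total_us)  # 256.0
```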
Figure 4: Transmission principle (the 32-bit pixel data bus running<br />
at 10 MHz is multiplexed 4:1 onto 16-bit GOL words at 40 MHz,<br />
interleaving pixel data with data-control and signal-feedback blocks)<br />
The PILOT logic performs no data processing but directly<br />
transmits the data to the control room. This approach has several<br />
advantages. First, the on-detector PILOT ASIC architecture is<br />
simple. Second, the system becomes more reliable, as the<br />
complex data processing units are accessible during operation<br />
in the control room. Finally, if the detector hit occupancy<br />
increases in the future, data compression schemes can be<br />
implemented in the FPGA-based electronics located in the<br />
control room.<br />
For the optical transmission of the data to the control room<br />
the encoder-serializer gigabit optical link chip GOL [6] is<br />
used. The GOL allows the transmission of 16 bit data words<br />
every 25 ns resulting in an 800 Mbit/s data stream. The data<br />
are encoded using the Glink [7] protocol. This chip has<br />
already been developed at CERN.<br />
The pixel data stream arrives from the pixel chips at the<br />
PILOT chip on a 32-bit bus in 100 ns cycles. That means that<br />
the transfer bandwidth of the GOL is twice as high as<br />
required. The 100 ns pixel data cycle is split up into four 25<br />
ns GOL transmission cycles. Fig. 4 shows the transmission<br />
principle. In two consecutive GOL cycles, 16 bits of pixel<br />
data are transmitted. The remaining two transmission cycles<br />
are used to transmit data control and signal feedback signal<br />
blocks. The control block contains information directly<br />
related to the pixel hit data transmission, such as start and end<br />
of transmission, error codes, but also event numbers. In the<br />
signal feedback block, all trigger and configuration data sent<br />
from the control room to the detector are sent back to the<br />
router for error detection.<br />
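A minimal sketch of this 4:1 interleaving, assuming (for illustration only) a fixed slot order of high pixel half-word, low pixel half-word, control block, feedback block:<br />

```python
def gol_cycles(pixel_word32, control16, feedback16):
    """Split one 100 ns pixel-bus cycle into four 25 ns GOL words,
    following Figure 4: two 16-bit halves of the 32-bit pixel word,
    then the data-control block and the signal-feedback block.
    The exact slot order is an assumption here."""
    hi = (pixel_word32 >> 16) & 0xFFFF
    lo = pixel_word32 & 0xFFFF
    return [hi, lo, control16 & 0xFFFF, feedback16 & 0xFFFF]

# One pixel row with hits in bits 0 and 31, an illustrative control
# word, and an echo of the last command for error detection.
cycles = gol_cycles(0x8000_0001, control16=0x00A5, feedback16=0x1234)
print([hex(c) for c in cycles])  # ['0x8000', '0x1', '0xa5', '0x1234']
```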
Upon receipt of a L2 reject (L2n) signal the corresponding<br />
locations in the multi event buffers of the pixel chips are cleared<br />
and the PILOT initiates a short transmission sequence to<br />
acknowledge the reception of the L2n signal.
C. Data converter<br />
The serial-parallel converter receives the Glink data<br />
stream and recovers the 40 MHz transmission clock using a<br />
commercial component [8]. The implementation of the link<br />
receiver is based on a commercial FPGA and storage devices.<br />
Fig. 5 shows a block diagram of the data converter. The<br />
received data is checked for format errors and zero<br />
suppression is conducted before the data are loaded into a<br />
FIFO. The expected occupancy of the detector will not exceed<br />
2%. As a result, it is economic to encode the raw data format.<br />
In the raw data format the position of a hit within a pixel row<br />
is given by the position of logic ‘1’ within a 32-bit word. The<br />
encoder transforms the hit position into a 5-bit word giving<br />
the position as a binary number for each single hit and<br />
attaches chip and row number to the data entry [9]. The output<br />
data from the FIFO are encoded and stored in an event<br />
memory in a data format complying with the ALICE DAQ<br />
format [10]. There it waits until merged with the data from the<br />
remaining five staves by the router electronics.<br />
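The hit-position encoding can be sketched as follows; the tuple layout (chip, row, 5-bit column) is illustrative, not the exact ALICE data format:<br />

```python
def encode_row(chip, row, word32):
    """Zero-suppress one 32-pixel row: each set bit in the raw 32-bit
    word becomes a 5-bit column address, tagged with the chip and row
    number (field layout is illustrative)."""
    hits = []
    for col in range(32):
        if word32 & (1 << col):
            hits.append((chip, row, col))  # col always fits in 5 bits
    return hits

# Row 17 of chip 3 with hits in columns 0 and 31:
print(encode_row(3, 17, 0x8000_0001))  # [(3, 17, 0), (3, 17, 31)]
```

With the occupancy below 2%, most rows return an empty list, which is why the encoded format is far more economical than shipping raw 32-bit words.<br />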
Figure 5: Link receiver data converter (HDMP-1034 deserializer,<br />
FIFO, encode and format stage)<br />
D. Pixel control transmitter and receiver<br />
Figure 6: Pixel control block diagram (L1, L2y, L2n, test_pulse,<br />
reset and jtag/reset signals enter pixelcontrol_transmit, which<br />
drives the clk40 clock and data lines; pixelcontrol_receive recovers<br />
the same signals on the detector)<br />
Figure 7: Pixel control data format (idle words; command cycles for<br />
L1, L2y, L2n, reset and reset_jtag; Jtag commands carrying tms and<br />
tdi bits)<br />
The pixel control transmitter and receivers are responsible<br />
for the transmission of the trigger and configuration signals<br />
from the control room to the detector. This includes the<br />
following signals: L1, L2y, L2n trigger signals, reset signals,<br />
a test pulse signal and JTAG signals.<br />
The data must arrive at the detector in a 10 MHz binning,<br />
since the on detector PILOT system clock frequency is 10<br />
MHz. During data read-out of the detector the JTAG access<br />
functionality is not required and vice versa. The link is<br />
unidirectional since the return path for the JTAG system<br />
(TDO) uses the Glink data link. The data protocol must be<br />
simple in order to avoid complex recovery circuitry on the<br />
detector in the PILOT chip. As a result, all commands must be<br />
DC balanced. (The number of ‘1’s and ‘0’s in the command<br />
code must be equal.)<br />
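The DC-balance requirement is easy to check programmatically; the example command codes below are illustrative, not the actual PILOT command set:<br />

```python
def dc_balanced(code, width=8):
    """A command is DC balanced when its fixed-width serial pattern
    contains as many '1's as '0's. Commands are eight 40 MHz bits
    (two transmit cycles) long, hence width=8 by default."""
    ones = bin(code & ((1 << width) - 1)).count("1")
    return ones == width - ones

# Illustrative command codes:
assert dc_balanced(0b11001010)      # 4 ones, 4 zeros -> balanced
assert not dc_balanced(0b11101010)  # 5 ones, 3 zeros -> unbalanced
```

Keeping every command DC balanced lets the on-detector receiver stay simple, since no elaborate clock/data recovery circuitry is needed.<br />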
The data transmission is performed using two optical<br />
fibres, one carrying the 40 MHz clock and the other the actual<br />
data.<br />
The pixel control transmitter (see fig. 6) translates the<br />
commands into a serial bit stream. A priority encoder selects<br />
the transmitted signal in case two commands are active at the<br />
same time. L1 is the only signal where the transmission<br />
latency must be kept constant. Therefore, a L1 trigger<br />
transmission must immediately be accepted by the pixel<br />
control transmitter and, thus, has highest priority. A conflict<br />
would arise if the transmitter were in the process of sending a<br />
command at the same time as a L1 transmission request<br />
arrives. In order to avoid this situation the L1 trigger signal<br />
will always be delayed by the time it takes to<br />
serialize a command (200 ns). During this delay time, all<br />
command transmissions are postponed to after the L1 signal<br />
transmission. Thus, when the delayed L1 trigger signal arrives<br />
at the transmitter, no other command can be in the<br />
transmission pipeline.<br />
Fig. 7 illustrates the data protocol. Four 40 MHz clock<br />
cycles form a command cycle. At start-up 64 idle patterns are<br />
sent to the receiver. The receiver synchronizes to this idle<br />
pattern. Commands are always two transmit cycles (or eight<br />
40 MHz cycles) long. The number of different commands<br />
requires a two transmit cycle command length. After each<br />
transmission of an idle word, a transmission command can<br />
follow. Since the idle word is only 100 ns long, the<br />
transmission of a command can be started in a 100 ns binning.<br />
However, the duration of a command transmission is 200 ns [11].<br />
E. Fast multiplicity<br />
The pixel chips provide an analog fast multiplicity current<br />
signal. This signal is proportional to the number of pixels<br />
being hit. As it is a current signal, the sum of all 10 chips on a<br />
half stave can be obtained by connecting the 10 fast<br />
multiplicity outputs together. The use of this signal to<br />
generate a multiplicity and vertex trigger for the ALICE<br />
trigger system is currently under investigation [12].<br />
For the read-out of this signal, two options exist. One<br />
option is to use an A/D-converter and transmit the signal<br />
using the PILOT system and the Glink interface. The other<br />
option is to use an analog optical link [13] to transmit the<br />
information independently of the digital data stream. The<br />
drawback of the first option is the additional time delay incurred<br />
when inserting the signal into the Glink data stream, which<br />
prohibits the use of the trigger signal in the L0 application in<br />
ALICE. The disadvantage of the second option is the need for<br />
an additional optical package; the available space for<br />
components is very restricted, as described below.<br />
III. PHYSICAL IMPLEMENTATION<br />
A. Pixel bus and pixel extender<br />
Fig. 8 shows a side view of the mechanical assembly,<br />
fig. 9 a top view. At the bottom of fig. 8, the carbon-fibre<br />
structure and the cooling tube, which hold the detector<br />
components, can be seen. The pixel chips and the sensor<br />
ladders are bump-bonded and glued directly on top of the<br />
carbon-fibre structure. On top of the assembly the pixel bus is<br />
glued. The pixel bus is an aluminium-based multi-layer bus<br />
structure, which provides both power and data to the chips.<br />
The connections between the pixel bus and the ladder<br />
assembly are made by wire bonds. Passive components are<br />
soldered on top of the pixel bus. The PILOT MCM is attached<br />
to the structure in a similar way. Two copper bus structures,<br />
known as the pixel extenders, supply the pixel bus and the<br />
PILOT MCM with power.<br />
Figure 8: Pixel bus and extender, side view (capacitors, pixel bus,<br />
detector, readout chips, analogue and digital PILOT, GOL and<br />
optical links on the carbon fiber support with the cooling tube;<br />
extenders at the side)<br />
Figure 9: Pixel bus and extender, top view (ladders 1 and 2, each<br />
70.72 mm long, with pixel chips and pixel detector on the 12 mm<br />
wide Al pixel carrier; Pilot MCM with optical receiver and<br />
transmitter; 1000 mm flexible extender)<br />
B. PILOT MCM<br />
Fig. 10 shows a diagram of the PILOT MCM. Due to<br />
mechanical constraints, the MCM must not exceed 50 mm in<br />
length and 12 mm in width. Components can only be placed<br />
in a 5 mm-wide corridor in the middle of the MCM. A special<br />
optical package is being developed, which is less than 1.4 mm<br />
in height and houses two pin diodes and a laser diode [14].<br />
Due to the height constraints for components, all chips<br />
will be directly glued and bonded onto the MCM without a<br />
package. Fig. 10 shows the GOL, which must be in close<br />
vicinity to the optical package in order to keep the 800 Mbit/s<br />
transmission line short. The distance from the connector to the<br />
GOL is less critical, as only 40 Mbit/s signals are connected<br />
to the optical package. On the very left, the analog PILOT<br />
chip is shown. It is an auxiliary chip for the pixel chips and<br />
provides bias voltages.<br />
Figure 10: PILOT MCM (analogue PILOT, digital PILOT, GOL,<br />
laser and pin diodes)<br />
C. PILOT chip<br />
The PILOT chip layout can be seen in fig. 11. The chip<br />
size of 4 x 6 mm is determined by the number of I/O pins. The<br />
chip has been produced in a 0.25 micron CMOS technology<br />
using special layout techniques to enhance radiation tolerance<br />
[3]. A comprehensive description of the PILOT chip can be<br />
found in [11, 15].<br />
Figure 11: PILOT layout<br />
D. GOL chip<br />
The GOL chip has already been tested and its performance<br />
is described in [6].<br />
E. Single event upset<br />
Although the expected neutron fluence is comparatively<br />
low, design precautions have been undertaken to prevent<br />
single event upsets from causing malfunctions. In both the<br />
PILOT chip and the GOL chip, all digital logic has been<br />
triplicated and all outputs are the result of majority voting.<br />
Internal state machines are made in a self-recovering manner.<br />
Fig. 12 shows the principle. In case a flip-flop in a state<br />
machine changes its state due to a single event upset, the<br />
correct state will be recovered using the state of the remaining<br />
two state machines.
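The triplication and majority voting described above can be sketched as a bitwise 2-of-3 vote; the register values are illustrative:<br />

```python
def vote(a, b, c):
    """Bitwise 2-of-3 majority, as used at the outputs of the
    triplicated logic blocks: each output bit takes the value held
    by at least two of the three copies."""
    return (a & b) | (a & c) | (b & c)

# Triplicated state registers; one copy is corrupted by an SEU.
state = [0b0110, 0b0110, 0b0010]  # third copy has a flipped bit
recovered = vote(*state)
assert recovered == 0b0110        # majority restores the good state

# Self-recovery: each copy reloads the voted state every clock, so
# the upset flip-flop is overwritten with the correct value.
state = [recovered] * 3
```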
Figure 12: SEU recovery architecture (an input feeds triplicated<br />
logic blocks and state machines a, b and c; internal voting gates<br />
recover the state, and an output voting gate drives the output)<br />
F. PILOT system test board<br />
A PILOT system test board has been developed. The<br />
board is used to test the functionality of the PILOT chip. The<br />
PILOT chip is directly glued and bonded onto the board. An<br />
FPGA [16] provides the test patterns to the PILOT. The<br />
FPGA contains functional models of the pixel control<br />
transmitter, the ten pixel chips and the link receiver. The<br />
outputs of the PILOT chip are stored in a 128k x 48 static<br />
memory bank and can also be read back by the FPGA for<br />
comparison with the model. Access to the board, the FPGA<br />
and the RAM bank is via a JTAG port. Fig. 13 shows the<br />
block diagram of the board. In a second phase, the test will<br />
include the PILOT chip, the GOL transmitter chip and the<br />
commercial Glink receiver chip [8]. Again, the output of the<br />
data chain can be read into the FPGA and the memory bank.<br />
In a third phase the pixel bus and its 10 pixel chips will be<br />
connected to the board. This feature will allow qualification<br />
of the entire data read-out chain.<br />
Figure 13: PILOT system test board (FPGA with JTAG access drives<br />
the pilot_in bus, clk40 and the optical clock/data lines of the pilot<br />
chip; outputs go to the GOL transmitter, Glink receiver and a<br />
128k x 48 RAM bank)<br />
IV. STATUS<br />
The GOL chip has already been tested and its performance<br />
met the specifications. Another prototype run was launched in<br />
order to enhance functionality for another application [6].<br />
The PILOT chip has been received from the foundry [11,<br />
15].<br />
Tests of a prototype pixel bus have been started [17].<br />
The link receiver [10, 11] and the router designs are in<br />
progress.<br />
V. CONCLUSION<br />
All chip developments have been conducted in a 0.25-micron<br />
CMOS technology using layout techniques designed to cope<br />
with the radiation dose. The on-detector PILOT system<br />
performs no data processing and requires no on-chip memory.<br />
The entire data stream can be moved off the detector using the<br />
encoder and serializer chip GOL. This has the advantage that<br />
the on-detector electronics is independent of the detector<br />
occupancy, and future upgrades can be performed on the<br />
FPGA-based electronics located in the control room. The<br />
transmission of data is performed using optical links. The<br />
number of electrical read-out components is minimized, as the<br />
available space for the physical implementation is very limited.<br />
VI. REFERENCES<br />
[1] M. Burns et al., The ALICE Silicon Pixel<br />
Detector Readout System, 6 th Workshop on<br />
Electronics for LHC Experiments,<br />
CERN/LHCC/2000-041, 25 October 2000, 105.<br />
[2] ALICE collaboration, Inner Tracking System<br />
Technical Design Report, CERN/LHCC 99 –12,<br />
June 18, 1999.<br />
[3] F. Faccio et al., Total dose and single event<br />
effects (SEE) in a 0.25 µm CMOS technology,<br />
LEB98, INFN Rome, 21-25 September 1998,<br />
CERN/LHCC/98-36, October 30, 1998, 105-113.<br />
[4] György Rubin, Pierre Vande Vyvre, The ALICE<br />
Detector Data Link Project, LEB98, INFN Rome,<br />
21-25 September 1998, CERN/LHCC/98-36.<br />
[5] K. Wyllie et al., A pixel readout chip for tracking<br />
at ALICE and particle identification at LHCb,<br />
Fifth workshop on electronics for LHC<br />
Experiments, CERN/LHCC/99-33, 29 October<br />
1999, 93<br />
[6] P. Moreira et al., G-Link and Gigabit Ethernet<br />
Compliant Serializer for LHC Data Transmission,<br />
NSS 2000;<br />
P. Moreira et al., A 1.25 Gbit/s Serializer for<br />
LHC Data and Trigger Optical links, Fifth<br />
workshop on electronics for LHC Experiments,<br />
CERN/LHCC/99-33, 29 October 1999, 194.<br />
[7] R. Walker et al., A Two-Chip 1.5 GBd Serial<br />
Link Interface, IEEE Journal of solid state<br />
circuits, Vol. 27, No. 12, December 1992,<br />
Agilent Technologies, Low Cost Gigabit Rate<br />
Transmit/ Receive Chip Set with TTL I/Os,<br />
HDMP-1022, HDMP-1024,
http://www.semiconductor.agilent.com, 5966-<br />
1183E(11/99).<br />
[8] Agilent Technologies, Agilent HDMP-1032,<br />
HDMP-1034, Transmitter/Receiver Chip Set,<br />
http://www.semiconductor.agilent.com, 5968-<br />
5909E(2/00).<br />
[9] T. Grassi, Development of the digital read-out<br />
system for the CERN Alice pixel detector,<br />
UNIVERSITY OF PADOVA – Department of<br />
Electronics and Computer Engineering (DEI),<br />
Doctoral Thesis, December 31, 1999.<br />
[10] A. Kluge, Raw data format of one SPD sector, to<br />
be submitted as ALICE note,<br />
http://home.cern.ch/akluge/work/alice/spd/spd.ht<br />
ml.<br />
[11] A. Kluge, Specifications of the on detector pixel<br />
PILOT system OPS, Design Review Document,<br />
to be submitted as ALICE note,<br />
http://home.cern.ch/akluge/work/alice/spd/spd.ht<br />
ml.<br />
[12] F. Meddi, Hardware implementation of the<br />
multiplicity and primary vertex triggers from the<br />
pixel detector, CERN, August 27, 2001, Draft, to<br />
be submitted as ALICE note.<br />
[13] V. Arbet-Engels et al., Analogue optical links of<br />
the CMS tracker readout system, Nucl. Instrum.<br />
Methods Phys. Res., A 409, pp 634-638, 1998.<br />
[14] Private communication with G. Stefanini.<br />
[15] A. Kluge, The ALICE pixel PILOT chip, Design<br />
Review document, March 15, 2001, to be<br />
submitted as ALICE note,<br />
http://home.cern.ch/akluge/work/alice/spd/spd.ht<br />
ml.<br />
[16] Xilinx, XC2S200-PQ208.<br />
[17] Morel M., The ALICE pixel detector bus,<br />
http://home.cern.ch/Morel/documents/pixel_carri<br />
er.pdf<br />
http://home.cern.ch/Morel/alice.htm
A Study of Thermal Cycling and Radiation Effects<br />
on Indium and Solder Bump Bonding<br />
S. Cihangir, J. A. Appel, D. Christian, G. Chiodoni, F. Reygadas and S. Kwan<br />
Fermi National Accelerator Laboratory *<br />
Batavia, IL 60510, USA<br />
C. Newsom<br />
The University of Iowa<br />
Iowa City, IA 52242, USA<br />
Email address of the corresponding author: selcuk@fnal.gov<br />
Abstract<br />
The BTeV hybrid pixel detector is constructed of readout<br />
chips and sensor arrays which are developed separately. The<br />
detector is assembled by flip-chip mating of the two parts.<br />
This method requires the availability of highly reliable,<br />
reasonably low cost fine-pitch flip-chip attachment<br />
technology.<br />
We have tested the quality of two bump-bonding<br />
technologies; indium bumps (by Advanced Interconnect<br />
Technology Ltd. (AIT) of Hong Kong) and fluxless solder<br />
bumps (by MCNC in North Carolina, USA). The results have<br />
been presented elsewhere[1]. In this paper we describe tests<br />
we performed to further evaluate these technologies. We<br />
subjected 15 indium bump-bonded and 15 fluxless solder<br />
bump-bonded dummy detectors through a thermal cycle and<br />
then a dose of radiation to observe the effects of cooling,<br />
heating and radiation on bump-bonds.<br />
I. TESTED COMPONENTS<br />
The dummy detectors were single flip-chip assemblies of<br />
daisy-chained bumps. Measured channels were chains of 28 to<br />
32 indium bumps at 30 micron pitch, or chains of 14 to 16<br />
solder bumps at 50 micron pitch. Figure 1<br />
shows a schematic layout of a portion (8 channels) of an AIT<br />
dummy detector. Each chain was connected to pads on each<br />
end over which we measured the resistance to characterize the<br />
channel. AIT detectors had 200 channels each, MCNC<br />
detectors had 195 channels each.<br />
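The per-channel characterization reduces to dividing the measured chain resistance by the number of bumps; the 45 Ohm reading below is an illustrative value, while the 1-2 Ohm per-bump range is the one quoted in the results:<br />

```python
def resistance_per_bump(channel_ohms, n_bumps):
    """Characterize a daisy-chained channel by its average per-bump
    resistance, as in the continuity measurements described above."""
    return channel_ohms / n_bumps

# Illustrative: an AIT channel of 30 indium bumps measuring 45 Ohm
# end to end gives 1.5 Ohm/bump, inside the 1-2 Ohm "good" range.
r = resistance_per_bump(45.0, 30)
print(r)  # 1.5
```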
II. THERMAL CYCLING AND RADIATION<br />
Each detector was measured first for continuity before<br />
thermal cycling and radiation. These measurements were<br />
compared to the electrical resistance measurements done<br />
about 12 months ago[1] to yield an understanding of “time<br />
effect” on the bump-bonds. Then they were cooled to -10 °C in<br />
a freezer in an airtight container for 144 hours. Subsequent<br />
measurements were compared to the measurements done<br />
before cooling to understand any “cooling effect” on the<br />
bump-bonds. This was followed by heating the detectors to<br />
100 °C in vacuum for 48 hours. The detectors were measured<br />
after heating and compared to the measurements done after<br />
cooling to yield an understanding of any “heating effect”.<br />
* Work supported by U. S. Department of Energy under contract no. DE-AC02-76CH0300.<br />
Finally, the dummy detectors were shipped to the University<br />
of Iowa in three shipments to be irradiated by a Cs-137 gamma<br />
source to 13 Mrad and measured again to understand any<br />
“radiation effect”. A randomly selected sample of detectors in<br />
each shipment was not irradiated, to give us an indication of<br />
whether the detectors were affected during shipment. On this<br />
basis, one of the shipments was eliminated from consideration.<br />
Figure 1: AIT Dummy Detector Bump Daisy Chain.<br />
III. RESULTS<br />
The effects we studied manifested themselves as large<br />
increases in resistance on the channels measured. These<br />
occurrences are described below.<br />
A. Thermal Cycling<br />
We categorize the problem occurrences after each step of<br />
the thermal cycling as follows:
1. Indium Bumps:<br />
Occurrence A: A good channel (1-2 Ohms average<br />
resistance per bump) develops a high resistance (5-10<br />
kOhms per bump) within 12 months.<br />
Occurrence B: A good channel develops a high<br />
resistance after cooling.<br />
Occurrence C: A good channel develops a high<br />
resistance after heating.<br />
In most cases the high resistance is accompanied by<br />
an average capacitance per bump of 2-10 picofarads.<br />
2. Solder Bumps:<br />
Occurrence A: A good channel (1-2 Ohms average<br />
resistance per bump) is broken (a resistance larger<br />
than 20 MOhms) within 12 months.<br />
Occurrence B: Cooling breaks a good channel.<br />
Occurrence C: Heating breaks a good channel.<br />
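The occurrence categories above amount to a simple classification rule on the before/after resistance of a channel. A hypothetical helper illustrating that rule (the 1-2 Ohm-per-bump "good" band is taken from the text; the function name and interface are invented):

```python
def classify_channel(r_before_ohm, r_after_ohm, n_bumps, good_per_bump_ohm=2.0):
    """Label one daisy-chain channel across a test step (sketch)."""
    good_limit = good_per_bump_ohm * n_bumps
    was_good = r_before_ohm <= good_limit
    is_good = r_after_ohm <= good_limit
    if was_good and not is_good:
        return "failed"   # counts as an Occurrence (A, B or C, depending
                          # on which step the failure followed)
    return "good" if is_good else "bad"   # "bad" = already problematic

# An indium chain of 30 bumps that jumps to ~8 kOhm per bump after cooling:
print(classify_channel(45.0, 8e3 * 30, 30))   # failed -> Occurrence B
```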
Table 1 shows the distribution of the occurrences in<br />
indium bump detectors. No entry means no problem. The last<br />
column indicates the number of channels having an open or<br />
high resistance problem before the thermal cycling. There is a<br />
correlation between the occurrences of new problems and the<br />
original existence of problems. For instance, detectors E11<br />
and E20, which originally had many problematic channels,<br />
developed more new problematic channels during the thermal<br />
cycling.<br />
Table 1: Indium Bump Problem Occurrence Distribution<br />
Det-ID   Occur-A   Occur-B   Occur-C   Orig-Bad<br />
E2<br />
E3<br />
E4                                     1<br />
E5<br />
E8                                     1<br />
E11      14        1                   37<br />
E13      1                             6<br />
E14      2<br />
E15<br />
E16      2                             4<br />
E20      20        2         8         74<br />
E22<br />
E23<br />
E24<br />
E25                                    1<br />
Table 2 shows the distribution of the occurrences in solder<br />
bump detectors. No entry means no problem. The last column<br />
indicates the number of channels having a problem before the<br />
thermal cycling. Here we also see a correlation between the<br />
occurrences of new problems and the existence of problems<br />
before the thermal cycling. For instance, detectors MCNC-24<br />
and MCNC-27 which originally had many problematic<br />
channels developed more new problematic channels over the<br />
thermal cycling.<br />
Table 2: Solder Bump Problem Occurrence Distribution<br />
Det-ID    Occur-A   Occur-B   Occur-C   Orig-Bad<br />
MCNC-10<br />
MCNC-11<br />
MCNC-12<br />
MCNC-18<br />
MCNC-19   7         1                   1<br />
MCNC-24   6         3         6         5<br />
MCNC-27   1         7                   12<br />
MCNC-44   1<br />
MCNC-50   1                             1<br />
MCNC-55   4                             2<br />
MCNC-59<br />
MCNC-75                                 1<br />
MCNC-76<br />
MCNC-81                                 3<br />
MCNC-86   4         1         5         3<br />
We calculated the occurrences per bump based on these<br />
observations and summarize the results in Table 3. The<br />
correlation mentioned above can be a reason to exclude<br />
detectors E11, E20, MCNC-24 and MCNC-27 from<br />
consideration for the effects of thermal cycling. If we do that,<br />
we then calculate the occurrence rates per bump as shown in<br />
Table 4.<br />
Table 3: Rate of Occurrences (per bump)<br />
Occurrence   Indium Bumps   Solder Bumps<br />
A            2.1 × 10⁻⁴     4.0 × 10⁻⁴<br />
B            2.2 × 10⁻⁵     1.4 × 10⁻⁴<br />
C            2.1 × 10⁻⁴     6.3 × 10⁻⁴<br />
Table 4: Rate of Occurrences (per bump) without Problematic<br />
Detectors<br />
Occurrence   Indium Bumps   Solder Bumps<br />
A            3.3 × 10⁻⁵     2.6 × 10⁻⁴<br />
B            2.2 × 10⁻⁵     4.6 × 10⁻⁵<br />
C            2.5 × 10⁻⁵     3.3 × 10⁻⁴<br />
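The rates in Tables 3 and 4 are occurrence counts normalised to the number of bumps tested. A sketch of that normalisation, with bump totals following from the component description in Section I (15 detectors per technology; AIT: 200 channels of about 30 indium bumps, MCNC: 195 channels of about 15 solder bumps); the occurrence count in the example is illustrative, not the paper's exact tally:

```python
def occurrences_per_bump(n_occurrences, n_detectors, channels, bumps_per_channel):
    """Occurrence rate normalised to the total number of bumps tested."""
    return n_occurrences / (n_detectors * channels * bumps_per_channel)

indium_bumps = 15 * 200 * 30    # ~90,000 indium bumps in the sample
solder_bumps = 15 * 195 * 15    # ~44,000 solder bumps in the sample

# Two cooling failures over the indium sample would give ~2.2e-5 per bump,
# the order of magnitude of Occurrence B in Table 3:
print(f"{occurrences_per_bump(2, 15, 200, 30):.1e}")
```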
B. Radiation<br />
On the indium bump detectors, after irradiation we observed<br />
that almost every first channel in each group of four channels<br />
(see Figure 1) was at high resistance. The group of four channels<br />
is a geometrical pattern in the construction of these detectors.<br />
That every first channel is affected, rather than a random<br />
distribution of channels, suggests this occurrence may not be a<br />
result of radiation but of some effect as yet unidentified. We<br />
will investigate it further with an x-ray study of a sample<br />
detector.<br />
On the solder bump detectors, we observed that the<br />
aluminium layers, both on the strips and on the pads, were<br />
extensively flaky and bubbly after irradiation. This may be<br />
a result of oxidation accelerated by radiation. We observed<br />
that 6 out of 2280 channels (each with 14 or 16 bumps) were<br />
broken. This corresponds to a rate per bump of 1.8 × 10⁻⁴ for<br />
the radiation effect. We should point out that these 6 failures<br />
might be due to radiation-induced breakage of the aluminium<br />
strips rather than breakage of the bump-bonds. We cannot<br />
distinguish between the two at present for geometrical<br />
and structural reasons, but will investigate in the future.<br />
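The quoted per-bump figure follows directly from the counts in this paragraph; a quick arithmetic check, assuming an average chain length of 15 bumps:

```python
# Cross-check of the quoted radiation-effect rate: 6 broken channels out of
# 2280, each channel a chain of 14 or 16 (on average ~15) solder bumps.
broken_channels = 6
bumps_per_channel = 15                     # average of 14 and 16
total_bumps = 2280 * bumps_per_channel     # 34,200 bumps exposed
rate = broken_channels / total_bumps
print(f"{rate:.1e}")                       # 1.8e-04, as quoted in the text
```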
IV. CONCLUSIONS<br />
The results of the thermal cycling and radiation tests support<br />
the feasibility of both bump-bonding technologies for hybrid<br />
pixel detectors: the bonds withstand extreme conditions.<br />
Heating to 100 °C is, however, more destructive than cooling to<br />
-10 °C, while the radiation effect is minimal. There is a<br />
correlation between the occurrence of problems due to these<br />
effects and the existence of problems when the detectors were<br />
first assembled. The quoted rates are probably inflated, since<br />
some failures are caused by damage to the strips and pads from<br />
repeated probing and from radiation.<br />
V. REFERENCES<br />
[1] S. Cihangir and S. Kwan, talk presented at the 3rd<br />
International Conference on Radiation Effects on<br />
Semiconductor Materials, Detectors and Devices, Florence,<br />
Italy (June 28-30, 2000), to appear in Nuclear Instruments and<br />
Methods A.
Beamtests of Prototype ATLAS SCT Modules at CERN H8 in 2000<br />
A. Barr (C), A.A. Carter (Q), J.R. Carter (C), Z. Dolezal (P,$), J.C. Hill (C), T. Horazdovsky (U), P. Kodys (F,P), L. Eklund (N),<br />
G. Llosa (V), G.F. Moorhead (M), P.W. Phillips (R), P. Reznicek (P), A. Robson (R), I. Stekl (U), Y. Unno (K), V. Vorobel (P),<br />
M. Vos (T,V)<br />
Abstract<br />
ATLAS Semiconductor Tracker (SCT) prototype modules<br />
equipped with ABCD2T chips were tested with 180 GeV pion<br />
beams at the CERN SPS. Since a binary readout method is used,<br />
many threshold scans were taken, at a variety of incidence<br />
angles, magnetic field levels and detector bias voltages. Results<br />
of the analysis, showing module efficiencies, noise occupancies,<br />
spatial resolution and median charge, are presented. Several<br />
modules have been built using detectors irradiated to the full<br />
ATLAS dose of 3×10¹⁴ p/cm² and one module was irradiated<br />
as a complete module. The effects of irradiation on detector<br />
and ASIC performance are shown.<br />
I. INTRODUCTION<br />
Beam tests provide an important opportunity to study how<br />
well detector systems fulfil what they have been designed for:<br />
detecting particles. Compared to radioactive source<br />
measurements, beam tests simulate the realistic environment<br />
much better, with many modules working together,<br />
connected via long cables, etc.<br />
In June and August 2000 two types of silicon microstrip<br />
modules, barrel and forward have been tested with the pion<br />
beams of 180 GeV/c at the CERN H8 SPS beamline [1].<br />
The barrel modules were equipped with nearly square silicon<br />
microstrip sensors, 64 mm long and 63.6 mm wide, with<br />
parallel strips at a pitch of 80 micrometres. A module had<br />
one pair of sensors glued on the top and another pair glued<br />
on the bottom side of a baseboard, the two sides being<br />
angled at 40 mrad to give a stereo view. The strip length of a<br />
module was 12 cm, formed by connecting the pair of sensors.<br />
ATLAS SCT Collaboration<br />
C Cavendish Laboratory, Cambridge University, UK<br />
F University of Freiburg, Germany<br />
K KEK, Tsukuba, Japan<br />
M School of Physics, University of Melbourne, Australia<br />
N CERN, Geneva, Switzerland<br />
P Charles University, Prague, Czech Republic<br />
R Rutherford Appleton Laboratory, Didcot, UK<br />
T Universiteit Twente, The Netherlands<br />
U Czech Technical University in Prague, Prague, Czech Republic<br />
V IFIC - Universitat de Valencia/CSIC, Valencia, Spain<br />
$ corresponding author, e-mail: Zdenek.Dolezal@mff.cuni.cz<br />
forward modules (see Figure 1) were functionally very<br />
similar, but they had quite different layout and hybrid<br />
technology. Their strip length was similar, but they were<br />
wedge-shaped with a fan geometry of strips with an average<br />
strip pitch of about 80 micrometers.<br />
Strips were connected to the readout electronics, near the<br />
middle of the strips in the barrel module and at the end of the<br />
strips in the forward modules. A module was equipped with<br />
12 readout chips (prototype ABCD2T [2]), 6 on the top and 6<br />
on the bottom side of the module. Chips were glued on<br />
specially-designed hybrids. Several modules have been built<br />
using detectors irradiated to the full ATLAS dose of 3×10¹⁴<br />
p/cm² with 24 GeV protons at the CERN proton synchrotron<br />
and one module was irradiated as a complete module.<br />
Figure 1: Expanded view of the forward module<br />
The ABCD chip utilises on-chip discrimination of the<br />
signal pulses at each silicon detector strip, producing a binary<br />
output packet containing hit information sampled at the
40 MHz clock frequency and corresponding in time to the<br />
beam trigger. The threshold for discrimination is set on a<br />
chip-by-chip basis using a previously determined calibration<br />
between the threshold (in mV) and the corresponding test<br />
input charge amplitude (in fC) which results in 50%<br />
occupancy.<br />
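The threshold-setting step described above amounts to applying a per-chip linear calibration from charge to millivolts. A sketch with invented calibration constants (these are illustrative numbers, not measured ABCD2T values):

```python
def threshold_mv(charge_fc, gain_mv_per_fc, offset_mv):
    """Map a target input charge to the discriminator threshold in mV
    using a previously measured linear per-chip calibration (sketch)."""
    return offset_mv + gain_mv_per_fc * charge_fc

# Setting the nominal 1 fC operating point on a chip hypothetically
# calibrated at 50 mV/fC with a 20 mV offset:
print(threshold_mv(1.0, 50.0, 20.0))   # 70.0 mV
```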
To obtain information on pulse heights, threshold scans<br />
must be carried out. Our measurement program thus consisted<br />
of multiple threshold scans, each at a certain combination of<br />
settings of variables of interest which included:<br />
• Detector bias voltage, generally covering the range<br />
up to expected full charge collection, about +250V for<br />
unirradiated detectors and +500V for irradiated detectors;<br />
• Magnetic field, i.e. the state on or off of the 1.56 T<br />
magnetic field;<br />
• Beam incidence angle, the modules being rotated<br />
about an axis parallel to the detector strips reflecting the<br />
ATLAS SCT barrel geometry with respect to the magnetic<br />
field direction.<br />
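The measurement program above can be thought of as a Cartesian product of the settings lists, one set of threshold scans per combination. A bookkeeping sketch with illustrative values (the actual settings varied by module and run period):

```python
import itertools

# One set of threshold scans per combination of the run variables above
# (the value lists here are illustrative, not the exact beamtest settings).
bias_voltages_v = [80, 120, 160, 250, 350, 500]
field_tesla = [0.0, 1.56]
angles_deg = [-14, -7, 0, 8, 16]

scan_points = list(itertools.product(bias_voltages_v, field_tesla, angles_deg))
print(len(scan_points))   # 6 x 2 x 5 = 60 settings before threshold stepping
```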
The readout was triggered with an external scintillator<br />
system. For each trigger, we record binary information from<br />
the modules under test, from anchor planes included for<br />
reference and as control samples, and analogue information<br />
from the 4 high-spatial-resolution silicon telescopes with<br />
analogue readout. In addition, a 0.2 ns resolution TDC is used<br />
to record the timing of the (randomly arriving) beam trigger<br />
relative to the 40 MHz system sampling clock so that pulse<br />
shape and timing characteristics can be recovered.<br />
A more detailed description of the measurement procedure<br />
and the results can be found in [3] and [4].<br />
II. BEAMTEST SETUP<br />
A sketch of the beamline setup in August 2000 is shown in<br />
Figure 2. Ten SCT modules are mounted one after the other in<br />
a cooled, light-tight environment chamber on a mechanical<br />
system which allows each to be rotated about a vertical axis.<br />
Figure 2: Sketch of the SCT experimental setup at H8 during August<br />
2000. *Barrel modules with irradiated detectors. **Fully irradiated<br />
module, RLT4. Module RLK6 was used for reference, with fixed<br />
threshold and bias.<br />
This chamber can be moved into the 1.56 T Morpurgo<br />
superconducting dipole magnet. The field of this magnet is<br />
highly uniform over the volume of the SCT modules, and is<br />
directed vertically downward, parallel to the detector strips in<br />
a configuration which mimics the design arrangement of the<br />
SCT barrel modules with respect to particle trajectory, field<br />
direction and detector.<br />
Outside the environment chamber there are four telescope<br />
modules and two anchor planes, positioned as shown in<br />
Figure 2. The telescopes have both X and Y sensors of 50 μm<br />
pitch, coupled to analogue readout.<br />
In addition to the tracking telescopes we had two anchor<br />
planes constructed from SCT barrel detectors and hybrids<br />
with four ABCD2NT chips. These provide some additional<br />
external track information with timing characteristics similar<br />
to the modules under test.<br />
The DAQ used for the beamtests was an extension of that<br />
generally used for SCT module testing, a system of VME<br />
units for control, readout and low-voltage power. The SCT<br />
DAQ units include the CLOAC [5], SLOG [6] and<br />
MuSTARD [7]. Low-voltage power and slow-control signals<br />
came from SCTLV2 [8] low-voltage supplies, while high<br />
voltage for detector bias came from linear supplies and from a<br />
prototype CANbus-controlled SCT high-voltage power<br />
supply [9]. The DAQ software [10] is an extension of a<br />
module testing system [11], a collection of C++ class libraries<br />
used in conjunction with the ROOT [12] package running on<br />
a PC under Windows NT 4.0.<br />
III. MODULES<br />
A. Overview of modules under test<br />
Six barrel and three forward modules were tested in the beam.<br />
Their positions and names are given in Fig. 2. Barrel modules used<br />
hybrids of two different technologies: thin film [13] and<br />
copper/polyimide [14]. Forward modules were equipped with<br />
Kapton-Carbon-Kapton hybrids [15]. They used strip<br />
detectors of 3 different thicknesses (285, 300 and 325 μm)<br />
from several vendors. Modules K3113, RLT9 and RLT10<br />
have been built using detectors irradiated to the full ATLAS<br />
dose of 3×10 14 p/cm 2 with 24 GeV protons at the CERN<br />
proton synchrotron and module RLT4 was irradiated as a<br />
complete module.<br />
B. Calibration<br />
We performed a number of in-situ calibration<br />
measurements and other studies of all modules prior to,<br />
between and after the beamtests, to verify or correct the<br />
module characterisations using internally generated<br />
test charges across the full charge range of interest,<br />
as well as to identify unusable channels. The latest<br />
version of the ABCD chip has an additional four-bit threshold<br />
trim adjustment for each strip, which must be separately<br />
optimised. All unusable channels are recorded and later<br />
masked.
IV. MEASUREMENTS<br />
A total of over 1000 runs of 5000 events each were taken<br />
at 5 incidence angles, 2 magnetic field levels and 6 detector<br />
bias voltages. At each combination of these settings, a set of<br />
threshold scans was performed with 12 charge settings<br />
ranging from 0.9 to 4.5 fC, chosen to cover in some detail the<br />
design operating region near 1.0 fC as well as to allow a<br />
study of the fall-off at higher thresholds and hence an accurate<br />
determination of the median charge collected.<br />
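The median charge mentioned above is the threshold at which the efficiency s-curve falls through 50%. A minimal stand-in for the usual complementary-error-function fit, using linear interpolation between scan points (the scan values below are invented):

```python
def median_charge(thresholds_fc, efficiencies):
    """Threshold at which the efficiency s-curve crosses 50%, found by
    linear interpolation between the two bracketing scan points (sketch)."""
    points = list(zip(thresholds_fc, efficiencies))
    for (t0, e0), (t1, e1) in zip(points, points[1:]):
        if e0 >= 0.5 > e1:
            return t0 + (e0 - 0.5) * (t1 - t0) / (e0 - e1)
    raise ValueError("scan does not cross 50% efficiency")

# Illustrative scan (thresholds in fC, efficiency falling with threshold):
print(median_charge([0.9, 1.5, 2.5, 3.5, 4.5],
                    [0.99, 0.95, 0.80, 0.40, 0.10]))   # 3.25 fC
```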
These data are complemented by noise runs (taken in-situ,<br />
but with no beam) and local calibration runs.<br />
V. DATA ANALYSIS<br />
In the course of the data analysis, tracks are reconstructed<br />
from the telescope signals. Only events with a single<br />
reconstructed track are accepted, to avoid ambiguities.<br />
Binary hits in the module channels are classified into<br />
‘efficient hits’ and ‘noise hits’ according to their proximity to<br />
the extrapolated track position and timing. A hit is considered<br />
‘efficient’ if found within 100 μm of the track intersection with<br />
the module plane. Hits found more than 1 mm from the track are<br />
taken as noise hits. Bad channels known from lab and in-situ<br />
calibrations and their neighbours are excluded from the<br />
analysis. The analysis requires reference (anchor) planes to<br />
be efficient for all events. Furthermore, cuts on χ 2 , dX/dZ and<br />
dY/dZ are imposed.<br />
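The hit classification above reduces to a cut on the track-hit residual. A sketch (one-dimensional positions and the example values are invented; the real analysis also applies the timing and χ² cuts described in the text):

```python
def classify_hit(hit_pos_um, track_pos_um):
    """Binary-hit classification (sketch): 'efficient' within 100 um of
    the extrapolated track intersection, 'noise' beyond 1 mm, otherwise
    ignored. Positions are one-dimensional, in micrometres."""
    residual = abs(hit_pos_um - track_pos_um)
    if residual <= 100.0:
        return "efficient"
    if residual > 1000.0:
        return "noise"
    return "ignored"

print(classify_hit(1040.0, 1000.0))   # efficient
print(classify_hit(5000.0, 1000.0))   # noise
```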
In order to monitor the efficiency dependence on the<br />
timing of each event, the trigger phase with respect to the 40<br />
MHz system clock is measured using a TDC. Figure 3 shows<br />
the efficiency dependence on the charge deposition moment,<br />
as measured by the TDC. As the modules were read out in<br />
ANYHIT compression, where three time bin samples around<br />
the central time are recorded, the original 25 ns interval (the<br />
shaded area in the figure) can be extended on both sides using<br />
the full hit pattern information. As expected, the efficiency is<br />
seen to be a strongly varying function of the charge deposition<br />
moment. As the discriminators in the ABCD were operated<br />
with Edge Sensing OFF ("level" mode), the length of the<br />
interval in which the modules are efficient depends strongly<br />
on the discrimination threshold.<br />
Figure 3: Efficiency dependence on trigger phase for the three<br />
recorded clock cycles (module K3112 at perpendicular incidence,<br />
1.56 T, 250 V bias; thresholds from 1.0 to 4.5 fC).<br />
Analysis 1 reported here<br />
selects a rather broad 12 ns trigger phase window, attempting<br />
to minimise the effect on the efficiency while retaining as<br />
much statistics as possible.<br />
Three independent data analyses differing mainly in<br />
treating the time bin and TDC information were performed,<br />
yielding results which are largely identical [16], [17], [18].<br />
Several important benchmark values are then extracted:<br />
efficiency, noise occupancy and spatial resolution.<br />
VI. RESULTS<br />
1) S-curves<br />
Figures 4 and 5 show the efficiency and noise results as a<br />
function of threshold for unirradiated (K3112) and fully<br />
irradiated (RLT4) barrel modules, respectively, for several<br />
bias voltages. These data correspond to normal incidence in<br />
1.56 T magnetic field.<br />
Figure 4: S-curves and noise occupancy in a 1.56 T field, normal<br />
incidence for unirradiated barrel module K3112 at all detector bias<br />
values studied.<br />
In the non-irradiated modules, the efficiency is seen to be<br />
nearly independent of bias voltage down to around 160 Volts.<br />
Only at very low voltages (80 V) does the efficiency decay<br />
significantly. The modules with irradiated detectors, on the<br />
other hand, show a very strong dependence of efficiency on<br />
the bias voltage. At 150 Volts virtually no signal is collected.<br />
The signal increases slowly with bias voltage all the way up<br />
to 500 Volts. The noise occupancies displayed in the same<br />
figure were determined using off-track hits.
Figure 5: S-curves and noise occupancy in a 1.56 T field, normal<br />
incidence for fully irradiated barrel module RLT4 at all detector bias<br />
values studied.<br />
2) Efficiency at 1 fC<br />
A threshold of 1 fC is the nominal value for ATLAS<br />
running; efficiency and noise occupancy at this point are<br />
therefore of particular interest. Figure 6 shows the dependence of<br />
efficiency on bias voltage averaged over all modules and all<br />
incidence angles.<br />
Figure 6: Efficiency at 1 fC without field (left) and in a 1.56 T field<br />
(right) for non-irradiated (filled circles) and irradiated detectors<br />
(open circles)<br />
Modules with non-irradiated detectors show only a marginal<br />
decay of the efficiency at the lowest bias voltages, although<br />
the charge loss is already considerable at 120 V. This result is<br />
compatible with the 99% benchmark.<br />
The modules with irradiated detectors, as expected, show a<br />
very pronounced dependence on bias voltage. On average,<br />
98% efficiency is reached above 300 V.<br />
For other than perpendicular incidence, a net reduction of<br />
the collected charge is observed; however, the efficiency at 1<br />
fC at relatively high bias voltage is nearly unaffected over the<br />
angle range from 16° to -14°.<br />
3) Noise occupancy<br />
Noise occupancies at the 1 fC nominal operating point<br />
derived from the off-track hits do not change significantly<br />
with any of the scanned variables. The table below gives a<br />
global noise number, valid for all incidence angles, bias<br />
voltages and magnetic fields, at 1 fC threshold, and also the<br />
threshold at the specification noise level of 5×10⁻⁴.<br />
From the table it follows that the unirradiated barrel modules<br />
show no measurable noise occupancy, and the irradiated barrel<br />
modules are still within or very close to specification.<br />
The high noise of the forward modules has been the subject<br />
of extensive further investigation. Several effects have been<br />
found which can explain a large part of the noise increase.<br />
The forward modules were run at substantially higher hybrid<br />
and chip temperatures. This is known to have a strong influence<br />
on the noise, but also on the threshold and calibration DACs,<br />
so the threshold scale of the forward modules was likely<br />
wrong. Furthermore, a large part of the noise can be attributed<br />
to common-mode noise. This effect has been addressed in later<br />
designs.<br />
Table 1: Noise occupancy at 1 fC and the lowest threshold setting which satisfies the SCT noise occupancy specification of 5×10⁻⁴.<br />
module K3112 RLT5 SCAND1 FR153 FR152 K3113* RLT9* RLT10* RLT4**<br />
Noise at 1 fC
K3112 front residuals @ 1 fC: Gaussian fit sigma = 23.92 ± 0.15 μm<br />
X residual: sigma = 19.4 ± 0.16 μm<br />
K3112 back residuals @ 1 fC: sigma = 23.58 ± 0.14 μm<br />
Figure 7: Spatial resolution in u,v and X,Y of K3112, for<br />
perpendicular incidence (at 250 Volt detector bias and a 1.56 T<br />
magnetic field).<br />
VII. CONCLUSIONS<br />
Beamtests of a substantial sample of SCT modules of<br />
both barrel and forward types representing near-final<br />
designs with the ABCD2T readout chip were successfully<br />
performed covering a wide range of irradiation states,<br />
incidence angles, magnetic field states, and detector bias<br />
voltages representative of expected operating conditions.<br />
The modules met, or nearly met, most of the relevant<br />
specifications of the Inner Detector TDR. An exception<br />
was the number of bad channels which was mostly due to<br />
the now-understood and addressed ABCD2T trim DAC<br />
problem. The unirradiated barrel module prototypes tested<br />
in the June and August beams satisfy the noise occupancy<br />
specification (5×10⁻⁴) down to 0.9 fC threshold. The<br />
efficiency at 1 fC is around (99±1)% irrespective of<br />
magnetic field and incidence angle. Only at the lowest bias<br />
voltages does ballistic deficit of the shaper lead to<br />
efficiency loss.<br />
The forward modules were noisier than expected<br />
compared to many laboratory measurements. Further<br />
investigations attributed this fact to several effects<br />
(substantially higher temperature leading to incorrect<br />
calibration, common mode noise susceptibility, etc.) This is<br />
being addressed in later designs. The efficiency at 1 fC is<br />
similar to the barrel modules.<br />
Three modules built with detectors irradiated to 3×10¹⁴<br />
p/cm² and one complete module irradiated to the same<br />
fluence were tested. The modules with irradiated detectors<br />
had higher noise, but still satisfied the ATLAS noise<br />
occupancy specification at 1 fC. The fully irradiated<br />
module required a threshold of 1.2 fC to meet the noise<br />
specification.<br />
Two out of three modules built with irradiated detectors<br />
reach 98% efficiency at a bias voltage of around 350 V.<br />
The slightly lower efficiency of the other, K3113, is not<br />
fully understood but is probably due to an overestimation<br />
of the calibration response, as indicated by the consistently<br />
low median charge, and noise and efficiency at 1fC. Batch<br />
to batch uncertainties in the ABCD2T calibration<br />
capacitors of 10 to 20%, as well as temperature dependence<br />
of the gain and calibration charge amplitude all contribute<br />
to a systematic uncertainty in the response sufficient to<br />
account for this discrepancy. The high efficiency of the fully<br />
irradiated module, RLT4, is remarkable. This may<br />
be due to the thicker detectors or the altered timing<br />
characteristics of the front-end ABCD electronics after<br />
irradiation. No charge collection plateau is reached in a bias<br />
voltage scan to 500V.<br />
VIII. REFERENCES<br />
[1] http://atlasinfo.cern.ch/Atlas/GROUPS/GENERAL/<br />
TESTBEAM<br />
[2] "Project Specification: ABCD2T/ABCD2NT<br />
ASIC", http://chipinfo.home.cern.ch/chipinfo<br />
[3] A. Barr et al.,<br />
http://atlas.web.cern.ch/Atlas/GROUPS/INNER_DETECT<br />
OR/SCT/tbAug2000/note.pdf<br />
[4] SCT Testbeam web site, follow links for June and<br />
August 2000,<br />
http://atlasinfo.cern.ch/Atlas/GROUPS/INNER_DETECT<br />
OR/SCT/testbeam<br />
[5] M.Postranecky et al., "CLOAC Clock and Control<br />
Module", http://www.hep.ucl.ac.uk/atlas/sct/#CLOAC<br />
[6] M.Morrissey, "SLOG Slow command generator",<br />
http://hepwww.rl.ac.uk/atlas-sct/mm/Slog/<br />
[7] M.Morrissey & M.Goodrick, "MuSTARD",<br />
http://hepwww.rl.ac.uk/atlas-sct/mm/Mustard/<br />
[8] J.Stastny et al., "ATLAS SCT LV Power Supplies",<br />
Prague AS, http://wwwhep.fzu.cz/Atlas/WorkingGroups/Projects/MSGC.html<br />
[9] http://wwwhep.fzu.cz/Atlas/WorkingGroups/Projects/MSGC/hvspec_0<br />
1feb26.pdf<br />
[10] G.Moorhead, "TBDAQ Testbeam DAQ",<br />
http://home.cern.ch/s/sct/public/sctdaq/www/tb.html<br />
[11] J.C.Hill, G.F.Moorhead, P.W.Phillips, "SCTDAQ<br />
Module test DAQ",<br />
http://home.cern.ch/s/sct/public/sctdaq/sctdaq.html<br />
[12] http://root.cern.ch/<br />
[13] A.A.Carter,<br />
http://www.fys.uio.no/elab/oled/abcd2t.htm<br />
[14] Y.Unno, "High-density, low-mass hybrid and<br />
associated technology", LEB2000<br />
[15] Forward Kapton Hybrid version KIII,<br />
http://runt1.physik.unifreiburg.de/feld/sct/hybrid/index.htm<br />
[16] M.Vos et al.,<br />
http://ific.uv.es/~vos/tb2000/aug2000/aug2000.html<br />
[17] P. Kodys et al., http://wwwrunge.physik.unifreiburg.de/kodys/TBAugust/TBAugust.html<br />
[18] Y.Unno et al.,<br />
http://atlas.kek.jp/managers/silicon/beamtests.html
Design and Test of a DMILL<br />
Module Controller Chip for the ATLAS Pixel Detector.<br />
Abstract<br />
The main building block of the Atlas Pixel Detector is a<br />
module made of a silicon detector bump-bonded to 16<br />
analog Front-End chips. All FE's are connected in a star<br />
topology to the Module Controller Chip (MCC), with a data<br />
push architecture. The MCC performs system configuration,<br />
event building, and control and timing distribution. The<br />
electronics has to tolerate radiation fluences of up to 10¹⁵ cm⁻²<br />
1 MeV equivalent neutrons during the first three years of<br />
operation. This talk describes the first implementations of the<br />
MCC in DMILL (a 0.8 µm rad-hard technology). Results on<br />
tested dice and irradiation results for these devices at the CERN<br />
PS, up to 30 Mrad, will be presented. 8 chips were operated<br />
during irradiation, which allowed us to measure SEU effects.<br />
I. INTRODUCTION<br />
The ATLAS pixel detector [2] consists of 3 barrel layers<br />
and of 3 forward and 3 backward disks. The barrels are at<br />
5.05, 8.85 and 12.25 cm from the beam, with a tilt angle of<br />
20°; a total dose of 30 Mrad is expected for the innermost layer<br />
(B-Layer) after 3 years of operation. Each barrel is<br />
organized into staves and each disk into sectors; both are in<br />
turn composed of modules. 1744 identical modules will<br />
be used in the whole detector.<br />
The Module Controller Chip (MCC) is an ASIC, which<br />
provides complete control of the Atlas Pixel Detector module.<br />
Besides the MCC the module hosts 16 FE chips bump-bonded<br />
to a silicon detector [3].<br />
The talk is divided into three sections. In the first section we<br />
describe the requirements that the MCC has to fulfil. The main<br />
feature of this device is its ability to perform event building,<br />
which provides some data compression on the data coming<br />
from the 16 Front-End chips read out in parallel. The output<br />
data stream is transmitted on one or two serial streams,<br />
allowing data transfer rates of up to 160 Mbit/s. The system<br />
clock frequency is 40 MHz. First a prototype and then a full<br />
version of the chip were designed and tested. The second section<br />
describes the test set-up developed in Genova, which allows a<br />
comparison between the actual chip, hosted in a VME board,<br />
R. Beccherle<br />
INFN Genova, Via Dodecaneso 33, Italy<br />
Roberto.Beccherle@ge.infn.it<br />
on behalf of the<br />
Atlas Pixel Collaboration [1]<br />
and a C++ model simulating the chip. Test results of both<br />
chips will be presented in the third section. We focus on the<br />
irradiation tests, which allowed us to operate the chip while<br />
irradiating it, and this allowed performing detailed SEU<br />
studies.<br />
II. SYSTEM DESCRIPTION<br />
16 analog FE chips and a digital Module Controller Chip<br />
make up the electronics of the Atlas Pixel Detector module.<br />
The interconnections have been kept very simple, and all<br />
connections that are active during data taking use low<br />
voltage differential signalling (LVDS) standards to reduce<br />
EMI and balance current flows. The data link topology from<br />
the 16 FE’s to the MCC in a module is a star topology using<br />
unidirectional serial links. This topology has been chosen to<br />
improve the tolerance of the whole system to individual FE<br />
failure, as well as to improve the bandwidth by operating the<br />
serial links in parallel.<br />
The FE chip electronics is organized in 18 columns and 160<br />
rows of 50 × 400 µm² pixel cells. These chips provide 7-bit<br />
charge information using a time-over-threshold front-end<br />
design. The time-over-threshold information is digitised using the<br />
40 MHz beam crossing clock. Event data coming from the FE<br />
chips are already formatted and are provided as soon as<br />
possible using a data-push architecture. This is done by the<br />
digital End of Column logic built into each chip. The MCC has to<br />
perform event building, collecting events from all 16 FE chips.<br />
In order to be able to perform this task, data from each link<br />
are buffered into a 21 bit wide, 32 word deep FIFO. All FIFOs are full<br />
custom blocks, while the remaining part of the chip was built<br />
using the DMILL standard cell library. As soon as one<br />
complete event has been received from the FE chips and<br />
stored in the FIFO’s, the MCC starts event building.<br />
Formatted data are sent to the output using one or two serial<br />
streams that can provide a total data transfer up to 160 Mbit/s.<br />
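The time-over-threshold scheme described above can be illustrated with a toy model (the parameter names and values below are ours for illustration, not ATLAS front-end parameters): the ToT is the number of 25 ns beam crossings (40 MHz clock) during which the discharging signal stays above threshold, saturating at the 7-bit maximum.<br />

```python
def time_over_threshold(charge_ke, threshold_ke=3.0, slope_ke_per_bc=2.0):
    """Toy ToT model: number of 25 ns beam crossings (40 MHz clock)
    during which a linearly discharging signal stays above threshold.
    All parameter names and values are illustrative assumptions."""
    if charge_ke <= threshold_ke:
        return 0  # below threshold: no hit registered
    bc = int((charge_ke - threshold_ke) / slope_ke_per_bc) + 1
    return min(bc, 127)  # 7-bit counter saturates at 127
```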
Besides event building, the MCC also has to perform<br />
system configuration via a serial protocol, as it is the only<br />
part of the module electronics able to communicate with the<br />
Read Out Driver (ROD). In order to perform this task a serial<br />
command decoder has been implemented. This is a crucial<br />
part of the chip, and particular effort has been put into its<br />
realization in order to ensure a high tolerance to single bit<br />
flips in the data being received. The protocol is divided in two<br />
main classes, Data Taking mode and Configuration mode, and<br />
it is not possible, by construction, to mix these two states, even<br />
in the presence of a single bit flip. This is especially important in case of a<br />
noisy environment and for the B-layer, where high SEU<br />
induced effects are expected. Another feature of the command<br />
decoder block is the ability to reset its circuitry without the<br />
need of a power up reset or a hard reset pin. Each FE chip is<br />
independently accessible in order to be able to configure it<br />
through a set of dedicated commands. MCC configuration<br />
information is stored in a Register Bank.<br />
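The robustness of the protocol against single bit flips can be understood as a codeword-distance property. A minimal sketch (the codewords below are hypothetical, not the actual MCC protocol words): if every Data Taking word differs from every Configuration word in at least two bit positions, a single flip can never turn a word of one class into a word of the other.<br />

```python
def hamming(a, b):
    """Number of differing bit positions between two codewords."""
    return bin(a ^ b).count("1")

# Hypothetical 5-bit codewords; the real MCC protocol words differ.
DATA_TAKING = {0b10110, 0b01101}
CONFIG = {0b00011, 0b11000}

# Minimum distance between the two classes: a value >= 2 guarantees
# that a single bit flip cannot move a word from one class to the other.
min_dist = min(hamming(a, b) for a in DATA_TAKING for b in CONFIG)
```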
The chip also has to ensure synchronization between all<br />
events and to distribute Trigger commands to all FEs.<br />
The Trigger and Timing Circuitry (TTC) block performs this task.<br />
In case of any error in event reconstruction, due, for example,<br />
to a data overflow in the internal FIFO’s, synchronization is<br />
guaranteed and error words are added to the data stream in<br />
order to correctly flag the failure.<br />
In addition to the main functions, a self-testing<br />
capability of all internal structures has been added to the chip.<br />
This test capability is a key feature for being able to<br />
build, test and operate with reliability a complex system such<br />
as the ATLAS Pixel Detector module. As an example, the<br />
implemented test structures allow testing the correctness<br />
of the event builder, writing FE events into the internal storage<br />
structures as if they came directly from the FE chips.<br />
This is done using the serial data protocol. In order to verify<br />
the correctness of event building, simulated events can be<br />
downloaded to the MCC first and are then reconstructed and<br />
formatted by the event builder and transmitted back to the<br />
off-detector electronics (ROD).<br />
After a first chip, built using AMS 0.8 µm technology,<br />
which was successfully used to build complete modules and<br />
used in test beam, we decided to implement a rad-hard version<br />
of the MCC [4]. The chosen technology was DMILL, a<br />
0.8 µm rad-hard technology, for its similarities, in terms of<br />
design rules, with the AMS one.<br />
We developed two successive versions of the MCC called<br />
MCC-D0 and MCC-D2. The first chip is a simple test chip<br />
containing only one full custom FIFO, the complete command<br />
decoder and the register bank. The main goal of this chip was<br />
to understand possible technology issues and to perform an<br />
irradiation at the CERN PS proton irradiation facility. As<br />
described later in the paper the chip was successfully<br />
irradiated up to 30 Mrad. The second chip developed using<br />
this technology (MCC-D2) is a full version of the module<br />
controller chip and is therefore suited to building complete<br />
rad-hard modules.<br />
III. TEST SETUP<br />
The main difficulty in testing a chip like the MCC is by<br />
far the ability to fully debug, during the chip design phase,<br />
and then to test the correctness of the event building algorithm<br />
and all its possible exceptions. Also the verification of the<br />
correctness of the Trigger distribution to the FE chips without<br />
loss of synchronization presents many challenges. The MCC<br />
has to distribute LEV1 triggers to the FE’s. Up to 16 Trigger<br />
commands can be received by the MCC. Each time the MCC<br />
receives a Trigger command a counter is incremented. As<br />
soon as all FE’s have sent a complete event the counter<br />
keeping track of the received triggers is decremented. If more<br />
than 16 triggers have to be processed the trigger command is<br />
dropped and an empty event is generated by the MCC in order<br />
to keep up with event synchronization. The overflow<br />
mechanism thus depends strongly on all 16 concurrent FE data<br />
streams and is very hard to recreate both in the<br />
laboratory and in the Verilog simulation used to validate the<br />
chip. To overcome these challenges we decided to<br />
develop a VME based test board (MCC exerciser) that allows<br />
us to completely simulate the MCC input data. The board is<br />
controlled by SimPix, a timing-oriented C++ simulation<br />
package that is part of a larger simulation environment<br />
developed in Genova.<br />
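The pending-trigger mechanism described above can be sketched as follows (a simplification with names of our choosing, not the MCC's internal signals): the counter is incremented on each trigger, decremented when an event has been built, and a trigger arriving with 16 already pending is dropped and replaced by an empty event so that the event count stays in step.<br />

```python
class TriggerBookkeeper:
    """Sketch of the MCC pending-trigger logic described in the text
    (our simplification; the real MCC internals are not public here).
    Up to 16 triggers may be pending; extra triggers are dropped and
    replaced by an empty event to preserve event synchronization."""

    MAX_PENDING = 16

    def __init__(self):
        self.pending = 0
        self.output = []  # event stream sent towards the ROD

    def on_trigger(self):
        if self.pending >= self.MAX_PENDING:
            self.output.append("empty")  # dropped trigger, keep count in step
        else:
            self.pending += 1

    def on_event_built(self):
        # called once all 16 FE streams have delivered a complete event
        if self.pending > 0:
            self.pending -= 1
            self.output.append("event")
```

Sending 20 triggers and then building 16 events yields exactly 20 output items in this model, so the event count seen downstream never loses step.<br />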
Figure 1: MCC exerciser board with two mounted memory cards.<br />
The MCC exerciser board, see Figure 1, is a standard<br />
VME board that can host a packaged version of the chip. On<br />
the motherboard we can plug-in 10 smaller “memory cards”.<br />
Each of them is equipped with two 8 Mbit deep memory<br />
banks and can either store data to be transmitted to the MCC<br />
or can sample data lines coming from the MCC. In a typical<br />
laboratory test we use 16 channels to simulate FE data, one<br />
channel to simulate commands coming from the ROD,<br />
and one channel to sample one MCC output data<br />
line. Two other channels are used to sample, for example,<br />
other MCC I/Os. The whole board is synchronous with a<br />
system clock of 40 MHz. Up to 200 ms of operation can be<br />
simulated with this set-up. SimPix provides input data to the<br />
memory cards using a 32 bit wide VME bus interface.<br />
SimPix is a modular simulation program, written in C++,<br />
which allows us to simulate the whole ATLAS Pixel Detector<br />
starting from physics data. A block diagram describing its<br />
main features is presented in Figure 2. The input data to the<br />
simulation can be provided either by randomly generated data or<br />
by means of a GEANT simulation of the whole detector.<br />
LEV1 triggers can be generated either randomly or according<br />
to the ATLAS trigger specifications. Data produced by this<br />
first step are then sent to 16 FE models which in turn produce
the correct inputs for the MCC. Data processed by the MCC<br />
are then analysed by an automatic analysis tool that flags<br />
possible mismatches. The distinguishing aspect of this simulation<br />
environment is its modular approach, which allows one to<br />
replace each component, FE or MCC model, with a similar<br />
description at a different level of accuracy. This is<br />
very useful, for example, if one wants to simulate different<br />
versions of the FE chip. Different models are therefore<br />
provided to allow simulation of either an ideal FE or a<br />
model that follows, as closely as possible, the real hardware<br />
implementation. Since the simulation is time oriented, one can<br />
simulate the input/output of each electronics component down<br />
to the single clock cycle. In the case of the MCC this<br />
approach is pushed further: besides an ideal model and a<br />
full C++ simulation of the chip, one can actually replace the<br />
MCC module with a Verilog simulation of the chip running<br />
on a remote workstation. This is accomplished by<br />
routines that use TCP/IP to interface two different machines<br />
on a network. This approach, in which the Verilog and the<br />
C++ model of the chip run at the same time, allows<br />
results to be compared quickly as the chip is being developed.<br />
Another option is to directly interface the simulation software<br />
to the MCC exerciser board previously described. In this case<br />
the software interfaces to the VME crate hosting the actual<br />
hardware. Using the same approach one can also interface<br />
directly to a logic state analyser. This approach has<br />
proven very useful for both the development and the<br />
testing of the whole chip.<br />
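The swappable-model idea behind SimPix can be illustrated with a minimal sketch (class names are ours; the real SimPix interfaces are not reproduced here): two FE models share the same per-clock interface, so a driver loop cannot tell them apart, but they differ in timing accuracy.<br />

```python
class IdealFE:
    """Emits every hit on the clock cycle it arrives, with no dead time."""
    def clock(self, hits_this_bc):
        return list(hits_this_bc)

class RealisticFE:
    """Queues hits and serializes one word per clock, like hardware."""
    def __init__(self):
        self.queue = []
    def clock(self, hits_this_bc):
        self.queue.extend(hits_this_bc)
        return [self.queue.pop(0)] if self.queue else []

def run(model, frames):
    """Drive any model, one beam crossing per frame of input hits."""
    return [model.clock(hits) for hits in frames]
```

Both models deliver the same hits, but the ideal one delivers them all in the same cycle while the realistic one serializes them one per cycle, as hardware would.<br />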
Figure 2: SimPix block diagram.<br />
IV. TEST RESULTS<br />
In this section we present the results of the tests performed<br />
on both chips designed in Genova using the DMILL 0.8 µm<br />
rad-hard technology. The design of both chips was done<br />
starting with a Verilog behavioural model and using Synopsys<br />
as synthesis tool to map the design to the standard cell library<br />
provided by the foundry. Layout was performed using<br />
Cadence Cell Ensemble. DRC and LVS were performed on a<br />
completely flat design. The only step of the standard design<br />
flow that was not performed was a simulation of the circuit<br />
using capacitance values extracted after the layout was<br />
completed, due to some problems in the provided design kit.<br />
On both chips all the full custom blocks were hand placed, and<br />
the clock tree was implemented with distributed buffers, as a clock<br />
tree generation tool was not provided.<br />
Both chips were tested in the laboratory, after packaging<br />
them, using the MCC exerciser board and a logic state<br />
analyser. The MCC-D0 chip was also tested at the CERN PS<br />
proton beam and irradiated up to 30 Mrad.<br />
A. Tests of the MCC-D0<br />
The first chip, MCC-D0, is a test chip developed in order<br />
to validate the new technology. It contains a 21 bit wide,<br />
32 word deep full custom FIFO designed using a<br />
conservative 12-transistor layout. In addition to the FIFO, a<br />
command decoder and a register bank were implemented. Up<br />
to 40 configuration bits can be stored in the register bank. The<br />
main goal of this version of the chip was to irradiate it at the<br />
CERN proton irradiation facility in order to be able to<br />
perform SEU studies.<br />
For this purpose some special functions were added to the<br />
chip in order to maximize the tests that could be performed<br />
during irradiation. For example, the information on a detected<br />
bad command is stored in an error register during<br />
irradiation. By reading this register we have been able to<br />
quantify the effect of induced errors.<br />
Figure 3: MCC-D0 test beam set-up. Data taking was active during<br />
irradiation and this allowed for SEU studies.<br />
In order to perform the irradiation test a dedicated test<br />
system was built. This test set-up is shown in Figure 3. Up to<br />
8 chips can be irradiated at the same time in the set-up. All<br />
chips are mounted on support cards, which only have passive<br />
components. These boards are connected to repeater cards,<br />
located 5 m away from the beam. The repeater cards are<br />
connected by a 29 m long flat cable to the selector card, which<br />
essentially implements a multiplexer allowing one of<br />
the 8 support cards to be selected as active. The selector board<br />
is controlled over a standard VME bus by the MCC<br />
exerciser, which is in turn controlled by SimPix. This set-up<br />
therefore allows all 8 chips to be addressed, with only one<br />
active at a time.<br />
Figure 4: The upper plot shows results for the 12 transistor based<br />
memory cell, while the lower one is for standard cell memory cells.<br />
The CERN PS proton irradiation facility has a bunched<br />
beam, with one or two 200 ms bursts every two seconds.<br />
We synchronised our data taking with the start of burst in<br />
order to be able to operate the chips during irradiation. We<br />
performed two different tests, a dynamic one on the actual<br />
active chip and a static one on the remaining 7 chips. After<br />
each burst the active chip was changed. The dynamic test<br />
consisted of continuously writing, reading and comparing data<br />
in the FIFO and configuration registers. We then read out data<br />
from the chip, checking that all commands were correctly<br />
recognized by the MCC and whether any bit had flipped inside the<br />
data structures. Each read operation was performed three times,<br />
in order to ensure that no transmission error occurred during the<br />
readout phase. The static test, instead, consisted of writing a<br />
known configuration data pattern into the MCC data structures<br />
before each beam spill and reading out the data after the spill<br />
in order to evaluate the bit flip probability. To select the 8 good<br />
chips we performed a test in the laboratory on packaged chips,<br />
because they had not been tested at the production site. We tested<br />
14 chips and 11 turned out to work perfectly. During<br />
this test we measured power consumption and maximum clock<br />
frequency, and checked that all writable structures were<br />
correctly addressable. The maximum working clock frequency<br />
turned out to be ~90 MHz. All working chips were in perfect<br />
agreement with synthesis and simulation results.<br />
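A common way to use such triple reads is bitwise 2-of-3 majority voting; the paper does not state how the three reads were combined, so the following is only an illustrative sketch:<br />

```python
def majority3(a, b, c):
    """Bitwise 2-of-3 majority over three reads of the same word.
    A bit error in any single read is outvoted by the other two."""
    return (a & b) | (a & c) | (b & c)
```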
All chips were successfully irradiated up to 30 Mrad. After<br />
irradiation all chips were working perfectly, but after some<br />
days of cooling down four of them stopped responding to any<br />
command. We also tried to anneal them in an oven at 100 °C<br />
for one week, without being able to make them work again.<br />
We suppose that this problem is related to our full custom<br />
LVDS I/O pad design, which in fact showed no activity on the<br />
non-working chips. On the four remaining chips we repeated<br />
the same measurements done before irradiation<br />
and compared the results. The only measured difference was a<br />
reduction of the maximum clock frequency of about 40%.<br />
Even after this frequency reduction, however, the chips were<br />
still functional at the LHC working frequency of 40 MHz.<br />
SEU measurements, presented in Figure 4, show almost no<br />
errors in the dynamic test, a static bit flip probability<br />
per burst of 1.2% for data stored in a standard cell scan<br />
flip-flop, and a probability of 2% for the full custom, twelve<br />
transistor FIFO memory cell.<br />
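Assuming independent bit flips, these per-bit probabilities translate into word-level survival probabilities, for example (our arithmetic, not a result from the paper):<br />

```python
p_std, p_fifo = 0.012, 0.02  # static per-bit flip probability per burst

# Probability that a stored word is read back unchanged after one
# burst, assuming independent flips (our simplifying assumption).
surv_reg40 = (1 - p_std) ** 40    # the 40-bit register bank
surv_fifo21 = (1 - p_fifo) ** 21  # one 21-bit FIFO word
```

Under this model roughly 38% of bursts would upset at least one bit of the 40-bit register bank, and about 35% would corrupt a given 21-bit FIFO word.<br />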
B. Tests of the MCC-D2<br />
The second chip developed by our group is a complete<br />
version of the MCC that is compliant with the ATLAS Pixel<br />
specifications. The goal of this second chip was to be able to<br />
implement, together with a DMILL version of the Front End<br />
chips, a rad-hard version of the ATLAS Pixel Detector<br />
module. The foundry did not test these chips and therefore we<br />
packaged 19 of them in order to be able to test them with our<br />
test set-up. We performed three different types of tests on the<br />
chips: first a DC test to measure static power consumption,<br />
then a test with the logic state analyser in order to test some<br />
simple patterns on the FIFO’s and on the Register Bank at<br />
different clock speeds, and finally a functional test with the<br />
MCC exerciser, in order to fully debug the event building<br />
circuitry of the chip. Results of these tests are shown in<br />
Table 1.<br />
Table 1: Test results of MCC-D2: in the first column of the table<br />
DC current measurements are shown while in the second one the<br />
maximum working clock frequency is reported. The last two<br />
columns show how many chips passed the logic state analyser and/or<br />
the MCC exerciser test. Finally, in the last row the total chip yield<br />
after each test is shown. Both fully working chips passed all tests<br />
only when operated at a frequency of 33 MHz.<br />
Chip # DC test Max ck LSA test MCC ex test<br />
1 24 mA 74 MHz OK Failed<br />
2 22 mA 72 MHz OK Failed<br />
3 34 mA 73 MHz OK Failed<br />
4 Failed - - -<br />
5 21 mA 74 MHz OK Failed<br />
6 21 mA 73 MHz Failed -<br />
7 20 mA 72 MHz OK Failed<br />
8 Failed - - -<br />
9 20 mA 73 MHz OK Failed<br />
10 Failed - - -<br />
11 18 mA 73 MHz OK OK @ 33 MHz<br />
12 30 mA 72 MHz OK Failed<br />
13 22 mA 75 MHz Failed -<br />
14 19 mA 74 MHz OK Failed<br />
15 41 mA 73 MHz Failed -<br />
16 21 mA 74 MHz OK Failed<br />
17 18 mA 75 MHz Failed -<br />
18 Failed - - -<br />
19 21 mA 74 MHz OK OK @ 33 MHz<br />
Yield 84% 58% 11%<br />
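The last two entries of the yield row follow directly from the per-chip rows of the table:<br />

```python
total = 19
lsa_pass = 11  # chips passing both the DC and logic state analyser tests
ex_pass = 2    # chips 11 and 19, the only ones passing the MCC exerciser

lsa_yield = round(100 * lsa_pass / total)  # 58 (%)
ex_yield = round(100 * ex_pass / total)    # 11 (%)
```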
DC measurements showed a power consumption of 20 to<br />
40 mA, in agreement with expectations. The maximum<br />
working clock frequency turned out to be in a range between<br />
70 and 75 MHz, which is in quite good agreement with synthesis<br />
results that predicted 78 MHz. One has to remember, though,<br />
that we did not perform a simulation of the chip after<br />
extracting parasitic capacitance values from the layout, and that<br />
we therefore rely entirely on wire load models. It is worth<br />
remembering that the test done with the logic state analyser did<br />
not cover all possible timing paths in the chip; we expect it<br />
covers ~10%. This test also allowed a detailed study of the<br />
Command Decoder built in the chip that performed very well,<br />
meeting all specifications. Also, by design, the Command<br />
Decoder is able to initialise itself into a well-known<br />
state, so that the chip can be operated without a hard<br />
reset pin or a power-up reset, by issuing only a global reset<br />
command. This feature worked flawlessly.<br />
Figure 5: Trigger commands without and with a bit flip. In the graph<br />
one can see the 40 MHz system clock and the serial input line of the<br />
MCC with the two Trigger commands. The last line shows the MCC<br />
output line, where one can see that the timing information between<br />
commands is preserved.<br />
Another implemented and tested feature is the ability to<br />
distinguish between two consecutive Trigger commands and<br />
to preserve the correct timing information even in case of a<br />
single bit flip in the input data line during the Trigger<br />
command. This can be seen in Figure 5.<br />
11 out of 19 chips passed these first two tests. The last<br />
test, for which we estimate a system coverage of ~70%, was<br />
performed doing real event building using the MCC exerciser.<br />
Results of these tests are also presented in Table 1, where one<br />
can see that only 11% of the chips turned out to be fully<br />
functional after these tests. One thing to note, though, is that<br />
both working chips had to be operated at only 33 MHz in<br />
order to obtain correct results. This discrepancy with synthesis<br />
results can be due either to the lack of simulation of a netlist<br />
that includes parasitic capacitances extracted from the actual<br />
layout, or to incorrect timing in the synthesis models.<br />
As we can see, the results of these tests show an<br />
unacceptably low yield and some problems in the design<br />
technology and/or its modelling. Rough calculations,<br />
also taking into account DMILL forecasts for a digital<br />
chip of this size, predict a yield of 20-30%.<br />
Much more detailed studies of these results would have to be<br />
performed before accepting this technology for our needs.<br />
Within our collaboration, two distinct versions of the<br />
analogue Front End chip, also developed in DMILL, were<br />
submitted within the same reticle, and these chips turned<br />
out to have a yield lower than 1%.<br />
V. CONCLUSIONS<br />
In this paper we presented test results of two chips that<br />
were submitted by the Genova group of the ATLAS Pixel<br />
Collaboration to DMILL, a 0.8 µm rad-hard technology. One<br />
test chip was tested and successfully irradiated up to 30 Mrad<br />
at the CERN PS proton beam facility. SEU studies have been<br />
performed on this chip operating the chip while irradiating it.<br />
Results showed that this technology, from the radiation<br />
tolerance point of view, is suitable for the ATLAS Pixel<br />
Detector even for the innermost layer. In addition we<br />
presented test results of a full version of the ATLAS Module<br />
Controller Chip. These results show severe yield<br />
problems (11% on 19 tested dice) and problems in the timing<br />
models of the libraries, which produced a chip working only at<br />
33 MHz, despite synthesis results that indicated 78 MHz.<br />
This, together with the fact that the FE chip of our<br />
collaboration, submitted on the same reticle as the MCC,<br />
showed a very poor yield (less than 1%), made our<br />
collaboration decide to temporarily drop this technology in<br />
order to explore the 0.25 µm technology, with rad-tolerant<br />
layout techniques, developed at CERN. Therefore the<br />
collaboration will submit a new version of both the FE and the<br />
MCC in this technology in October this year.<br />
VI. REFERENCES<br />
[1] ATLAS Collaboration, Pixel Detector TDR,<br />
CERN/LHCC/98-13, 1998<br />
[2] G. Darbo, The ATLAS Pixel Detector System<br />
Architecture, Proceedings of the “Third Workshop on<br />
Electronics for LHC Experiments”, London,<br />
CERN/LHCC/97-60 (1997), pp. 196-201.<br />
[3] R. Beccherle, The Module Controller Chip (MCC) of<br />
the ATLAS Pixel Detector, Proceedings of the<br />
“Nuclear Science Symposium”, Montreal, Canada,<br />
(1998).<br />
[4] P. Natchaeva et al., Results on 0.7% X0 thick pixel<br />
modules for the ATLAS detector, Proceedings of<br />
“Pixel 2000”, Genova, Nuclear Instruments and<br />
Methods in Physics Research A 465 (2001),<br />
pp. 204-210.
Radiation Tests on Commercial Instrumentation Amplifiers,<br />
Analog Switches & DACs a<br />
J. A. Agapito 1, N. P. Barradas 2, F. M. Cardeira 2, J. Casas 3, A. P. Fernandes 2, F. J. Franco 1,<br />
P. Gomes 3, I. C. Goncalves 2, A. H. Cachero 1, J. Lozano 1, J. G. Marques 2, A. Paz 1,<br />
M. J. Prata 2, A. J. G. Ramalho 2, M. A. Rodríguez Ruiz 3, J. P. Santos 1 and A. Vieira 2.<br />
1 Universidad Complutense (UCM), Electronics Dept., Madrid, Spain.<br />
2 Instituto Tecnológico e Nuclear (ITN), Sacavém, Portugal.<br />
3 CERN, LHC Division, Geneva, Switzerland.<br />
agapito@fis.ucm.es<br />
Abstract<br />
A study of several commercial instrumentation<br />
amplifiers (INA110, INA111, INA114,<br />
INA116, INA118 & INA121) under neutron and<br />
residual gamma radiation was performed. Some parameters<br />
(gain, input offset voltage, input bias<br />
currents) were measured on-line, and the bandwidth<br />
and slew rate were determined before and after<br />
irradiation. The results of tests on the REF102 and<br />
ADR290GR voltage references<br />
and the DG412 analog switch are also shown. Finally,<br />
different digital-to-analog converters were<br />
tested under radiation.<br />
I. INTRODUCTION<br />
The irradiations were performed using a<br />
dedicated irradiation facility at the Portuguese<br />
Research Reactor. The components under test<br />
were mounted on several PCBs on a simple<br />
support placed inside a thermally conditioned<br />
cylindrical cavity in one of the beam tubes of<br />
the reactor. For these experiments<br />
the reactor was operated at its nominal power<br />
of 1 MW. The fluence of 5 · 10¹³ n·cm⁻² on the<br />
central PCB was reached in about 5 days, with<br />
14 hours of operation + 10 hours of stand-by per day.<br />
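From these figures one can estimate the average neutron flux during reactor operation (our arithmetic, assuming the fluence accrues only during the 14 operating hours per day):<br />

```python
fluence = 5e13           # n / cm^2 reached on the central PCB
seconds = 5 * 14 * 3600  # 5 days at 14 hours of operation per day

avg_flux = fluence / seconds  # ~2e8 n / cm^2 / s during operation
```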
Figure 1: Fluence of neutrons (×10¹³ n·cm⁻²) as a function of position X (mm)<br />
A 0.7 cm thick boral shield cut the thermal<br />
neutron component of the beam, and a 4 cm<br />
thick Pb shield was used to reduce the total<br />
gamma dose below 2 kGy for the central PCB.<br />
The fission neutron fluxes were measured with<br />
Ni detectors placed at the centre of the boxes<br />
that contained the PCBs.<br />
A neutron-sensitive photodiode was<br />
placed on several boards, so that the neutron flux<br />
could be monitored online. A channel for monitoring<br />
the gamma radiation was also implemented.<br />
Integration dosimeters placed on the back of the<br />
first and last PCBs revealed, after completion of<br />
the tests, a total gamma dose in the 1.3 - 2.7<br />
kGy range.<br />
II. INSTRUMENTATION AMPLIFIERS<br />
All irradiated instrumentation amplifiers are<br />
built in bipolar technology for the amplifying<br />
and output stages. The main differences between<br />
them are the input stage technology and the circuit<br />
topology, designed to confer the specific<br />
features of each device. Four samples of each device<br />
were tested on line under neutron radiation.<br />
The INA110KP is a monolithic FET-input<br />
instrumentation amplifier. Its current-feedback<br />
circuit topology and laser-trimmed input stage<br />
provide excellent dynamic performance and<br />
accuracy. The INA111AP is a high speed,<br />
FET-input instrumentation amplifier offering<br />
excellent performance. Both amplifiers have an<br />
extended bandwidth (450 kHz at G=100).<br />
The differential gain remains constant during<br />
the irradiation period until a total accumulated<br />
neutron dose of 1.25-1.5 · 10¹³ n·cm⁻²<br />
(500 Gy) is reached. A slight decrease of less than<br />
1% (Figure 2) precedes the dramatic drop-off,<br />
and the destruction of the amplifiers occurs<br />
when the total dose reaches 6-7 · 10¹³ n·cm⁻²<br />
(2.4 kGy).<br />
a This work has been financed by the co-operation agreement K476/LHC between CERN & UCM, by the<br />
Spanish research agency CICYT (TIC98-0737), and by the Portuguese research agency ICCTI.
There is no common behaviour of the input<br />
offset voltage, but a large increase of this<br />
parameter is observed for all devices. The highest measured<br />
value is 5.5 mV. However, no increase of the<br />
input bias currents was observed.<br />
Figure 2: INA110 differential gain<br />
On the other hand, the bandwidth is reduced<br />
drastically, and the harmonic distortion increased,<br />
on both amplifiers (Table 1).<br />
Table 1: INA110. Bandwidth and slew rate before<br />
and after irradiation<br />
Fluence (n·cm⁻²)        B.W. MHz (G=10)   B.W. kHz (G=100)   S.R. V/µs<br />
0                        2.2               470                21<br />
3.5 · 10¹³ (1.9 kGy)     1.55              200                6.2<br />
5.1 · 10¹³ (2.2 kGy)     1.2               130                4.8<br />
6.8 · 10¹³ (2.4 kGy)     0.83              62                 3.0<br />
The input-output dc voltage transfer characteristic<br />
was measured before and after irradiation<br />
(Figure 3). The voltage transfer characteristic<br />
was asymmetrically altered for all devices,<br />
and the positive and negative saturation voltages<br />
decreased.<br />
Figure 3: DC Voltage transfer for INA110 amplifier<br />
with ±15 V power supply<br />
The INA121PA is a low power FET-input<br />
instrumentation amplifier with a very low bias<br />
current and a smaller bandwidth than the former<br />
ones (50 kHz at G=100). The measured parameters<br />
were altered in a similar way to the preceding<br />
devices, with the irradiation dose values reduced<br />
to a third. This can be related both to the smaller<br />
bandwidth and to the low power characteristics.<br />
The INA114 is a low cost, general purpose<br />
bipolar instrumentation amplifier offering excellent<br />
accuracy. Two different models of the same<br />
amplifier were tested, AP & BG (plastic and<br />
ceramic package). The differential gain is constant<br />
until a total dose of 1.5-1.8 · 10¹³ n·cm⁻²<br />
(1 kGy) is reached; it then increases by up<br />
to 3%, and the devices are abruptly destroyed at<br />
2.2-2.8 · 10¹³ n·cm⁻² (1.4 kGy) (Figure 4).<br />
Figure 4: Diff. Gain bipolar Inst. Amp. (INA114)<br />
An almost linear relation between the input offset<br />
voltage and the accumulated total neutron<br />
dose is observed (Figure 5). The input bias currents<br />
increase slightly in all devices after<br />
irradiation.<br />
Figure 5: Input Offset Voltage vs. Neutron flux<br />
A different behaviour of the two models was<br />
observed. The BG device revealed a higher tolerance<br />
to radiation, which can be attributed to<br />
the difference in the packages [1]; it also<br />
has lower values of bias current and input offset<br />
voltage.<br />
The INA118P is a bipolar low power, general<br />
purpose instrumentation amplifier. Although<br />
its bandwidth is higher than that of the<br />
INA114, these devices were destroyed earlier, at a<br />
total neutron dose of 2-3 · 10¹² n·cm⁻²<br />
(200 Gy). The input offset voltage increased up to<br />
6 mV, and no variation of the input bias current<br />
was detected.<br />
The INA116PA is a complete monolithic<br />
FET-input instrumentation amplifier with extremely<br />
low input bias current, DiFET inputs<br />
and special guarding techniques. It degraded quickly,<br />
and all devices were destroyed at<br />
a total dose of 2 · 10¹² n·cm⁻² (200 Gy). Although<br />
DiFET operational amplifiers had been found to<br />
be the most radiation tolerant devices [2], this<br />
can be attributed to the low bandwidth and micropower<br />
design technology of this device.<br />
III. VOLTAGE REFERENCES<br />
Several samples of the REF102BP and ADR290<br />
voltage references were exposed to neutron radiation.<br />
A 15 V power supply was used for all<br />
devices, which were externally loaded to draw 50% of<br />
the maximum current. The output voltage and<br />
the supply current of each device were<br />
measured on line. After irradiation the input-output<br />
DC transfer characteristic was determined<br />
for all surviving devices.<br />
A. REF102BP<br />
The REF102BP is a 10 V buried Zener diode<br />
voltage reference from BURR-BROWN. The<br />
nominal output voltage error is less than<br />
±0.05%, and the quiescent current is smaller<br />
than 1.4 mA, with a maximum output current of<br />
10 mA. Eight devices from two different fabrication<br />
series were irradiated with a total neutron<br />
dose between 2 and 9.9 · 10¹³ n·cm⁻² and a<br />
residual total gamma dose between 1400 and<br />
2700 Gy.<br />
Table 2: REF102BP. Minimum input voltage<br />
Total Dose                          Min. Input Voltage<br />
0                                   10.8 V<br />
2.0 · 10¹³ n·cm⁻² (1.4 kGy)         10.8 V<br />
2.6 · 10¹³ n·cm⁻² (1.6 kGy)         12.6 V<br />
3.1 · 10¹³ n·cm⁻² (1.8 kGy)         17.0 V<br />
3.5 · 10¹³ n·cm⁻² (1.9 kGy)         13.1 V<br />
5.1 · 10¹³ n·cm⁻² (2.1 kGy)         19.9 V<br />
The minimum supply voltage needed to obtain the<br />
nominal output of 10 V varies with the total<br />
accumulated dose as shown in Table 2.<br />
Table 3: REF102BP. Quiescent current<br />
Total Dose                          Quiescent Current (mA)<br />
0                                   1.30<br />
2.0 · 10¹³ n·cm⁻² (1.4 kGy)         0.90<br />
2.6 · 10¹³ n·cm⁻² (1.6 kGy)         0.94<br />
3.1 · 10¹³ n·cm⁻² (1.8 kGy)         0.75<br />
3.5 · 10¹³ n·cm⁻² (1.9 kGy)         0.88<br />
5.1 · 10¹³ n·cm⁻² (2.1 kGy)         0.55<br />
The line regulation coefficient increased<br />
with radiation. For the samples that received a<br />
total neutron dose between 2.0 and 3.5·10¹³ n·cm⁻²<br />
(1.7 kGy), the line regulation<br />
degraded from 10 μV/V to between 10 and 20<br />
mV/V. The quiescent current, on the other hand,<br />
varied with the total neutron dose as shown in<br />
Table 3. All these parameters appear to be independent<br />
of the production series.<br />
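The line regulation figure is simply the output-voltage change per volt of supply-voltage change. A minimal sketch of how the quoted numbers arise, using hypothetical measurement points (the raw data are not given in the text):<br />

```python
# Line regulation: output-voltage change per volt of supply change.
def line_regulation(v_in_lo, v_out_lo, v_in_hi, v_out_hi):
    return (v_out_hi - v_out_lo) / (v_in_hi - v_in_lo)

# Hypothetical pre-irradiation measurement: a 10 uV output shift
# for a 1 V supply step, i.e. 10 uV/V.
pre = line_regulation(14.0, 10.00000, 15.0, 10.00001)
# Hypothetical post-irradiation measurement: a 15 mV shift for the
# same step, i.e. 15 mV/V (within the reported 10-20 mV/V range).
post = line_regulation(14.0, 10.000, 15.0, 10.015)
```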
B. ADR290GR<br />
The ADR290 is a 2.048 ± 0.006 V low-noise,<br />
micropower precision voltage reference<br />
that uses an XFET (eXtra implanted junction<br />
FET) reference circuit. Analog Devices claims that<br />
the new XFET architecture offers significant<br />
performance improvements over traditional<br />
bandgap and Zener-based references.<br />
Three samples were irradiated to a total neutron<br />
dose between 4.2 and 11·10¹³ n·cm⁻². All<br />
devices were destroyed at between 5 and<br />
7·10¹² n·cm⁻² (400 Gy). Fig. 6 shows the behaviour<br />
of the output voltage with radiation.<br />
Figure 6: ADR290. Output Voltage vs. radiation<br />
The output voltage exceeds the specification<br />
limits (±6 mV) at a neutron dose between 2.5<br />
and 3.5·10¹² n·cm⁻² (200 Gy). In all samples there is<br />
a small initial decrease, followed by a sharp<br />
drop from 1.8 to 0.7 V. This behaviour is quite similar<br />
to that of other traditional voltage references, as<br />
previously reported (REF02) [3].<br />
All the references reported here use radiation-hard<br />
devices such as Zener diodes or JFET<br />
transistors in the first stage, while the amplifying<br />
and power stages are built in bipolar technology;<br />
the bipolar stages are the likely cause of the degradation.<br />
Finally, the bias current decreases along with the<br />
output voltage.<br />
IV. ANALOG SWITCHES<br />
The DG412 from MAXIM contains four SPST<br />
normally-open CMOS analog switches. It can<br />
operate from a single or bipolar supply and is<br />
TTL/CMOS compatible. Three devices were<br />
exposed to a neutron fluence between 4.4 and<br />
8.8·10¹³ n·cm⁻², and a residual gamma dose<br />
of about 2000-2700 Gy.<br />
Analog switches are designed with two transistors<br />
in parallel, NMOS and PMOS, so that<br />
the equivalent resistance is almost constant for<br />
any voltage applied to their terminals [4].<br />
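The parallel-transistor argument can be illustrated with a toy linearized model; all parameter values below are hypothetical, not taken from the DG412 datasheet:<br />

```python
# Toy model of a CMOS transmission gate: as the signal voltage rises, the
# NMOS overdrive shrinks while the PMOS overdrive grows, so the parallel
# channel resistance stays flat (exactly constant in this linearized model).
def r_on(v_sig, vdd=15.0, vss=-15.0, vth=2.0, k=0.02):
    r_n = 1.0 / (k * max(vdd - v_sig - vth, 1e-6))  # NMOS channel resistance
    r_p = 1.0 / (k * max(v_sig - vss - vth, 1e-6))  # PMOS channel resistance
    return r_n * r_p / (r_n + r_p)                  # parallel combination

signals = range(-10, 11, 5)
combined = [r_on(v) for v in signals]                           # flat
nmos_only = [1.0 / (0.02 * (15.0 - v - 2.0)) for v in signals]  # varies strongly
```

A single-transistor switch, by contrast, shows a strong resistance variation with input voltage; that signature is what is used later in the text to diagnose the failed PMOS.<br />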
Measurements of the on-resistance, the<br />
switching voltage and the leakage currents, with<br />
several bias supplies and logic levels, were carried<br />
out on every device and all four switches<br />
before irradiation and after the deactivation<br />
period (one month later). During the deactivation<br />
period these circuits remained unbiased. The<br />
on-resistances and the leakage currents were<br />
measured on line with a bias supply<br />
of ±15 V and a logic level VL = +5 V.<br />
The on-line measurements are shown in<br />
Figure 7.<br />
Figure 7: DG412. On resistance vs. radiation<br />
The increase of the on-resistance may be attributed<br />
either to the neutron-induced decrease of the mobility<br />
and concentration of carriers,<br />
or to a shift in the threshold voltage of either<br />
of the MOS transistors. The latter effect (associated<br />
with the residual gamma radiation) could explain<br />
why the channel cannot be closed in the<br />
window between 2.25 and 3·10¹³ n·cm⁻² (700<br />
and 900 Gy).<br />
The on-line measurements revealed a large increase<br />
of the leakage currents,<br />
from nanoamps up to 2 mA. However, after the<br />
deactivation period a new measurement showed<br />
that the leakage currents had disappeared.<br />
This may be attributed to annealing<br />
during the cooling period.<br />
After irradiation the operating supply voltage<br />
V+ - V- needs to be higher than a value between<br />
11.9 V and 13.4 V, depending on the dose,<br />
with VL = VCC. This effect was not detected<br />
during the on-line periods, when all switches were<br />
operating at Vsupply = ±15 V and VL = 5 V. This<br />
latter degradation of the CMOS devices took place<br />
during the cooling period after irradiation, due to<br />
the movement of charges in the dielectric materials.<br />
For the devices still in operation, the characteristic<br />
of resistance as a function of the input<br />
voltage was strongly modified. Figure 8 shows<br />
this characteristic for a ±10 V supply voltage. For a<br />
higher supply voltage this anomalous variation of<br />
the resistance with the input voltage decreases.<br />
Since this characteristic is similar to that of a<br />
switch with a single operating MOS transistor<br />
(NMOS) [4], it can be assumed that the threshold<br />
voltage of the PMOS transistor has shifted so<br />
strongly that this transistor always<br />
operates as an open circuit.<br />
Figure 8: DG412. Resistance vs. Input voltage.<br />
Furthermore, the switching voltage was<br />
measured; Table 4 shows the switching<br />
level for a device that received a total dose of<br />
8.8·10¹³ n·cm⁻² and 2.7 kGy.<br />
Table 4: DG412 switching voltage (VL = VCC).<br />
Supply | Before exp. | After exp.<br />
Bipolar ±10 V | 3.03 V | 1.19 V<br />
Bipolar ±15 V | 4.08 V | 2.72 V<br />
Unipolar 0-15 V | 4.24 V | 2.71 V<br />
V. DACs<br />
Three different models were tested (AD565,<br />
AD667 and AD7541). The first two models<br />
were selected for their TTL technology, fast<br />
response, and reported tolerances of up to<br />
3 kGy for the first and more than 2·10¹² n·cm⁻²<br />
for the second [5, 6]. The third model was selected<br />
for its CMOS technology.<br />
On-line output voltage measurements were<br />
carried out by converting a digital sweep<br />
from zero to 4095. Neither offset nor gain corrections<br />
were made. The neutron dose was between<br />
3.1 and 3.4·10¹³ n·cm⁻², and the gamma<br />
dose about 1.8 kGy.<br />
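The sweep measurement just described can be reduced to offset and gain errors in LSB as sketched below; the measured voltages are hypothetical, chosen only to illustrate the arithmetic:<br />

```python
# Offset and gain errors, in LSB, from a 12-bit DAC output sweep.
FS = 10.0            # assumed full-scale output, volts
LSB = FS / 4096      # one LSB is about 2.44 mV

def offset_gain_errors(codes, volts):
    off = volts[0] / LSB                       # zero-code deviation, in LSB
    ideal_span = (codes[-1] - codes[0]) * LSB  # ideal end-to-end span
    gain = (volts[-1] - volts[0] - ideal_span) / LSB
    return off, gain

codes = [0, 2048, 4095]
volts = [0.005, 5.004, 9.997]   # hypothetical measured outputs
off, gain = offset_gain_errors(codes, volts)
```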
C. AD565AJD<br />
The AD565A uses 12 precision high-speed<br />
bipolar current-steering switches, a control amplifier<br />
and a laser-trimmed thin-film resistor network<br />
to produce a very fast, high-accuracy analog<br />
output current. The AD565A also includes a<br />
buried Zener reference comparable to the best<br />
discrete reference diodes. An external amplifier<br />
was added to provide a unipolar 0 to +10<br />
volt output.<br />
Four samples were tested in two different<br />
sessions. The offset error remained between 5<br />
and 8 LSB. The gain error varies with the neutron<br />
radiation (Figure 9): a change of less than 5<br />
LSB in the gain error is measured as the radiation<br />
increases from 0 up to 3.1·10¹³ n·cm⁻²<br />
(1.8 kGy). Beyond that dose the converter malfunctions.<br />
A day after the reactor stops it recovers, and the<br />
measured values are close to normal.<br />
Figure 9: AD565 Gain error.<br />
NEFF remained between 10.5 and 11 bits<br />
during the whole operation. Finally, the internal reference<br />
voltage varied by 10 mV in the final radiation<br />
period. In one of the devices there was an<br />
interval in which the reference voltage dropped<br />
to 7 V, but by the end it had recovered its nominal<br />
value.<br />
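The paper does not state how NEFF is defined; a common convention is the SINAD-based effective number of bits, sketched here for orientation (the 65 dB input is hypothetical):<br />

```python
# Effective number of bits from SINAD (the standard dynamic-test
# definition; the paper's NEFF may be computed differently).
def enob(sinad_db):
    return (sinad_db - 1.76) / 6.02

ideal = enob(6.02 * 12 + 1.76)   # an ideal 12-bit converter: 12.0 bits
degraded = enob(65.0)            # hypothetical degraded SINAD: about 10.5 bits
```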
D. AD667<br />
The AD667 is a complete voltage-output 12-bit<br />
digital-to-analog converter, including a high-stability<br />
buried Zener voltage reference and a<br />
double-buffered input latch on a single chip.<br />
The converter uses 12 precision high-speed bipolar<br />
current-steering switches and a laser-trimmed<br />
thin-film resistor network to provide<br />
fast settling time and high accuracy.<br />
Figure 10: AD667. Offset error<br />
Two samples were tested on line with radiation<br />
up to 3·10¹³ n·cm⁻² (1.8 kGy). The initial<br />
offset error was less than 1.5 LSB, but from a dose<br />
of 8·10¹² n·cm⁻² (550 Gy) it increases abruptly,<br />
reaching 50 LSB at 1.2·10¹³ n·cm⁻² (720 Gy).<br />
It then decreases to 30 LSB at the maximum<br />
radiation point (Figure 10). The gain<br />
error decreases from 15 down to 10 LSB, and<br />
the internal voltage reference increases by 10 mV<br />
in 10 V at 3·10¹³ n·cm⁻². NEFF remained between<br />
11 and 13 bits.<br />
E. AD7541AKN & MX7541AKN<br />
The AD7541A is a low-cost, high-performance<br />
12-bit monolithic multiplying digital-to-analog<br />
converter. It is fabricated using an advanced<br />
low-noise thin-film-on-CMOS process.<br />
An external amplifier was added<br />
to provide a unipolar 0 to -10 volt output.<br />
One sample from Analog Devices and one<br />
from Maxim were tested on line with radiation<br />
up to 3·10¹³ n·cm⁻² (1.8 kGy). Both<br />
were destroyed at 1.3·10¹³ n·cm⁻²<br />
(780 Gy). It had been reported that this converter<br />
could be destroyed at an accumulated gamma<br />
dose of 100 Gy [6].<br />
The offset and gain errors were constant until<br />
a dose of 9·10¹² n·cm⁻² (600 Gy) was reached;<br />
they then increased until total destruction. NEFF<br />
was constant at 11 bits until 4·10¹² n·cm⁻²<br />
(270 Gy), decreased to 7 bits at 1.1·10¹³ n·cm⁻²<br />
(800 Gy), and then dropped abruptly to 0.<br />
VI. CONCLUSIONS<br />
Micropower design seems to decrease the<br />
radiation tolerance of instrumentation amplifiers;<br />
on the contrary, broad bandwidth and JFET<br />
inputs increase the radiation hardness.<br />
The voltage reference REF102 can operate<br />
up to 5·10¹³ n·cm⁻² and 2.2 kGy.<br />
Gamma radiation strongly modifies the<br />
characteristics of analog switches.<br />
Some bipolar DACs can operate without<br />
significant degradation (EOff
Low Dose Rate Effects and Ionization Radiation Tolerance of the ATLAS Tracker<br />
Front-End Electronics<br />
M. Ullán 1,3 , D. Dorfan 1 , T. Dubbs 1 , A. A. Grillo 1 , E. Spencer 1 , A. Seiden 1 , H. Spieler 2 , M. Gilchriese 2 ,<br />
M. Lozano 3<br />
1 Santa Cruz Institute for Particle Physics (SCIPP), University of California at Santa Cruz, Santa Cruz, CA 95064, USA<br />
ullan@scipp.ucsc.edu<br />
2 Lawrence Berkeley National Laboratory (LBNL), University of California at Berkeley, 1 Cyclotron Rd, Berkeley, California<br />
94720, USA<br />
3 Centro Nacional de Microelectrónica (CNM-CSIC), Campus UAB, 08193 Bellaterra, Barcelona, Spain<br />
Abstract<br />
Ionization damage has been investigated in the ABCD<br />
chip, the IC designed for the readout of the detectors in the Semiconductor<br />
Tracker (SCT) of the ATLAS experiment at the LHC.<br />
The technology used in its fabrication has been<br />
found to be free from Low Dose Rate Effects, which facilitates<br />
the study of the radiation hardness of the chips.<br />
Other experiments have been performed on individual<br />
transistors in order to study the effects of temperature and<br />
annealing, and to obtain quantitative information and a better<br />
understanding of these mechanisms. With this information,<br />
suitable irradiation experiments have been designed for the<br />
chips to obtain a better answer about their survivability<br />
in the real conditions of the ATLAS detector.<br />
I. INTRODUCTION<br />
The specific characteristics of the silicon detectors to be<br />
used in the Inner Detector of the ATLAS experiment that will<br />
be installed in the Large Hadron Collider (LHC) at CERN,<br />
together with the large amount of data required to be<br />
processed in a very short period of time, force the 'front-end'<br />
electronics designed to do this job to be placed very close to<br />
the actual detectors. This means, in fact, that the ICs for the<br />
immediate data acquisition and pre-processing of the signals<br />
coming from the detectors will be working in the active area<br />
of the experiment, very close to the collision point. The ICs<br />
will, therefore, be operated in a very harsh environment due to<br />
the amount of radiation in that area.<br />
For this reason, radiation-hard microelectronic<br />
technologies have to be used, and the radiation hardness of the<br />
ICs should be verified prior to installation in the<br />
experiment. This is the framework of the present work, in which<br />
the radiation hardness of the two bipolar microelectronic<br />
technologies that have been proposed for the experiment is<br />
being evaluated, and the total ionization radiation damage<br />
expected for the ICs is measured.<br />
A basic approach to test the radiation hardness of the ICs<br />
designed for the experiment is, in principle, to irradiate them<br />
in a short period of time up to the total dose expected in the<br />
real case, and then measure their performance to see if their<br />
parameters remain within specs. The problem, however, is<br />
usually not so straightforward. In recent years it has been<br />
reported that bipolar transistors can suffer more radiation<br />
damage when irradiated at low dose rates than at<br />
high rates [1], [2]. This means that an irradiation experiment<br />
done at a higher dose rate than the actual rate can<br />
underestimate the damage. These phenomena are called Low<br />
Dose Rate Effects (LDRE) of bipolar transistors.<br />
Therefore, a more conservative approach for the test<br />
would be to irradiate the chips in the real conditions of the<br />
experiment. Then we would have the damage in the real case.<br />
But this possibility, though achievable in some particular<br />
cases due to the lower doses involved, is not realistic in the<br />
majority of the high energy physics or astrophysics<br />
applications, in which the long term operations of the<br />
experiments lead to high energy depositions during the whole<br />
life of the experiment but still at very low dose rates.<br />
This is the case for the ATLAS experiment, which is<br />
intended to operate for 10 years with a total expected energy<br />
deposition (considering the stopping periods) of 10<br />
Mrads(SiO2), but at a dose rate of 0.05 rads(SiO2)/s [3]. In<br />
these circumstances an experiment to check the radiation<br />
hardness of the chips in the real conditions would take<br />
approximately 6.5 years, which is not practical.<br />
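The quoted duration follows directly from the dose and the rate:<br />

```python
# Time needed to deliver 10 Mrad(SiO2) at the real ATLAS-SCT rate of
# 0.05 rad(SiO2)/s: 2e8 seconds, i.e. roughly 6.3 calendar years
# (consistent with the approximately 6.5 years quoted in the text).
total_dose = 10e6            # rad(SiO2)
dose_rate = 0.05             # rad(SiO2)/s
seconds = total_dose / dose_rate
years = seconds / (365.25 * 24 * 3600)
```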
Many different approaches have been tried to test dose<br />
rate effects in bipolar technologies, and recent studies have<br />
shown that high-temperature irradiations at high dose rates<br />
can mimic the effects of low dose rate irradiations [4]-[6], but<br />
no single approach has yet been presented that covers all the<br />
possibilities; there is therefore still no universal<br />
hardness assurance approach for bipolar technologies.<br />
Nevertheless, these effects have been seen to be strongly<br />
technology dependent, which means that, in many cases, some<br />
devices will not suffer from LDRE under the particular<br />
conditions of the experiment. In such cases the first approach<br />
described above can be used for hardness assurance studies,<br />
avoiding a lot of trouble in complicated and long-term LDRE<br />
studies.<br />
In this work, the ionization radiation hardness of the IC<br />
designed for the front-end readout of the detectors of the<br />
ATLAS-SCT (ABCD chip) is evaluated taking into account<br />
the possible presence of Low Dose Rate Effect in the<br />
technology (DMILL).
II. TESTING PLAN<br />
Four experiments have been devised in order to study the<br />
LDRE in the DMILL technology and its ionization damage<br />
characteristics:<br />
i) Experiment 0: the sensitivity to LDRE of both<br />
technologies is evaluated by irradiating test structures at a wide<br />
range of dose rates, but only to a dose that is reasonably<br />
achievable at the interesting low rates, 1 Mrad in our case.<br />
ii) Experiment A: after evaluating the sensitivity of these<br />
technologies to LDRE, the actual magnitude of these effects is<br />
measured for the total dose of interest, and the final damage<br />
to the transistors is measured for the full total dose at the dose<br />
rate of interest.<br />
iii) Experiment B: the test structures are irradiated at a<br />
high rate up to the total dose of interest and at different<br />
temperatures, in order to identify an appropriate temperature<br />
(optimum temperature) for the accelerated tests, one which best<br />
mimics the damage produced by a low-rate irradiation.<br />
iv) Experiment C: accelerated tests are carried out on the<br />
ICs at high dose rate and up to the total dose of interest, using<br />
the optimum temperature if necessary.<br />
III. EXPERIMENTAL PROCEDURES<br />
All irradiations have been done using three different Co-60<br />
sources, which provide the 1.2 and 1.3 MeV gamma radiation<br />
widely used in ionization damage studies. A Pb+Al<br />
shielding box has been used, together with geometrical<br />
considerations, to avoid dose enhancement effects<br />
according to standards [7], [8], guaranteeing less than a 20%<br />
systematic error in the dosimetry. Thermoluminescent dosimeters<br />
(TLD) have been used, also according to standards [9], in<br />
order to identify the irradiation positions for the different dose<br />
rates, obtaining a statistical deviation of less than 5%. The<br />
devices were kept biased during irradiation in order to be<br />
closer to real operating conditions, and low-mass materials have<br />
been used for the supporting boards. Temperature control<br />
has been provided via resistive tape heaters and liquid-cooling<br />
actuators. Thermocouples and Resistance Temperature<br />
Detectors (RTDs) in physical contact with the chips were used<br />
for the temperature measurements.<br />
The Gummel plots and common-emitter current gains (β)<br />
of the transistors have been extracted before and after<br />
irradiation. Consecutive measurements have been made on<br />
every transistor, right after the irradiation and for several<br />
weeks afterwards, until the annealing process was complete.<br />
Two main parameters have been used to characterize the<br />
damage produced by radiation [9]: the excess base current<br />
density (∆Jb), defined as the difference between the base<br />
current density after and before irradiation at a base-emitter<br />
voltage of 0.7 V; and the relative beta change (∆β%), defined<br />
as the difference in the common-emitter current gain, at the<br />
same base-emitter voltage (0.7 V), before and after irradiation,<br />
normalized to the pre-irradiation value. The temperature has<br />
been controlled during the measurements to make sure that it<br />
stays within a ±2 °C margin. In addition, a commonly used<br />
correction for small temperature differences has been applied<br />
to the base current: the post-irradiation value is multiplied by<br />
the ratio of the pre- to post-irradiation collector current<br />
(which is known not to be affected by radiation).<br />
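The two damage metrics and the temperature correction can be sketched as follows; the currents are hypothetical values at Vbe = 0.7 V, not measured data from the paper:<br />

```python
# Excess base current density and relative beta change, with the
# collector-current ratio used as a temperature correction.
def damage_metrics(ib_pre, ib_post, ic_pre, ic_post, area_cm2):
    # Ic is taken as radiation-insensitive, so the pre/post Ic ratio
    # corrects Ib for small temperature differences between measurements.
    ib_post_corr = ib_post * (ic_pre / ic_post)
    delta_jb = (ib_post_corr - ib_pre) / area_cm2           # A/cm^2
    beta_pre = ic_pre / ib_pre
    beta_post = ic_pre / ib_post_corr
    delta_beta_pct = 100.0 * (beta_post - beta_pre) / beta_pre
    return delta_jb, delta_beta_pct

area = 1.2e-4 * 10e-4   # "primary" transistor emitter, 1.2 um x 10 um, in cm^2
djb, dbeta = damage_metrics(ib_pre=1.0e-9, ib_post=1.9e-9,
                            ic_pre=2.0e-7, ic_post=2.1e-7, area_cm2=area)
```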
Test structures containing sets of bipolar transistors from<br />
the same technology in which the ABCD chip [10] is fabricated<br />
have been used. Two different transistor sizes have been used<br />
for the irradiations, and only small differences have been seen<br />
between the radiation damage for each. The sizes of the<br />
transistors tested are 1.2 µm x 1.2 µm for the “minimum”<br />
transistor and 1.2 µm x 10 µm for the so-called “primary”<br />
transistor.<br />
IV. EXPERIMENT 0: LDRE SENSITIVITY<br />
The purpose of this experiment is to evaluate the<br />
sensitivity of the DMILL technology to LDRE. This first step<br />
is extremely important because it has been seen that LDRE<br />
are strongly technology-dependent, and in many cases a<br />
particular technology can be free of them, or at least they<br />
might not show up in the range of dose rates of interest for the<br />
experiment. In those cases tests of radiation damage can be<br />
done directly at high dose rates, saving a lot of effort in<br />
complicated LDRE studies.<br />
A first study compared the damage measured just after<br />
the irradiation with the damage measured after a certain time,<br />
in order to make sure that annealing effects do not interfere<br />
with the effects produced by the low dose rates [11]. The results<br />
show that, in all cases, the annealing, when present, is<br />
beneficial (the damage is reduced), stops after at most three<br />
weeks, and cannot account for the differences in the<br />
damage between dose rates. In the following, all the data<br />
points represent measurements taken at least three weeks<br />
after irradiation.<br />
For this experiment the transistors have been irradiated at<br />
a very wide range of dose rates, all of them up to a total<br />
dose of 1 Mrad(SiO2), in order to obtain data even for the very<br />
low dose rates in a reasonable period of time. The dose rates<br />
chosen cover four full decades: 0.05, 0.28, 1.33, 31.1, 112<br />
and 575 rads(SiO2)/s.<br />
Figure 1: Excess base current density (∆Jb) versus dose rate for<br />
DMILL transistors from Experiment 0.
Figure 2: Relative beta change (∆β%) versus dose rate for DMILL<br />
transistors from Experiment 0.<br />
The results of this experiment are shown in Figure 1 and<br />
Figure 2 for the DMILL transistors, in which both the excess base<br />
current density (∆Jb) and the relative beta change (∆β%) are<br />
plotted versus dose rate, all for the same total dose of 1 Mrad.<br />
All the data points correspond to the final measurement after<br />
annealing has been completed. It can be seen that there is no<br />
evidence of low dose rate effects in these transistors, or that it is<br />
negligible. Similar plots are shown in Fig. 3 and Fig. 4 for the<br />
CB2 transistors. It is clear from these plots that those transistors<br />
suffer from dose rate effects, showing appreciably more damage<br />
at low dose rates.<br />
V. EXPERIMENT A: MAPPING OF THE DAMAGE VS.<br />
TOTAL DOSE<br />
In view of the results from Experiment 0, two main<br />
consequences follow. The first is that it is<br />
necessary to know whether the DMILL technology remains free<br />
of LDRE at higher doses; the second is that an<br />
estimate of the extent of these effects is needed for the<br />
CB2 transistors.<br />
For both of these measurements, long-term, low dose rate<br />
irradiations would have to be done up to the total 10 Mrads and<br />
at a 0.05 rad/s dose rate, which are the real conditions in the<br />
ATLAS-SCT. Such experiments are not realizable because they<br />
would take too long to give results. A solution to the problem<br />
is to map out the damage on the transistors vs. the total dose,<br />
taking intermediate measurements of the damage at<br />
increasing total doses up to 10 Mrads. This way one can<br />
perform low dose rate irradiations up to a certain achievable<br />
total dose and estimate the damage at the final total dose by<br />
extrapolation. Furthermore, one can see whether irradiations done<br />
at different dose rates lead to a “rigid” shift of the curves or, on<br />
the contrary, change their shape or slope. If the<br />
curves only shift rigidly for different dose rates, we can be<br />
assured that the differences between low and high dose rate<br />
irradiations are maintained at high total doses.<br />
The conditions of the four different mapping experiments<br />
that have been performed for this purpose can be seen in<br />
Table 1.<br />
Table 1: Conditions of the four mapping experiments.<br />
In Figure 3 and Figure 4 we can see the results of these<br />
experiments in terms of the excess base current density and<br />
the relative beta change. It can be seen that for all dose rates<br />
the excess base current density is linear in total dose on a<br />
log-log plot (power-law dependence: ∆Jb ∝ (dose)^a, with a<br />
constant). The relative beta change also follows a<br />
linear dependence on total dose, but in this case on a plot with<br />
only a logarithmic abscissa (dependence: ∆β ∝ log(dose)).<br />
It can also be seen that the curves for all four irradiations are<br />
parallel, and in fact superposed, meaning that the damage is<br />
the same for all transistors regardless of the dose rate and over<br />
the whole range of doses, indicating that there are no LDRE<br />
for these transistors up to 10 Mrads.<br />
Figure 3: Excess base current density (∆Jb) versus total dose for<br />
DMILL transistors from Experiment A.<br />
Figure 4: Relative beta change (∆β%) versus total dose for DMILL<br />
transistors from Experiment A.
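The mapping-and-extrapolation idea can be sketched numerically: fit the power law on the intermediate doses, then extrapolate to 10 Mrad. The data points below are hypothetical, shaped like the reported ∆Jb ∝ (dose)^a dependence:<br />

```python
import math

# Least-squares straight-line fit in log-log coordinates, then
# extrapolation of the excess base current density to 10 Mrad.
doses = [0.5e6, 1e6, 2e6, 4e6]              # rad, hypothetical
djb = [2.0e-10, 2.8e-10, 4.0e-10, 5.7e-10]  # A/cm^2, hypothetical

xs = [math.log10(d) for d in doses]
ys = [math.log10(j) for j in djb]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))      # power-law exponent
log_c = my - a * mx
djb_10mrad = 10 ** (log_c + a * math.log10(10e6))
```

With these made-up points the fit gives a ≈ 0.5 and an extrapolated ∆Jb just below 10⁻⁹ A/cm², the same order of magnitude as the figure quoted for the real measurements.<br />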
The results show that the figures for the total ionization<br />
damage of the bipolar transistors of the DMILL technology<br />
are 8 × 10⁻¹⁰ A/cm² for the excess base current density and<br />
-45% for the beta change, giving a final value of the<br />
transistor current gain (for the emitter sizes used in the<br />
experiments) of around 90 to 125.<br />
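As a quick consistency check of these figures (a sketch, not data from the paper), a -45% beta change ending at gains of 90 to 125 implies pre-irradiation gains of roughly 165 to 225:<br />

```python
# Pre-irradiation gain implied by a -45% beta change and the quoted
# final gains of 90 and 125.
beta_pre_lo = 90 / (1 - 0.45)    # about 164
beta_pre_hi = 125 / (1 - 0.45)   # about 227
```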
VI. EXPERIMENT B: TEMPERATURE<br />
Given that Experiment A has demonstrated that<br />
the DMILL technology does not suffer from LDRE, at least up<br />
to the conditions of interest for the ATLAS-SCT, it can be<br />
concluded that accelerated tests are not necessary for the<br />
hardness assurance testing of the ABCD chip. It is therefore<br />
not necessary to find the optimum temperature for these<br />
irradiations, which was the initial goal of Experiment B.<br />
Figure 5: Excess base current density (∆Jb) versus temperature for<br />
DMILL transistors from Experiment B.<br />
Figure 6: Relative beta change (∆β%) versus temperature for<br />
DMILL transistors from Experiment B.<br />
Nevertheless, a set of irradiations at different<br />
temperatures has been carried out on the bipolar transistors<br />
of this technology, in order to obtain the variation of the<br />
damage in the transistors with the irradiation temperature.<br />
The actual working temperature of the chips in the<br />
real ATLAS-SCT environment is not yet fixed, and a low<br />
optimum (worst-case) temperature with a sharp slope in the<br />
damage at low temperatures would make the hardness<br />
assurance testing more difficult.<br />
The bipolar transistors have been irradiated, all of them up<br />
to the total ATLAS-SCT dose of 10 Mrads, at a very high<br />
dose rate (575 rads/s). The temperatures used in the study<br />
were 11, 37, 57, 70, 91<br />
and 110 °C.<br />
Figure 5 and Figure 6 show the results of Experiment B<br />
for the test structures of the DMILL technology. It can be<br />
seen that the worst-case temperature appears at around 90 °C,<br />
which is high enough for the slope to be smooth<br />
at low temperatures. Nevertheless, the difference<br />
in the damage between an irradiation done at the expected<br />
working temperature of the ICs in the ATLAS-SCT (10<br />
°C) and one at room temperature is appreciable, and<br />
should be taken into account in future irradiation tests of the<br />
ABCD chips.<br />
VII. EXPERIMENT C: FINAL TEST<br />
The results from Experiment A and Experiment B<br />
demonstrate that the DMILL technology is free from LDRE in<br />
the range of total doses and dose rates of interest in the<br />
ATLAS-SCT experiment, and that the difference in the<br />
damage between the actual operating temperature and room<br />
temperature is small. These results validate the previous high<br />
dose rate irradiation tests carried out by the collaboration,<br />
with the result that the ABCD chip remains within<br />
specifications for the total life of operation [12].<br />
Nevertheless, the actual ABCD chips have been irradiated<br />
in order to quantify the damage produced on them by<br />
10 Mrads of ionization radiation. The irradiations have been<br />
done at high dose rate (575 rads/s), up to the total dose of 10<br />
Mrads, and at room temperature. The results from these<br />
irradiations are currently being analyzed.<br />
VIII. CONCLUSION<br />
The technology used in the fabrication of the ICs proposed<br />
for the front-end readout of the ATLAS-SCT (DMILL) has<br />
been tested for ionization damage, taking low dose<br />
rate effects into account. The results show that this technology,<br />
used in the fabrication of the ABCD chip, does not suffer from<br />
LDRE, or only by a negligible amount. This result indicates that<br />
irradiations performed to test the radiation tolerance of the ICs<br />
can be done at high dose rates without underestimating the<br />
damage to the chips. This will save much effort in long-term<br />
irradiations or accelerated tests.<br />
The results also show that the variations of the damage<br />
produced on the chips for different irradiation temperatures<br />
are not very large in the low-temperature range, but they<br />
should still be considered if the chips are at the edge of their<br />
survivability after the irradiation tests.<br />
Finally, the absence of LDRE in the DMILL<br />
technology validates and confirms the results of previous<br />
irradiation tests within the collaboration, indicating that the<br />
ABCD chip will remain within specifications after the 10 years<br />
of operation in the ATLAS-SCT environment.<br />
REFERENCES<br />
[1] E. W. Enlow, et al. “Response of Advanced Bipolar<br />
Processes to Ionizing Radiation”, IEEE Trans. on<br />
Nuclear Science, V.38, 1342 (1991)<br />
[2] R. D. Schrimpf, “Recent Advances in Understanding<br />
Total-Dose Effects in Bipolar Transistors”, IEEE Trans.<br />
on Nuclear Science, June 96, p. 787.<br />
[3] ATLAS Inner Detector Technical Design Report (TDR),<br />
CERN/LHCC/97-16/17, April 1997.<br />
[4] S.C. Witczac, et al., “Accelerated Tests for Simulating<br />
Low Dose Rate Degradation of Lateral and Substrate<br />
PNP Bipolar Junction Transistors”, IEEE Trans Nuclear<br />
Science, Dec. 96, pg. 3151.<br />
[5] O. Flament, et al., “Ionizing dose hardness assurance<br />
methodology for qualification of a BiCMOS technology<br />
dedicated to high dose level applications”, IEEE Trans.<br />
Nuclear Science, Dec. 98, p. 1420.<br />
[6] S.C. Witczac, et al., “Hardness Assurance Testing of<br />
Bipolar Junction Transistors at Elevated Irradiation<br />
Temperatures”, IEEE Trans. Nuclear Science, Dec. 97,<br />
p. 1989.<br />
[7] “Minimizing Dosimetry Errors in Radiation Hardness<br />
Testing of Silicon Electronic Devices Using Co-60<br />
Sources”, Annual Book of ASTM Standards, American<br />
Society For Testing And Materials (ASTM), E 1249–93,<br />
1993.<br />
[8] “Standard Guide for Ionizing Radiation (Total Dose)<br />
Effects Testing of Semiconductor Devices”, Annual<br />
Book of ASTM Standards, American Society For<br />
Testing And Materials (ASTM), F 1892–98, Nov. 98.<br />
[9] “Application of Thermoluminescence-Dosimetry (TLD)<br />
Systems for Determining Absorbed Dose in Radiation-<br />
Hardness Testing of Electronic Devices”, Annual Book<br />
of ASTM Standards, American Society For Testing And<br />
Materials (ASTM), E668–97, 1997.<br />
[10] W. Dabrowski, et al. “Design and performance of the<br />
ABCD chip for the binary readout of silicon strip<br />
detectors in the ATLAS Semiconductor Tracker”, Proc.<br />
of 1999 IEEE Nuclear Science Symposium, Seattle,<br />
Washington, USA, Oct 1999.<br />
[11] D. Dorfan, et al. “Measurement of Dose Rate<br />
Dependence of Radiation Induced Damage to the<br />
Current Gain in Bipolar Transistors”, IEEE Trans. on<br />
Nuclear Science, Dec. 99, p. 1884.<br />
[12] E. Chesi, et al. “Performance of a 128 Channel Analogue<br />
Front-End Chip for Readout of Si Strip Detector<br />
Modules for LHC Experiments”, IEEE Trans. on<br />
Nuclear Science, Aug. 00, p. 1434.
Use of antifuse-FPGAs in the Track-Sorter-Master<br />
of the CMS Drift Tube Chambers<br />
R. Travaglini, G.M. Dallavalle, A. Montanari, F. Odorici, G.Torromeo, M.Zuffa<br />
Abstract<br />
The Track-Sorter-Master (TSM) is an element of the on-chamber trigger electronics of a Muon Barrel Drift Tube Chamber in the CMS detector. The TSM provides the chamber trigger output and gives access to the trigger electronic devices for monitoring and configuration.
The specific robustness requirements on the TSM are met with a partitioned architecture based on antifuse-FPGAs. These have been successfully tested with a 60 MeV proton beam: SEE and TID measurements are reported.
INFN and University of Bologna
v.le B.Pichat 6/2, 40137 Bologna, Italy
Riccardo.Travaglini@bo.infn.it
I. INTRODUCTION<br />
The trigger electronics of a Muon Barrel Drift Tube<br />
chamber [1] of the Compact Muon Solenoid (CMS) detector<br />
is a synchronous pipelined system partitioned in several<br />
processing stages, organized in a logical tree structure and<br />
implemented on custom devices (Fig. 1).<br />
The Track-Sorter-Master of the Trigger Server [2] is the<br />
system responsible for the trigger output from the chamber<br />
and for the trigger interface with the chamber control unit.<br />
Figure 1: CMS Drift Tube on-chamber trigger electronics overview. The upper right picture, named Server Board, is a block diagram representation of the Track-Sorter-Master system.
Since the TSM system is the bottleneck of the trigger electronics of a muon chamber, the principal requirement it has to fulfil is robustness; it should also be fast in order to minimize the trigger latency.
Moreover, the system must withstand the radiation dose expected for 10 years of running of the muon chambers in CMS at the Large Hadron Collider.
In the following we show that this can be achieved with a highly partitioned architecture that utilizes antifuse-FPGAs.
II. TRACK-SORTER-MASTER ARCHITECTURE AND DESIGN
A. Architecture
In order to meet the robustness requirement, the system is segmented in blocks with partially redundant functionality (Fig. 2). We favour an architecture where the TSM consists of three parts: a Selection (TSMS) block and two Data-multiplexing (TSMD) blocks (called TSMD0 and TSMD1, each covering half a chamber). The TSMS receives Preselect Words (PRW) carrying the information from the first stage of sorting performed by the Track Sorter Slave (TSS) units [2]. The TSMDs have as input the full TRACO data of the track segments selected by the TSSs.
Figure 2: Track-Sorter-Master block diagram. Architecture and<br />
I/O signals are shown.<br />
The TSM can be configured into two distinct processing modes:
• Default processing: the TSMS performs as a sorter while the TSMDs act as data multiplexers. The TSMS can select two tracks in TSMD0, two tracks in TSMD1, or one track from each.
• Back-up processing: the TSMS is inactive. Each TSMD performs as sorter and as multiplexer on data from a half chamber. Each TSMD outputs one track.
The Default processing provides the full performance and guarantees that dimuons are found with uniform efficiency along the chamber. In case of failure of one TSMD, the PRWs of the corresponding half chamber are disabled in the TSMS sorting, so that full efficiency is maintained in the remaining half chamber.
The Back-up processing mode is activated in case of TSMS failure. It guarantees full efficiency for single muons and for open dimuon pairs (one track in each half chamber).
B. Design<br />
In the hardware design the TSMS, TSMD0 and TSMD1 blocks are implemented as three distinct ICs. Each block has independent power lines. Three separate lines from the chamber Controller are used to provide enable signals (nPWRenSort, nPWRenD0 and nPWRenD1) for the power switches. When one IC is powered down, all I/O lines to the chip are also disconnected, via bus isolation switches driven by the same enable signals. For this purpose highly reliable switches with very large MTBF are used. Three independent power fault signals are generated and reported to the Controller when an overcurrent condition is detected in the corresponding power net.
The TSM processing configuration can be changed from the Controller by acting directly on the power enable signals. The TSMS also receives the power enable state of TSMD0 and of TSMD1, so it can change its processing mode to select two tracks from the same TSMD when the other TSMD is powered off. Similarly, each TSMD receives the power enable state of both the TSMS and the other TSMD, and it can switch to the back-up processing mode when the TSMS is not powered. The system can still run in the extreme scenario of only one functioning TSMD block, provided its connections are undamaged.
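The reconfiguration rules described above can be summarised in a short behavioural sketch (our own illustration, not the actual firmware; the signal names follow the text and are treated as active-low, as the `n` prefix suggests):

```python
def tsm_modes(nPWRenSort, nPWRenD0, nPWRenD1):
    """Derive the processing mode of each live TSM block from the
    active-low power-enable lines (False = powered on)."""
    sorter_on = not nPWRenSort
    d0_on = not nPWRenD0
    d1_on = not nPWRenD1
    modes = {}
    if sorter_on:
        # Default processing: the TSMS sorts; the PRWs of a powered-down
        # TSMD are disabled, so both tracks come from the surviving half.
        if d0_on and d1_on:
            modes["TSMS"] = "sorter(D0,D1)"
        elif d0_on:
            modes["TSMS"] = "sorter(D0 only)"
        elif d1_on:
            modes["TSMS"] = "sorter(D1 only)"
        else:
            modes["TSMS"] = "idle"
        if d0_on:
            modes["TSMD0"] = "multiplexer"
        if d1_on:
            modes["TSMD1"] = "multiplexer"
    else:
        # Back-up processing: each live TSMD sorts and multiplexes the
        # data of its own half chamber, outputting one track.
        if d0_on:
            modes["TSMD0"] = "sorter+multiplexer"
        if d1_on:
            modes["TSMD1"] = "sorter+multiplexer"
    return modes
```

Note how the extreme scenario quoted in the text (only one functioning TSMD) still yields a working configuration.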
The processing mode is selected via configuration<br />
registers in all three devices. Registers are also used for the<br />
set-up of the sorting and the fake-rejection algorithms. Access<br />
to the configuration registers is possible in two independent<br />
ways: through a serial JTAG net for boundary scan and<br />
through the DT parallel access bus with an ad-hoc protocol,<br />
hereafter called Parallel Interface (PI).
Figure 3(a) shows the JTAG net through the three ICs: the<br />
net can be configured to run only through the chips that are<br />
powered on, using isolation switches controlled via the power<br />
enable lines.<br />
Figure 3(b) shows how the PI bus is distributed through<br />
the TSM system; each IC has its own TSM address. The PI<br />
commands from the chamber Controller are forwarded to the<br />
other trigger boards in the same chamber (Fig. 1) through the<br />
TSMS. The TSMS gives access to only one trigger board in<br />
turn. In case of TSMS failure the trigger boards can still be<br />
configured via their individual JTAG nets. The PI utilises the<br />
same lines used for propagating the PRW data; the PRW bus<br />
is bi-directional.<br />
Figure 3: (a) TSM JTAG net; (b) TSM Parallel Interface net.
III. IMPLEMENTATION
A. Choice of technology
The most important aspect is the choice of technology for developing the TSMS and TSMD ICs. There is one TSM system in each DT chamber, that is a total of 250 TSMS and 500 TSMD ICs in the entire muon barrel detector of CMS [1]. This production volume is too limited to justify the risk of developing two ASICs.
The use of FPGAs has two advantages:<br />
• The same type of device can be used for both the<br />
TSMS and the TSMD, because the chosen
architecture requires a comparable number of pins<br />
for both ICs.<br />
• It leaves flexibility for fine-tuning of the sorting<br />
and ghost rejection algorithms.<br />
However, standard FPGAs are disfavoured because of their low level of radiation tolerance, which can easily result in erasure and uncontrolled corruption of the programmed logic.
A solution is offered by antifuse-FPGAs, also called pASICs (programmable ASICs). They are based on silicon antifuse technology: silicon logic modules in a high-density array are interconnected using 3 to 4 metal layers, with metal-to-metal amorphous silicon interconnect elements (the antifuses) embedded between the metal layers. The antifuses are normally open circuit and, when programmed, form a permanent low-impedance connection. Once programmed, the chip configuration is permanent, making the device effectively an ASIC.
The Actel A54SX32 [3] device was chosen.<br />
The small dimensions of the board constitute another design constraint. Therefore, we have built a full-functionality prototype board (Fig. 4) with the final dimensions (98×206 mm²). It has been possible to find a placement of the components that allows efficient routing and good high-frequency behaviour, using six signal layers and a standard 5 mil routing technology. This final prototype is under test.
Figure 4: PCB prototype for the Track-Sorter-Master. The larger<br />
chip on the right side is an A54SX32 programmed as TSMS. Places<br />
to host both TSMDs are visible.<br />
IV. IRRADIATION TEST
A. Program of tests
The Actel A54SX chips were chosen for building the TSM after a test of their radiation tolerance had been performed. Samples were exposed to the 59 MeV proton beam of the Cyclotron Research Centre (CRC) at the Université Catholique de Louvain (UCL), in Louvain-la-Neuve, Belgium, in October 2000. At this energy 10¹⁰ protons/cm² correspond to a dose of 1.4 krads.
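The quoted conversion factor lets one translate between delivered dose and proton fluence; a minimal sketch, using only the numbers stated above:

```python
# Conversion quoted in the text for 59 MeV protons:
# 1e10 protons/cm^2 correspond to 1.4 krad.
KRAD_PER_1E10 = 1.4

def fluence_to_krad(fluence):
    """Dose in krad for a given proton fluence (protons/cm^2)."""
    return fluence / 1e10 * KRAD_PER_1E10

def krad_to_fluence(krad):
    """Proton fluence (protons/cm^2) delivering a given dose in krad."""
    return krad / KRAD_PER_1E10 * 1e10
```

The 40 krad delivered per chip thus corresponds to roughly 2.9·10¹¹ protons/cm².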
B. Test Setup<br />
Four pASICs, each implementing a 450-bit register,<br />
refreshed and monitored at 1 MHz, have been irradiated up to<br />
40 krads/chip (one of them up to 70 krads). The register size<br />
of 450 bits is similar to that of registers in both the TSMS and<br />
TSMD chips.<br />
Figure 5 shows the set-up used for these tests. Pattern Unit<br />
(PU) [4] is a high-throughput VME board, acting as pattern<br />
generator and as read-out module.<br />
Figure 5: Irradiation test set-up.<br />
C. Results
There have been no failures and no latch-up.
The Total Ionizing Dose (TID) result is summarised in Figure 6: no significant increase in current is observed for doses well above the few krads expected for the CMS barrel muon chambers in 10 years of LHC operation [5].
We have observed one Single Event Upset. The event has<br />
been recorded and studied off-line. The 450 bit register dump<br />
shows that in the event about 1/3 of the flip-flops have<br />
changed state, with no obvious correlation in pattern. With the<br />
help of Actel CAD tools, we have inferred that most probably<br />
the internal clock distribution to the register cells has failed.<br />
Because of this, we quote a SEU cross-section<br />
measurement per chip instead of per bit. According to the
procedure established in [5], we can use this measurement for<br />
estimating the SEU rate in CMS.<br />
Figure 6: Actel A54SX32-3PQ208 TID test (Oct. 2000): supply current Icc (mA) vs. dose (krads) for the four chips, measured on both the 3.3 V and 5 V rails.
With one observed event and a total fluence of 1.4·10¹² protons/cm², we calculate a SEU cross-section upper limit (at 90% c.l.) of
σSEU
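The upper limit follows from Poisson statistics: with one observed event, the 90% c.l. upper limit on the mean is the μ solving e^(-μ)(1+μ) = 0.10, i.e. μ ≈ 3.89, which divided by the fluence gives the cross section. A sketch of the computation (the value it yields is our own estimate from the numbers quoted above; the published figure is cut off in this copy):

```python
import math

def poisson_upper_limit(n_obs, cl=0.90):
    """Upper limit on the Poisson mean: solve P(N <= n_obs | mu) = 1 - cl
    for mu by bisection."""
    target = 1.0 - cl
    def cdf(mu):
        return sum(math.exp(-mu) * mu**k / math.factorial(k)
                   for k in range(n_obs + 1))
    lo, hi = 0.0, 100.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if cdf(mid) > target:   # cdf decreases with mu
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu_ul = poisson_upper_limit(1)   # about 3.89 events at 90% c.l.
fluence = 1.4e12                 # protons/cm^2, from the text
sigma_ul = mu_ul / fluence       # cm^2 per chip
```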
Neutron Radiation Tolerance Tests of Optical and Opto-electronic Components for the<br />
CMS Muon Barrel Alignment<br />
Baksay, L. 1 Bencze, Gy. L. 3 Brunel, L. 4 Fenyvesi, A. 2 Molnár, J. 2 Molnár, L. 1 Novák, D. 5<br />
Pszota, G. 1 Raics, P. 1 and Szabó, Zs. 1<br />
1 Institute of Experimental Physics, Debrecen University, Debrecen, Hungary H-4001
2 Institute of Nuclear Research (ATOMKI), Debrecen, Hungary H-4001
(e-mail: jmolnar@atomki.hu)
3 Institute of Particle and Nuclear Physics, Budapest, Hungary H-1525
CERN, CH-1211 Geneva 23, Switzerland
4 Institute of Experimental Physics, Debrecen University, Debrecen, Hungary H-4001
CERN, CH-1211 Geneva 23, Switzerland
5 Royal Institute of Technology (KTH), SCFAB, S-106 91 Stockholm, Sweden
Abstract<br />
Neutron irradiation tests were performed with broad<br />
spectrum p(18MeV)+Be neutrons (En
The optical and opto-electronic components will have to<br />
work in a radiation environment, where the highest expected<br />
flux of the neutron component is about 1.0E+03 n/cm2/sec,<br />
and the estimated time of operation is 5.0E+10 sec.<br />
The total expected neutron fluence is 2.6E+12 n/cm2 and<br />
8.0E+13 n/cm2 for the Barrel Muon and ME1/1 chambers,<br />
respectively [1]. Radiation damage induced by neutrons can<br />
alter electrical and optical characteristics of the components<br />
and thus the accuracy of the whole BAM system.<br />
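As a consistency check, the integrated fluence at the highest-flux position is simply flux × operating time, using the values quoted above:

```python
flux = 1.0e3          # n/cm^2/s, highest expected neutron flux
t_operation = 5.0e10  # s, estimated time of operation
fluence = flux * t_operation  # integrated fluence in n/cm^2
```

The product, 5.0E+13 n/cm², is of the same order as the 8.0E+13 n/cm² quoted for the ME1/1 chambers; the Barrel Muon chambers sit at a lower local flux, hence the smaller 2.6E+12 n/cm².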
Our present paper addresses some key issues for the cost-effective use of COTS electronic components in radiation environments, enabling CMS Alignment system designers to manage risks and ensure final success [3,4].
II. EXPERIMENTAL TECHNIQUES<br />
A. Samples tested<br />
LED light source
Low-current, high-intensity, point-like LED light sources emitting at 660 nm were selected for the construction of the lighting panels of the BAM system (type FH1011, Stanley Electric Co. Ltd [5]).
LED driver
The A6775 circuit is intended for LED-display applications [6]. Each BiCMOS device includes an 8-bit CMOS shift register, accompanying data latches, and eight NPN constant-current drivers. The serial CMOS shift register and latches allow direct interfacing with a microprocessor system. The CMOS serial data output permits cascade connections in applications requiring additional drive lines. The LED drive current (max. 90 mA) is determined by the user's selection of a single resistor.
Microcontroller
The PIC16F84 is a high-performance, low-cost, fully static CMOS 8-bit microcontroller [7] with 1k×14 EEPROM program memory and 64 bytes of EEPROM data memory. The high performance of the PIC16F84 can be attributed to a number of architectural features commonly found in RISC microprocessors. The chip uses a Harvard architecture, in which program and data are accessed from separate memories. This improves bandwidth over the traditional von Neumann architecture, where program and data are fetched over the same bus. Separating program and data memory further allows instructions to be sized differently from the 8-bit wide data word. In the PIC16F84, opcodes are 14 bits wide, making it possible to have all single-word instructions. A 14-bit wide program memory access bus fetches a 14-bit instruction in a single cycle. A two-stage pipeline overlaps fetch and execution of instructions. Consequently, all instructions execute in a single cycle except for program branches. The PIC16F84 has four interrupt sources and an eight-level hardware stack. The peripherals include an 8-bit timer/counter with an 8-bit pre-scaler, 13 bi-directional I/O pins and a separate watchdog timer (WDT). The watchdog timer is realised as a free-running on-chip RC oscillator that does not require any external components, which means that the WDT will run even if the clock on the oscillator pins of the device has been stopped. A WDT timeout generates a device RESET condition. The high current drive of the I/O pins helps reduce the need for external drivers and therefore the system cost.
Video camera
The VM5402 is a complete video camera [8] based on the highly integrated VV5402 monochrome CMOS sensor chip [9]. The module is suitable for applications requiring a composite video signal with minimum external circuitry. The camera incorporates a 388×295 (12 µm × 12 µm) pixel image sensor and all necessary support circuits to generate a fully formatted composite video signal into a 75 Ohm load. Automatic control of exposure, gain and black level allows the use of a single fixed-aperture lens over a wide range of operating conditions. Automatic exposure control is achieved by varying the pixel current integration time according to the average light level on the sensor. This integration time can vary from one pixel clock period to one frame period. Pixels above a threshold white level are counted every frame, and the count at the end of the frame defines the image exposure. If the image is not correctly exposed, a new value for the integration time is calculated and applied to the next frame.
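The exposure algorithm described here (count the pixels above a white threshold each frame, then correct the integration time for the next frame) can be sketched as follows; the controller gain and target count are our own illustrative values, not taken from the sensor documentation:

```python
def update_integration_time(t_int, n_white, n_target,
                            t_min=1, t_max=25_000, gain=0.5):
    """One step of a camera-style auto-exposure loop.

    t_int    -- current integration time in pixel-clock periods
    n_white  -- pixels above the white threshold in the last frame
    n_target -- desired number of white pixels (illustrative set point)
    The result is clamped between one pixel clock (t_min) and one
    frame period (t_max), as in the VM5402 description.
    """
    if n_white == 0:
        # fully dark frame: open up the exposure aggressively
        new_t = t_int * 2
    else:
        # multiplicative correction toward the target count
        new_t = t_int * (n_target / n_white) ** gain
    return int(min(max(new_t, t_min), t_max))
```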
Optical lens
The lenses were plano-convex single lenses made of BK7 glass without coating. Their nominal focal length was 30.7 mm and their diameter was 10 mm.
B. Irradiation circumstances
Neutron irradiations were done at the neutron irradiation facility [10] of the MGC-20E cyclotron at ATOMKI, Debrecen, with the p(18MeV)+Be reaction. Neutrons with a broad spectrum (En
c) voltage ON/OFF alternating in a 1/19 ratio for testing of the LED light sources. The LED power rails were monitored using digital multimeters equipped with a serial communication interface that allows automatic measurement of the LED current. The nominal currents were checked after irradiation to determine whether any total-dose degradation had occurred. Before and after irradiation the optical properties (light yield, intensity distribution, wavelength of the emitted light) of the diodes were measured and evaluated with a commercial PC-based image analysing system.
In order to determine the radiation tolerance of the chip itself, the LED driver circuit was also investigated separately from the PIC microcontroller. For this purpose a special set-up was constructed, consisting of a Power-PC with a serial RS-232/I²C/RS-232 bus converter and an interface for measuring the current consumption of the circuit automatically. The current source outputs of the LED driver were terminated using radiation-tolerant resistors in place of the LEDs. Using special ON/OFF codes for updating the content of the serial 8-bit register inside the chip made it possible to detect bit errors by measuring the total current only.
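The trick of detecting bit errors from the supply current alone rests on the fact that, with a known ON/OFF code in the 8-bit register, the total output current is the per-channel current times the number of set bits; any deviation flags a corrupted register. A sketch, with an illustrative per-channel current (the device maximum is 90 mA):

```python
I_CHANNEL = 5.0  # mA per active output; illustrative value

def expected_current(code):
    """Total sink current for an 8-bit ON/OFF code in the shift register."""
    return bin(code & 0xFF).count("1") * I_CHANNEL

def register_corrupted(code_written, measured_ma, tol=0.5 * I_CHANNEL):
    """Flag a bit error when the measured current deviates by more than tol.

    Note: this detects any change in the number of set bits; a pairwise
    flip (one 0->1 plus one 1->0) leaves the current unchanged, which is
    why alternating test codes are cycled through the register."""
    return abs(measured_ma - expected_current(code_written)) > tol
```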
The most important electronic component of the Barrel Alignment Control system is the PIC16F84 microcontroller, which is an advanced, highly scaled, sensitive device. As it will work in a radiation environment, its radiation tolerance is one of the most crucial questions. The errors and Single Event Upsets (SEU) in this kind of system may not be observed, may cause data corruption, or may alter program flow depending on the location of the upset [12]. The consequence of these upsets depends on the criticality of the function performed by the system.
The SEU characterisation of the microcontroller was performed using a special test set-up. The test system, based on a PC equipped with all the necessary measuring and communication devices, was placed outside the irradiation area at a distance of 30 m from the devices under test. A special test code developed and run on the PC was able to communicate with the controller through the I²C bus, regularly sending special commands and the data associated with them. After a fixed irradiation time (1-10 minutes) the content of the registers of the PIC was read back and compared to the initial values. If the register content had been changed by the radiation, i.e. the expected and received values differed, the errors were registered with time stamps in a report file before the register was filled again for the next irradiation cycle.
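The read-back comparison described above amounts to the following loop (a sketch; `i2c_write`/`i2c_read` stand in for the actual bus interface and are hypothetical names):

```python
import time

def seu_test_cycle(i2c_write, i2c_read, reference, report):
    """One irradiation cycle of the register read-back test.

    i2c_write(addr, value) / i2c_read(addr) -- hypothetical I2C bus API
    reference -- dict: register address -> value written before the cycle
    report    -- writable report file for errors with time stamps
    """
    n_errors = 0
    for addr, expected in reference.items():
        got = i2c_read(addr)
        if got != expected:
            # register the error with a time stamp in the report file
            report.write(f"{time.time():.0f} reg 0x{addr:02X}: "
                         f"wrote 0x{expected:02X}, read 0x{got:02X}\n")
            n_errors += 1
    # fill the registers again for the next irradiation cycle
    for addr, value in reference.items():
        i2c_write(addr, value)
    return n_errors
```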
The watchdog timer, as one of the useful functions of the microcontroller, was used intensively during the irradiation as a basic indicator of general system failure. The supply current of the PIC was automatically measured and compared to the reference value. In case of current consumption higher than the default value, e.g. due to the Total Ionising Dose (TID) effect, the test set-up was able to interrupt the measurement by switching off the voltage.
The compact monochrome VM5402 video camera, together with the related circuits, was tested on-line during the full irradiation period. An automatic radiation damage monitoring system was developed and used for the characterisation of the radiation tolerance of the device. The system was based on a video monitor and a videotape recorder, in order to record the video signal for off-line coding and evaluation. In direct connection with the monitoring system, a PC equipped with a frame grabber card was used for capturing and digitising the video frames once every minute. The actual current consumption of the camera was measured in order not to exceed the TID threshold of the device. During the camera irradiation measurements, different modes of operation were investigated, similarly to the LED tests.
D. Optical set-up and measurement
The focal length (and thus, indirectly, the refractive index) and the spectral transmission characteristics of the lenses were measured before and after irradiation. A He-Ne laser was used for the focal length measurements. A high-pressure xenon arc lamp was employed as the light source for the spectral transmission measurements. A calibrated Si-UV detector was used to measure the spectra.
III. RESULTS AND DISCUSSIONS<br />
A. Electronic measurements<br />
Low current high intensity point-like LED light sources<br />
emitting at 660 nm were irradiated up to 2.6E+12 n/cm2.<br />
Three modes of operation were studied: a) voltage ON<br />
permanently, b) voltage OFF permanently and c) voltage ON<br />
for 1 sec and OFF for 19 sec. For all of these modes of<br />
operation, the light yield decreased almost linearly as a<br />
function of the neutron fluence and approximately 50 %<br />
decrease was observed at the end of the irradiation. No other<br />
change in the electrical and spectral characteristics was<br />
measurable (Figure 2).<br />
Figure 2. Light yield of the LED (P/P0) vs. neutron fluence (in units of 10¹² cm⁻²)
LED current driver and controller electronics with<br />
Microchip PIC16F84 microcontroller were irradiated up to<br />
8.0E+13 n/cm2. Some 20 % loss of the output currents of the<br />
LED controllers was observed at the end of the irradiation<br />
(Figure 3).
Figure 3. Current (mA) of the LED driver vs. neutron fluence (cm⁻²)
The degradation of the current drivers was negligible below 1.0E+11 n/cm2 (the expected fluence at the position of operation of the device). Two microcontrollers were studied. Both became damaged only after a neutron fluence of ~2.0E+13 n/cm2 had been delivered to them, as indicated by the dramatically increased current consumption of the electronics (Figure 4).
Figure 4. Current consumption (mA) of the LED driver & controller electronics vs. neutron fluence (cm⁻²), for the currents I0, I0-I1 and I0-I2
VM5402 video cameras with the VV5402 CMOS sensor device were irradiated with a fluence up to 2.8E+12 n/cm2. The radiation damage of the sensor altered the nearly Gaussian distribution of the light sensitivity of the individual pixels in all modes of operation. The mean values decreased while the sigma values increased in all three modes: a) voltage ON permanently, b) voltage OFF permanently and c) voltage ON for 1 sec and OFF for 19 sec (see Table 1).
Table 1. Homogeneity of the video sensor (parameters of the nearly Gaussian distribution of the sensitivity of the individual pixels)

                            DC permanently ON   DC periodically ON (5 % in total)   DC permanently OFF
Before irradiation   Mean        230.18                 211.85                           238.09
                     Sigma         1.20                   1.31                             0.81
After irradiation    Mean        153.22                 158.45                           210.39
                     Sigma         2.58                   2.06                             1.51
After/Before         Mean        66.6 %                 74.8 %                           88.4 %
The observed nonlinearity of the output signal vs. light<br />
intensity was not radiation-dependent. Apart from the general<br />
sensitivity loss, the spectral sensitivity of the sensor did not<br />
change (Figure 5).<br />
Figure 5. Typical picture of the video camera at the beginning of the neutron test. Tracks of recoils could be observed frequently.
B. Optical measurements<br />
Plano-convex single optical lenses were irradiated up to<br />
8.0E+13 n/cm2. They were made of BK7 glass without<br />
coating and their diameter was 10 mm. No measurable change<br />
of the spectral transmission and the refraction (focal length)<br />
was observed.<br />
IV. CONCLUSIONS<br />
Neutron radiation tolerance of COTS optical and optoelectronic<br />
components to be used in the CMS Muon Barrel<br />
Alignment system was studied with broad spectrum<br />
p(18MeV)+Be neutrons (En
Radiation test and application of FPGAs in the Atlas Level 1 Trigger.<br />
Abstract
The use of SRAM-based FPGAs can provide the benefits of re-programmability, in-system programming, low cost and a fast design cycle.
Single event upsets (SEU) in the configuration SRAM due to radiation change the design's function, restricting their use in the LHC environment to areas with a low hadron rate.
Since we expect in the Atlas muon barrel an integrated dose of 300 Rads and 5.65·10⁹ hadrons/cm² in 10 years, it becomes possible to use these devices in the commercial version. SEU errors can be corrected online by reading back the internal configuration and, when necessary, by fast re-programming.
In the frame of the Atlas Level-1 muon trigger we measured for Xilinx Virtex devices and the configuration FlashPROM:
• The Total Ionizing Dose (TID) needed to destroy the devices;
• The Single Event Upset (SEU) cross section for logic and program cells;
• An upper limit for Latch-Up (LU) events.
With the expected SEU rate calculated for our environment, we found a solution to correct the errors online.
V. Bocci (1), M. Carletti (2), G. Chiodi (1), E. Gennari (1), E. Petrolo (1), A. Salamon (1), R. Vari (1), S. Veneziano (1)
(1) INFN Roma, Dept. of Physics, Università degli Studi di Roma “La Sapienza”, p.le Aldo Moro 2, 00185 Rome, Italy
(2) INFN Laboratori Nazionali Frascati, Via Enrico Fermi 40, Frascati (Roma)

System Description
The Atlas level-1 muon trigger [1],[7] is based on dedicated, fast and finely segmented muon detectors (RPC). The system is segmented in 832 trigger and readout modules (PAD) and 832 splitter modules used to fan out the FE signals located in the RPC zones.
The main components of the PAD are:
• The four coincidence matrix chips (CM)
• The Pad Logic chip (PL)
• The fieldbus interface based on the CANbus ELMB
• The optical link.
The CM chip selects muons with a predefined transverse momentum using fast coincidences between strips of different planes.
The data from two adjacent CM in the η projection and the data from the two corresponding CM chips in the φ projection are combined in the Pad Logic (PL) chip. After measuring the characteristics of FPGA devices in a radiation environment, we decided to use an FPGA for the Pad Logic chip. The Pad Logic chip covers a region ∆η×∆φ = 0.2×0.2 and associates muon candidates with a region ∆η×∆φ = 0.1×0.1 (RoI). It selects the highest triggered track in the Pad, solves overlaps inside the Pad and performs the readout of the CM matrix data.

Figure 1. RPC location in the Atlas experiment.
I. RADIATION ENVIRONMENT
The radiation dose accumulated on the muon spectrometer depends on the zone. The simulated radiation levels [2] for ten years of operation of the Atlas muon detectors, for the various RPC chambers and without safety factors, are given in Table 1.

Table 1: Simulated radiation environment in ten years of operation

        SRLtid          SRLsee
        (Gy 10y⁻¹)      (>20 MeV h cm⁻² 10y⁻¹)
BMF     3.02E+00        4.69E+09
BML     3.04E+00        5.65E+09
BMS     3.03E+00        4.73E+09
BOF     1.19E+00        4.08E+09
BOL     1.33E+00        4.21E+09
BOS     1.26E+00        4.10E+09

The simulated maximum value over 10 years of operation for the TID (Total Ionizing Dose) is 3.04 Gy (304 Rad), with a total flux of 5.65·10⁹ hadrons/cm².
II. XILINX VIRTEX AND FLASHPROM ARCHITECTURE.<br />
The Xilinx Virtex devices [3] have a regular architecture<br />
that comprises an array of configurable logic blocks (CLBs)<br />
surrounded by programmable input/output blocks (IOBs).<br />
CLBs interconnect through a general routing matrix<br />
(GRM). The GRM comprises an array of routing switches<br />
located at the intersections of horizontal and vertical routing<br />
channels. The VersaRing I/O interface provides additional<br />
routing resources around the periphery of the device. This<br />
routing improves I/O routability and facilitates pin locking.<br />
The configuration of each CLB and IOB and the interconnections between the different elements are programmed using an underlying array of SRAM cells. The Virtex devices are customized by loading configuration data into these internal SRAM cells. The number of configuration cells exceeds the number of logic flip-flops inside the CLBs and IOBs by one order of magnitude.
In the master and selectmap mode it is possible to program<br />
the Virtex using an external nonvolatile memory with<br />
programmed inside the custom built design.<br />
The 18V02 memory devices use a CMOS Flash process for the memory cell. The Flash process leaves open the possibility of reprogramming the device, and the cells appear to be resistant to SEU and TID. The high data bandwidth between the Flashprom and the Virtex device makes it possible to reprogram the FPGA in a few milliseconds.<br />
III. SEE TEST AT THE CYCLOTRON OF<br />
LOUVAIN-LA-NEUVE<br />
A. Measurement of the logic flip-flop hadron cross section.<br />
The XCV200 Xilinx Virtex FPGA and the 18V02 Flashprom [4] were irradiated with 60 MeV protons at the CYClotron of LOuvain-la-NEuve (CYCLONE) of the Université Catholique de Louvain, in Belgium. To perform the irradiation, a special prototype board containing an XCV200 and an 18V02 Flashprom was used (Figure 3).<br />
The main purpose was to study SEE effects on the logic flip-flops, in the configuration area and in the flash memory.<br />
Figure 3. XCV200 Bga352 prototype<br />
board.<br />
The Virtex was programmed (Figure 4) with a 2048-bit circular shift register, loaded at reset with a 1010…10 pattern.<br />
A very small part of the logic was dedicated to correcting SEU errors and to detecting such events.<br />
The circular shift circuit is very sensitive to SEUs in the program area: any break in the flip-flop chain stops its regular function.<br />
Figure 4. The circular shift register circuit used to determine the logic flip-flop cross section.<br />
Two devices were programmed with this circuit and clocked at 40 MHz. After an exposure to 6.14×10¹⁰ protons/cm² we observed five SEU events like Figure 5 and 23 events like Figure 6.<br />
Figure 5. SEU events in logic flip-flops: zero-to-one and one-to-zero transitions.<br />
Figure 6. The circuit stops after a SEU in the program area.<br />
The signature of the events in Figure 5 is typical of a logic flip-flop SEU, whereas the Figure 6 events show a stop of the normal behaviour of the circuit caused by an error in the Virtex program. From this test the logic flip-flop cross section per bit is 3.98×10⁻¹⁴ cm².<br />
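The quoted value follows from the standard fluence-normalised estimate σ = N / (Φ · N_bits). A minimal sketch, using only the numbers quoted above (the helper name is ours):

```python
# Per-bit SEU cross section from a counting experiment:
# sigma = N_events / (fluence * number_of_sensitive_bits)
def xsection_per_bit(n_events, fluence_cm2, n_bits):
    """Return the SEU cross section per bit in cm^2."""
    return n_events / (fluence_cm2 * n_bits)

# Shift-register test: 5 flip-flop upsets after 6.14e10 protons/cm^2
# on a 2048-bit circular register.
sigma = xsection_per_bit(5, 6.14e10, 2048)
print(f"{sigma:.2e} cm^2")  # 3.98e-14 cm^2, as quoted
```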
No Latch-up events were observed.<br />
B. Measurement of SEUs in the Flashprom.<br />
A total fluence of 8×10¹¹ protons/cm² was divided among four 18V02 2 Mbit Flashprom devices. No SEU was observed, with a limit on the cross section per bit of < 6×10⁻¹⁹ cm².<br />
At about 2×10¹¹ protons/cm² (corresponding to a total dose of 28 Krad for 60 MeV protons in silicon) the programming feature stopped working.<br />
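With zero events observed, the quoted limit is the simple one-event bound σ < 1 / (Φ_total · N_bits), where the fluence is summed over the four devices and N_bits is the 2 Mbit content of one device (each bit sees only its own device's share of the fluence, so the bit-exposures sum to Φ_total × N_bits). A sketch, assuming 2 Mbit = 2×2²⁰ bits; a stricter 95% CL Poisson bound would use 3 in the numerator instead of 1:

```python
# Zero-event upper limit on the per-bit SEU cross section:
# sigma < 1 / (total_fluence * bits_per_device)
fluence_total = 8e11          # protons/cm^2, summed over the four 18V02s
bits_per_device = 2 * 2**20   # 2 Mbit per device (assumed binary Mbit)
limit = 1.0 / (fluence_total * bits_per_device)
print(f"sigma/bit < {limit:.0e} cm^2")  # < 6e-19 cm^2, matching the text
```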
C. Measurement of SEUs in the Virtex program memory.<br />
Two devices were programmed with the circular shift register and irradiated with the 60 MeV proton beam.<br />
We performed a read-back of the device using the fast SelectMAP mode and compared the read-back stream with the original one, masking the meaningless bits as specified in the Xilinx documentation [5].<br />
[Figure 7 plot: integrated protons/cm² (0 to 1.8×10¹¹) versus number of wrong bits (0 to 2000); legend: end_run data, data during 1st run, linear fit. The linear fit gives y = 8×10⁷x with R² = 0.9991.]<br />
Figure 7. Total fluence vs numbers of bits corrupted.<br />
For each run we accumulated thousands of bit errors to have enough statistics. The results of the various runs are shown in Figure 7.<br />
The results are compatible with a cross section of 1.25×10⁻⁸ cm² per device, corresponding to a cross section per bit of 1.25×10⁻¹⁴ cm².<br />
A total fluence of 5.44×10¹¹ protons/cm² was divided between two devices. We collected one event with an architectural break (wrong response from the read-back engine); the error was recovered after a reset.<br />
No latch-up was observed.<br />
IV. TID (TOTAL IONIZING DOSE) TEST WITH A ⁶⁰CO GAMMA RAY SOURCE.<br />
For the total ionizing dose test we used the ⁶⁰Co source of the Istituto Superiore di Sanità in Rome.<br />
The source gives a dose rate of 380 Rad/min.<br />
A. TID effects in Virtex FPGA<br />
We tested three devices: the first and the second were loaded with the circular shift register, while the third one was used to test the Xilinx without a loaded configuration, during communication with the JTAG TAP (Table 2).<br />
Tables 2, 3 and 4 show the data log of the currents sunk by the devices.<br />
The first device (xilinx1) worked correctly up to 73 Krad, including a reconfiguration and read-back; the circuit continued to work up to 83 Krad, but at this value it was impossible to reprogram the device. We noted a strong increase of the current: 150 mA instead of the initial 40 mA.<br />
The second device (xilinx2) worked correctly up to 65 Krad, but we noted a factor of two in the sunk current: 80 mA instead of 40 mA. The device stopped working at 72 Krad with the same behaviour as xilinx1.<br />
For xilinx3 we monitored only the current of the device during communication with the JTAG interface, without configuring the device.<br />
This current was stable up to 92 Krad and then started to increase slowly; at 112 Krad it was impossible to communicate with the JTAG machine and the device stopped working.<br />
Table 3.<br />
Table 2.<br />
All the devices worked without any problem up to 60 Krad. The ATLAS requirement for the RPC zone is 4.2 Krad, which includes a safety factor of 20. The devices therefore meet the requirement with a comfortable margin.<br />
B. TID effects in the 18V02 Flashprom.<br />
Two 18V02 Flashproms were tested.<br />
The behaviour of the two devices was very similar and is shown in Figure 8.<br />
The current sunk by the device starts to increase at 20 Krad; at 33 Krad it was impossible to reprogram the device. Also in this case the device meets the ATLAS requirements.<br />
The device stops working at a total dose of 33 Krad; this value is consistent with our proton measurements (28 Krad).<br />
Figure 8. Current vs total dose for an 18V02 Flashprom.<br />
V. ANNEALING AFTER IRRADIATION WITH 60 CO<br />
GAMMA RAY SOURCE.<br />
After the irradiation we put all the devices in an oven at 100 °C and logged the current sunk by each device.<br />
After 12 hours of annealing the Xilinx restarted working correctly; we noted a big jump in the current, exactly reversing our TID measurements (Figure 9).<br />
Figure 9. Current sunk by the XCV200 during the annealing.<br />
The Flashprom restarted working after a few hours and after one day returned to the normal current (Figure 10).<br />
Figure 10. Current sunk by the 18V02 Flashprom during the annealing.<br />
All the devices worked well after the annealing, and the process seems to cancel all effects of TID.
VI. THE FPGA SUBSYSTEM.<br />
Following the results of these tests, we decided to implement the Pad Logic using a subsystem based on a Virtex FPGA, two Flashproms and a microcontroller that is used to download and read back the Virtex configuration.<br />
The system is checked by a simple task running in the ELMB CANbus microcontroller [6], capable of accessing these devices via the ISP and JTAG buses (Figure 11).<br />
Figure 11. FPGA subsystem to recover SEU in<br />
program area.<br />
The system continuously reads back, frame by frame, the configuration inside the Xilinx using JTAG and checks the consistency of each frame against a precalculated CRC value stored in the SPI Flashrom (Figure 14). In case of error the microcontroller rewrites part of the configuration, correcting the wrong frame, or reloads the entire configuration.<br />
Figure 14. Flow chart of the FPGA initialisation and check process.<br />
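The check loop can be sketched as follows. This is an illustration of the scheme, not the ELMB firmware: `read_frame`, `write_frame` and the reference-CRC table are hypothetical stand-ins for the JTAG readback, partial-reconfiguration and SPI-Flashrom-lookup primitives, and CRC-32 stands in for whatever checksum the real system precomputes.

```python
import zlib

def scrub_once(read_frame, write_frame, golden_frames, ref_crcs, max_fix=4):
    """One scrubbing pass: read back every configuration frame, compare its
    CRC with the stored reference, rewrite corrupted frames, and fall back
    to a full reload if too many frames are bad."""
    bad = [i for i, ref in enumerate(ref_crcs)
           if zlib.crc32(read_frame(i)) != ref]
    if len(bad) > max_fix:
        return "reload_all"               # reload the entire configuration
    for i in bad:
        write_frame(i, golden_frames[i])  # rewrite only the corrupted frame
    return bad                            # indices that were corrected
```

With in-memory byte strings standing in for the device, a bit flip in one frame is detected and rewritten on the next pass, mirroring the flow chart of Figure 14.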
In the ATLAS radiation environment, with Xsect = 1.25×10⁻⁸ cm² and a fluence of 5.65×10⁹ hadrons/cm² per 10 years, we expect 6.25 SEUs in one year.<br />
Applying the ATLAS safety factors, SFsim = 5 for the simulation uncertainty and SFlot = 4 for the chip lot uncertainty, we obtain 6.25×5(SFsim)×4(SFlot) = 125 SEUs in one year.<br />
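The arithmetic can be reproduced directly. Note that the quoted 6.25 SEU/year corresponds to a fluence of 5×10⁹ hadrons/cm² per 10 years; we assume the authors rounded the Table 1 figure, and use that rounded value below so the printed numbers match the text:

```python
def seu_per_year(xsect_cm2, fluence_10y, sf_sim=1.0, sf_lot=1.0):
    """Expected SEUs per year from a per-device cross section and the
    integrated 10-year hadron fluence, with optional ATLAS safety factors."""
    return xsect_cm2 * fluence_10y / 10.0 * sf_sim * sf_lot

print(seu_per_year(1.25e-8, 5e9))                      # ~6.25 SEUs/year
print(seu_per_year(1.25e-8, 5e9, sf_sim=5, sf_lot=4))  # ~125 with safety factors
```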
VII. CONCLUSIONS<br />
The XCV200 Xilinx Virtex FPGA and the 18V02 Xilinx Flashprom were irradiated with protons and gamma rays. The SEU logic cross section is similar to that of other devices in 0.25 µm technology.<br />
For the Xilinx Virtex XCV200 the measured logic cross section per bit is 3.98×10⁻¹⁴ cm² and the measured configuration cross section per device is 1.25×10⁻⁸ cm².<br />
The 18V02 Flashprom cross section per bit is < 6×10⁻¹⁹ cm².<br />
The SEUs coming from the configuration memory worsen the problem by one order of magnitude with respect to a pure ASIC design. The TID tolerance exceeds the ATLAS LVL1 maximum requirement of 4.2 Krad: all the tested XCV200 devices worked without problems up to 60 Krad.<br />
The 18V02 Flashprom programming feature works up to 30 Krad and the device still works at 100 Krad.<br />
The immunity of the Flashprom technology to SEU can be exploited for fast on-board reprogramming of the Xilinx configuration. The availability of CPU power in the DCS node can be used to continuously check the Xilinx program.<br />
References:<br />
[1] Muon Spectrometer Technical Design Report.<br />
1997 CERN/LHCC/97-22 ATLAS TDR 10.<br />
[2] M. Shupe<br />
Simulated Radiation Levels (SRL), version 18 Nov 2000.<br />
http://atlas.web.cern.ch/Atlas/GROUPS/FRONTEND/W<br />
WW/RAD/RadWebPage/RadConstraint/Radiation_Table<br />
s_181100.pdf<br />
(this document cancels and replaces the SRL tables given<br />
in Appendix 1<br />
of the ATLAS doc. ATC-TE-QA-0001, dated 21 July 00)<br />
[3] Virtex 2.5V Xilinx Datasheet<br />
DS003-1 (v2.5 ) April 2, 2001<br />
Xilinx.<br />
[4] XC18V00 Series of In-System Programmable<br />
Configuration PROMs.<br />
DS026 (v2.8) June 11, 2001.<br />
Xilinx Datasheet.<br />
[5] Virtex FPGA Series Configuration and Readback.<br />
XAPP138 (v2.4) July 25, 2001<br />
Xilinx application note.<br />
[6] B.Hallgren et al.<br />
The Embedded Local Monitor Board (ELMB)<br />
in the LHC Front-end I/O Control System.<br />
LEB 2001 Stockholm.<br />
[7] V.Bocci et al.<br />
Prototype Slice of Level-1 Muon Trigger Barrel Region<br />
of the ATLAS Experiment.<br />
LEB2001 Stockholm
A Radiation Tolerant Gigabit Serializer for LHC Data Transmission *<br />
P. Moreira 1 , G. Cervelli, J. Christiansen, F. Faccio, A. Kluge,<br />
A. Marchioro and T. Toifl 2<br />
Abstract<br />
In the future LHC experiments, some data acquisition and<br />
trigger links will be based on Gbit/s optical fiber networks. In<br />
this paper, a configurable radiation tolerant Gbit/s serializer<br />
(GOL) is presented that addresses the requirements of high-energy physics experiments. The device can operate in four<br />
different modes that are a combination of two transmission<br />
protocols and two data rates (0.8 Gbit/s and 1.6 Gbit/s). The<br />
ASIC may be used as the transmitter in optical links that,<br />
otherwise, use only commercial components. The data<br />
encoding schemes supported are the CIMT (G-Link) and the<br />
8B/10B (Gbit-Ethernet & Fiber Channel). To guarantee<br />
robustness against total dose irradiation effects over the<br />
lifetime of the experiments, the IC was fabricated in a<br />
standard 0.25 µm CMOS technology employing radiation<br />
tolerant layout practices.<br />
The device was exposed to different irradiation sources to test its sensitivity to total dose effects and to single event upsets. For these tests, a comparison is made with a commercial serializer.<br />
I. INTRODUCTION<br />
The high bunch crossing rate (40MHz) of particles<br />
together with the large number of channels in the LHC<br />
detectors will generate massive amounts of data that will be<br />
transmitted out of the different sub-detectors for storage and<br />
off-line data analysis. Trigger links will also transmit large amounts of data to the trigger processors. The latter type requires low data latency and operation synchronous with the LHC master clock. Low latency reduces the amount of storage memory needed inside the detectors, while synchronous operation avoids complex synchronization procedures for the data arriving at the trigger processors from<br />
the different locations in the detectors. Economic<br />
considerations as well as power budget, material budget and<br />
physical space impose the use of high-speed links for data transmission. Consequently, optical links operating in the Gbit/s range were chosen for these applications.<br />
CERN, 1211 Geneva 23, Switzerland<br />
J. P. Cachemiche and M. Menouni, CPPM, Marseille, France<br />
Modern day commercial components meet (or exceed) the<br />
needs existing in the High Energy Physics (HEP)<br />
environment. However, for the applications mentioned above,<br />
the transmitters will be located inside the particle detectors and will be subject to high doses of ionizing radiation during the lifetime of the experiments. In general, Commercial-Off-The-Shelf (COTS) devices are not designed to withstand<br />
irradiation. If the large number (~ 100K) of links planned for<br />
LHC is taken into account, the few radiation-hardened devices<br />
that exist on the market have prohibitively high prices. It was<br />
thus considered necessary to develop a dedicated solution that<br />
would meet the very special HEP requirements. Since only<br />
the transmitters will be subject to irradiation, only they need<br />
to be developed and qualified for radiation tolerance.<br />
Adopting commercial components for all the other parts in the<br />
chain reduces the development and maintenance costs.<br />
Following this line of reasoning, a transmitter ASIC was<br />
developed that is capable of operating with two of the most<br />
common data transmission protocols. Several features of the<br />
device are configurable so that different user requirements can<br />
be accommodated. It was designed using radiation tolerant<br />
layout practices that, when applied to CMOS sub-µm circuits,<br />
guarantee tolerance to irradiation effects to the levels<br />
necessary for the LHC experiments [1] and [2].<br />
In this work, emphasis will be put on the radiation effects<br />
on the circuit operation. A COTS serializer has also been<br />
irradiated. The results of these tests will be discussed.<br />
II. THE GOL ASIC ARCHITECTURE<br />
The basic principles of the serializer operation have<br />
already been discussed in previous publications [3] and [4]<br />
and consequently will not be repeated here. Only a brief<br />
discussion of the IC architecture will be made since it is<br />
relevant for the understanding of the irradiation tests.<br />
* This work has been supported by the European Community Access to Research Infrastructure action of the Improving Human Potential Programme, contract N. HPRI-CT-1999-00110<br />
1 Email: Paulo.Moreira@cern.ch<br />
2 Now with IBM Research, Zurich, Switzerland<br />
As shown in Figure 1 the ASIC “Data Interface” is composed of a 32-bit bus and two data flow control lines (“dav” and “cav”). Depending on the transmission mode the<br />
data bus operates either as a 16 (least significant bits only) or<br />
as a 32-bit bus. Since the operation is synchronous with the<br />
LHC clock (running at 40 MHz), these two modes result in<br />
data bandwidths of 640 Mbit/s and 1.28 Gbit/s respectively.<br />
Before serialization, data undergo encoding using either the<br />
8B/10B [5] or the “Conditional-Invert Master Transition”<br />
(CIMT) [6] line coding schemes. The encoding procedures<br />
introduce two additional bits for each eight bits of data<br />
transmitted resulting in serial data rates of 800 Mbit/s or<br />
1.6 Gbit/s. Any combination of line coding and data rate can<br />
be used. If CIMT encoding is employed, a G-Link receiver [7] is required, while if 8B/10B coding is performed, either a Gbit Ethernet [8] or a Fibre Channel receiver can be used, provided it is compatible with the data rates being generated.<br />
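The rate bookkeeping above is simple enough to verify in a few lines (a sketch; the nominal 40 MHz clock is used rather than the exact 40.08 MHz figure given later):

```python
LHC_CLOCK_MHZ = 40  # nominal; the actual machine clock is 40.08 MHz

def serial_rate_mbps(bus_width_bits):
    """Payload rate is clock x bus width; both CIMT and 8B/10B add
    2 coding bits per 8 data bits, i.e. a 10/8 line overhead."""
    payload = LHC_CLOCK_MHZ * bus_width_bits  # Mbit/s of user data
    return payload * 10 / 8                   # serial line rate after coding

print(serial_rate_mbps(16))  # 800.0  Mbit/s (16-bit mode, 640 Mbit/s payload)
print(serial_rate_mbps(32))  # 1600.0 Mbit/s (32-bit mode, 1.28 Gbit/s payload)
```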
[Figure 1 block diagram: the Data Interface receives D(31:0) (16/32-bit), dav, cav and the LHC clock; a Config block (I2C, JTAG) drives the Control & Status Registers; a PLL & Clock Generator times the CIMT Encoder (16b in, 20b out) and the 8B/10B Encoder (10b out); a Word Multiplexer feeds the Serializer, which drives the Laser Driver and the 50Ω Line Driver (out+, out-).]<br />
Figure 1: IC architecture<br />
For operation in the 32-bit mode, the “Data Interface”<br />
performs time division multiplexing of the 32-bit input words<br />
into two 16-bit words at a rate of 80 Mwords/s. No<br />
multiplexing is done at this stage if the 16-bit mode is used.<br />
During encoding, the 16-bit data words are transformed into<br />
20-bit words that are further time-division multiplexed into<br />
two 10-bit words by the “Word Multiplexer” before they are<br />
fed to the “Serializer”. The “Serializer” converts them into the<br />
final serial data stream and drives both the “Laser Driver” and<br />
the “50Ω Line Driver”. The use of the output drivers is<br />
mutually exclusive.<br />
The several clock frequencies necessary to run the<br />
different circuits of the serializer are internally generated by a<br />
clock multiplying PLL that uses as a reference the LHC<br />
master clock signal (40.08 MHz).<br />
Due to radiation effects, it is expected that the threshold<br />
current of the laser diodes will increase with time over the<br />
lifetime of the experiments [9]. To compensate for this, the<br />
laser-driver contains an internal bias current generator that<br />
can be programmed to sink currents between 0 and 55 mA.<br />
Programming the ASIC can be done using either an I2C [10] or a JTAG [11] interface. Although these two interfaces are present in the ASIC, they are not essential for its operation; they have been added to allow additional flexibility in the use of the serializer. The main modes of operation are configurable by external hard-wired pins, allowing the ASIC to work standalone.<br />
III. EXPERIMENTAL RESULTS<br />
The ASIC was tested using the test setup shown in Figure 2. The transmitter card is composed of a reference clock generator (40 MHz), an FPGA that generates the test data to be fed to the GOL serializer, an optical transmitter, and a laser diode. The same card, equipped differently, was used to test<br />
both the CIMT and the 8B/10B modes of operation. The<br />
optical transmitter used was the Infineon V23818-K305-V15<br />
for 800 Mbit/s operation in the G-Link mode and the Stratos<br />
MLC-25-8X-TL for 1.6 Gbit/s operation using 8B/10B<br />
encoding.<br />
[Figure 2 block diagram: the Transmitter Board contains a Clock Generator, an FPGA (data and control), the GOL Serializer with its I2C/JTAG configuration, and an Optical Transmitter; an optical fibre connects it to the Receiver Board, which contains an Optical Receiver, a G-Link or Gbit-Ethernet Deserializer, an FPGA (data and control) and a Clock Generator.]<br />
Figure 2: Test setup block diagram<br />
The receiver board contains a reference clock generator,<br />
an optical receiver, a de-serializer and an FPGA whose<br />
function is the detection and counting of errors present in the<br />
input data stream. The 8B/10B and the G-Link modes of<br />
operation required the use of two different receiver boards but<br />
their operation principle is the same. The two boards are<br />
equipped with parallel ports that allow them to be connected<br />
to either a computer or a logic analyzer for monitoring and<br />
analysis of errors. One of the boards was set up as a G-Link receiver operating at 800 Mbit/s; it is based on the Agilent HDMP-1024 de-serializer. The other was set up as an 8B/10B receiver operating at 1.6 Gbit/s, using the Texas Instruments TLK2501 de-serializer.<br />
This test setup was used to perform both the evaluation tests and the irradiation tests (total dose and single event effects).<br />
A. Evaluation Tests<br />
All the ASIC functions proved to be operational. However, the ASIC laser driver displays levels of jitter incompatible with the data rates being transmitted (mainly at 1.6 Gbit/s). Because of that, all the Bit Error Rate (BER) tests reported here refer to data transmission using an external laser driver driven by the ASIC 50Ω line driver outputs.<br />
An error-free, four-day-long BER test was done in the G-Link mode at 800 Mbit/s. In addition, a 13-hour error-free data transmission test was made in the 8B/10B mode at 1.6 Gbit/s. The chip displays a power consumption of 300 mW and 400 mW at 800 Mbit/s and 1.6 Gbit/s, respectively (the power consumption includes a laser-diode bias current of 26 mA). Both the JTAG and the I2C interfaces proved fully functional.<br />
B. Total Dose Irradiation Tests<br />
The ASIC was irradiated with X-rays (10 keV peak) in a<br />
single step to a total dose of 10 Mrad (SiO2) at a dose rate of<br />
10.06 Krad (SiO2)/min. A BER test was performed after<br />
irradiation for 72 hours and no errors were observed. The data<br />
transmission test was done using the G-Link mode at<br />
800 Mbit/s. The power consumption remained the same after<br />
irradiation.<br />
C. Single Event Effects Tests<br />
The ASIC was irradiated using heavy ions and protons at the Cyclotron Research Centre (CRC) of UCL Louvain-la-Neuve, to test its sensitivity to single event effects. The irradiation tests consisted of irradiating the IC during normal operation while monitoring the transmitted data for errors (BER test). The tests were performed in all cases at room temperature.<br />
1) Proton Test<br />
Two BER tests were made while irradiating the ASIC with<br />
60 MeV protons. The tests were done for the G-Link and the<br />
8B/10B modes of operation at 800 Mbit/s and 1.6 Gbit/s data<br />
rates, respectively. Table 1 summarizes the experimental<br />
conditions and results. No data transmission errors or PLL<br />
losses of lock were observed during the experiment leading to<br />
the limit cross sections of
3) Commercial Serializer<br />
Using a transmitter board similar in functionality to the one described above for the GOL transmitter, two samples of the Texas Instruments TLK2501 transceiver were subjected to 60 MeV proton irradiation.<br />
For this test, the proton flux was fixed at 3.5×10⁸ p/(cm²·s). At a total proton fluence of 1.0×10¹² p/cm², 8 events were observed for the first device tested and 11 upsets for the second device. Among those 19 events, 11 were found to correspond to single word upsets and 8 to PLL losses of synchronization. These numbers result in cross sections of 8×10⁻¹² cm² for the loss-of-lock events and 1.1×10⁻¹¹ cm² for single errors.<br />
The transmitter board current consumption was monitored during irradiation. The test was interrupted when the power consumption reached a value 2.5 times higher than the pre-irradiation value, at a fluence of 9.4×10¹¹ p/cm². This increase is most likely due to total dose effects. The accumulated fluence corresponds to a radiation dose of about 130 Krad.<br />
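The "~130 Krad" figure can be cross-checked against the fluence-to-dose relation implied elsewhere in these proceedings (2×10¹¹ p/cm² ≈ 28 Krad for 60 MeV protons in silicon): both pairs correspond to roughly 1.4×10⁻⁷ rad per p/cm². The coefficient below is derived from those quoted numbers, not from an independent stopping-power table:

```python
# rad per (p/cm^2) for 60 MeV protons in Si, inferred from the quoted pairs
RAD_PER_P_CM2 = 1.4e-7

def dose_krad(fluence_p_cm2):
    """Approximate TID in Krad for a given 60 MeV proton fluence."""
    return fluence_p_cm2 * RAD_PER_P_CM2 / 1e3

print(round(dose_krad(9.4e11)))  # ~132 Krad, consistent with "about 130 Krad"
print(round(dose_krad(2e11)))    # ~28 Krad, the Flashprom figure quoted earlier
```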
Table 2: Estimated error rates for four different CMS environments<br />
Environment | Serializer [4] Errors/(Chip·Hour) | GOL 800 Mbit/s Errors/(Chip·Hour) | GOL 1.6 Gbit/s Errors/(Chip·Hour)<br />
Pixel (R = 4-20 cm) | 1.4×10⁻² | 0 | 9.4×10⁻³<br />
Endcap ECAL (R = 50-130 cm) | 1.9×10⁻⁴ | 0 | 1.3×10⁻⁴<br />
Outer Tracker (R = 65-120 cm) | 8.4×10⁻⁵ | 0 | 5.8×10⁻⁵<br />
Exp. Cavern (R = 700-1200 cm) | 3.1×10⁻⁸ | 0 | 2.2×10⁻⁸<br />
IV. ASIC UPGRADE<br />
As discussed before, the jitter levels on the laser driver output exceed the values acceptable for error-free transmission. The causes of this problem were traced down, and a new version of the ASIC with modifications aimed at solving this problem was submitted for fabrication. Besides<br />
this, a few more modifications were introduced that were<br />
requested by the CMS collaboration. A list of new features<br />
and modifications follows:<br />
• I/O input cells were redesigned to be TTL and 5 V CMOS compatible;<br />
• An optional differential clock input was added; it is compatible with LVDS and PECL voltage swings;<br />
• An open fiber control safety logic circuit was introduced;<br />
• The ESD protection circuits were improved;<br />
• The input buffers of the I2C interface were replaced by Schmitt trigger cells;<br />
• The pinout was redefined.<br />
V. SUMMARY<br />
A configurable Gbit/s serializer (GOL) has been developed and manufactured to address the requirements of the HEP experiments. The device was experimentally verified to comply with the levels of radiation tolerance required by the LHC experiments. Both total dose and SEU irradiation tests were carried out. The SEU tests were made using 60 MeV protons and heavy ions. Using the SEU test results, an estimate was made of the error rates for such a device in different CMS environments. SEU results for a commercial serializer were also presented for 60 MeV proton irradiation tests. Compared to the commercial device, the GOL ASIC displays higher tolerance to both total dose irradiation and single event upsets.<br />
VI. REFERENCES<br />
[1] G. Anelli, M. Campbell, M. Delmastro, F. Faccio, S.<br />
Florian, A. Giraldo, E. Heijne, P. Jarron, K. Kloukinas, A.<br />
Marchioro, P. Moreira, and W. Snoeys, “Radiation tolerant<br />
VLSI circuits in standard deep submicron CMOS<br />
technologies for the LHC experiments: practical design<br />
aspects”, IEEE Trans. Nucl. Sci. Vol. 46 No.6, p.1690, 1999.<br />
[2] K. Kloukinas, F. Faccio, A. Marchioro and P. Moreira,<br />
“Development of a radiation tolerant 2.0V standard cell<br />
library using a commercial deep submicron CMOS<br />
technology for the LHC experiments” Proc. of the fourth<br />
workshop on electronics for LHC experiments, pp. 574-580,<br />
Rome, 1998<br />
[3] P. Moreira, J. Christiansen, A. Marchioro, E. van der Bij,<br />
K. Kloukinas, M. Campbell and G. Cervelli, “A 1.25Gbit/s<br />
Serializer for LHC Data and Trigger Optical Links”,<br />
Proceedings of the Fifth Workshop on Electronics for LHC<br />
Experiments, Snowmass, Colorado, USA, 20-24 September<br />
1999, pp. 194-198<br />
[4] P. Moreira 1, T. Toifl, A. Kluge, G. Cervelli, F. Faccio, A.<br />
Marchioro and J. Christiansen, “G-Link and Gigabit Ethernet<br />
Compliant Serializer for LHC Data Transmission,” 2000<br />
IEEE Nuclear Science Symposium Conference Record,<br />
October 15 - 20, 2000, Lyon, France, pp. 9.6 – 9.9<br />
[5] IEEE Std 802.3, 1998 Edition<br />
[6] C. Yen, R. Walker, P. Petruno, C. Stout, B. Lai and W.<br />
McFarland, “G-Link: “A chipset for Gigabit-Rate Data<br />
Communication,” Hewlett-Packard Journal, Oct. 92.<br />
[7] See for example the Agilent HDMP-1034 receiver chip<br />
data sheet: http://www.agilent.com<br />
[8] See for example the Texas Instruments TLK2501<br />
transceiver chip data sheet: http://www.texasinstruments.com<br />
[9] F. Vasey, C. Azevedo, G. Cervelli, K. Gill, R. Grabit and<br />
F. Jensen, “Optical links for the CMS Tracker,” Proc. of the
fifth workshop on electronics for LHC experiments, pp. 175-<br />
179, Snowmass, 1999<br />
[10] “The I2C-BUS specification”, Philips Semiconductors,<br />
Version 2.1, January 2000<br />
[11] C. M. Maunder and R. E. Tulloss, “The Test Access Port<br />
and Boundary-Scan Architecture,” IEEE Computer Society<br />
Press, 1990<br />
[12] M. Huhtinen and F. Faccio, “Computational method to<br />
estimate Single Event Upset rates in an accelerator<br />
environment”, Nuclear Instruments and Methods A, vol. 450,<br />
pp. 155-170, 2000
Development of an Optical Front-end Readout System for the<br />
LHCb RICH Detectors.<br />
N. Smale, M. Adinolfi, J. Bibby, G. Damerell, N. Harnew, S. Topp-Jorgensen; University of Oxford, UK<br />
V. Gibson, S. Katvars, S. Wotton; University of Cambridge, UK<br />
K. Wyllie; CERN, Switzerland<br />
Abstract<br />
The development of a front-end readout system for the LHCb<br />
Ring Imaging Cherenkov (RICH) detectors is in progress.<br />
The baseline choice for the RICH photon detector front-end<br />
electronics is a binary readout ASIC for an encapsulated<br />
silicon pixel detector. This paper describes a system to transmit the binary data, with address ID and error codes, from a harsh radiation environment while keeping synchronisation. The total data read out for the fixed Level-0 readout period of 900 ns is 32x36x440 non-zero-suppressed bits per Level-0 trigger, with a sustained Level-0 trigger rate of 1 MHz.<br />
Multimode fibres driven by VCSEL devices are used to<br />
transmit data to the off-detector Level-1 electronics located in<br />
a non-radiation environment. The data are stored in 512Kbit<br />
deep QDR buffers.<br />
I. INTRODUCTION<br />
The baseline photon detector for the LHCb RICH [1] detector<br />
is the CERN/DEP Hybrid Photon pixel detector (HPD) [1]<br />
whose active elements comprise a photo-cathode, electrostatic<br />
imaging system, encapsulated pixellated silicon detector and<br />
binary readout ASIC [2]. There are 440 HPDs to be read out in 900 ns; each HPD has 32x32 pixel channels. Bunch Count ID, error information and parity checking information are added to the data, bringing the total transmission to 32x36x440 bits per Level-0 trigger.<br />
[Figure 1 block diagram: 32x32-channel HPD data, with Bcnt etc. added, at Level-0 (on detector); 16x36-bit blocks are serialised (parallel/serial converters and VCSEL drivers, M2R-25-4-1-TL) onto >100 m multimode fibres; at Level-1 (off detector), HDMP-1034 receivers and amplifiers deliver 20x36-bit blocks to a Xilinx FPGA controller with error checking and ECS/TTCrx control, which stores the data (4-burst at 18x36) in a 9 Mbit QDR 4-burst memory (19-bit address, 16-bit-wide data).]<br />
Figure 1: The read-out chain for one HPD.<br />
The intention is to use a fibre-optic data transmission scheme to export the data from the detector electronics, located in a radiation environment, to the Level-1 electronics, located in a non-radiation environment (the control room). A reliable and cost-effective solution that allows a reasonable tolerance on the available bandwidth is to use two fibres per HPD, each operating at a data bandwidth of 640 Mbit/s.<br />
Data from the binary readout chip are interfaced to the fibre-optic<br />
link using a custom interface ASIC, the PInt chip. These<br />
data are serialised, multiplexed and driven into fibres using<br />
radiation hard Gigabit Optical Link (GOL) [3] and<br />
VCSEL [4] chips. A fibre receiver converts the incoming<br />
serial data stream to parallel data. The receivers, which are<br />
currently being investigated, will be commercial off the shelf<br />
(COTS) components. An FPGA controller checks the<br />
incoming parallel data and stores them to quad data rate<br />
(QDR) Level-1 buffers [5]. Control and synchronisation of<br />
this system is achieved by using the TTCrx [6] and LHCb<br />
Experiment Control System (ECS) [7] systems. Fig 1<br />
illustrates the read-out chain to the Level-1 buffer for one<br />
HPD. Component availability, radiation hardness, ease of<br />
replacement, accessibility, synchronicity and cost<br />
effectiveness all need to be considered and demonstrated<br />
during the prototyping stages.<br />
II. ENVIRONMENT<br />
The Level-0 electronics will be situated in the ~30 Gauss<br />
magnetic fringe field of the spectrometer magnet and will<br />
experience radiation doses of 3Krad/year [8]. A shell of<br />
ferromagnetic material provides shielding for the Level-0<br />
electronics, reducing the field to less than 10 Gauss [9]. All<br />
Level-0 electronics will be fabricated in a 0.25μm process<br />
using radiation tolerant layout techniques to protect against<br />
the radiation dose. The radiation tolerant layout employed<br />
uses guardrings and enclosed MOS transistors that prevent<br />
leakage currents in thick field oxides and reduce the<br />
probability of single event latch-up (SEL). However, this does<br />
not protect against a change of bit state or a transient caused<br />
by an ionising particle depositing energy in the gate region:<br />
a single event upset (SEU). To minimise the effects of SEU,<br />
redundancy and error correction have been added to the<br />
control logic. Protecting data against SEU effects is thought<br />
not to be necessary if the SEU rate can be considered small.<br />
The Level-1 electronics are situated in the counting room<br />
~100m away from the Level-0 area in a non-radiation and<br />
non-magnetic field region. The counting room can be<br />
considered an electronics-friendly environment, and<br />
therefore standard COTS components can be used. This has<br />
the advantages of availability, maintainability and cost<br />
effectiveness, with a broad range of products, and allows the<br />
use of FPGA devices. Error checking, error correction and<br />
self-test algorithms will be built into the Level-1 electronics<br />
to ensure that synchronisation is not lost and that corrupt data<br />
are not transmitted either to or from the Level-1 region.<br />
III. THE PIXEL INTERFACE (PInt) Chip<br />
The HPD binary pixel chip requires an interface chip (PInt)<br />
that generates chip biasing and calibration test levels and<br />
handles the ECS (Experiment Control System) and TTC<br />
(Timing and Trigger Control). The PInt chip adds error codes,<br />
addresses, parity and the bunch crossing ID to the data. The<br />
data are synchronised into two Gigabit Optical Links (GOL).<br />
The PInt is being developed using a Spartan II FPGA and will<br />
finally be ported to a 0.25 μm CMOS radiation-hard ASIC. The<br />
PInt chip controls a second level of multiplexing and<br />
serialisation of the Level-0 data before fibre transmission; see<br />
sections IV and V. Fig 2 shows the block diagram of the PInt<br />
and its supporting blocks.<br />
Figure 2: PInt block diagram: analogue supplies, DACs and filters;<br />
JTAG control; the PInt control state machine; a TTC interface for<br />
timing, test and control; GTL/CMOS level translation to and from the<br />
HPD binary chip; a link test pattern block; a 16x12 FIFO with BX<br />
counter and error flags; a 32-wide data buffer; and the GOL<br />
serialiser with its optical driver and control/synch logic.<br />
A. Pixel Chip Configuration<br />
The 44 internal 8 bit DACs of the Pixel Chip needed for<br />
setting the bias voltages and currents are configured using a<br />
JTAG interface. The PInt interfaces the ECS to the test<br />
access port (TAP) through a standard TAP state controller.<br />
The TAP controller is a 16-state FSM that responds to the<br />
control sequences supplied from the ECS.<br />
B. Data Handling<br />
The PInt translates all incoming signals to the Pixel chip I/O<br />
standard of GTL and outgoing signals to CMOS. The TTCrx<br />
bunch-crossing clock of 40.08MHz is used to synchronise the<br />
PInt with the Pixel, GOL chip and the LHCb system. Data<br />
coming from the HPD binary chip are in binary format, i.e. a<br />
'1' for a hit pixel. On a Level-0 trigger accept (average<br />
trigger rate of 1 MHz), 32x32 pixels are read out to the PInt<br />
chip. A 12-bit bunch crossing ID, which is reset to zero every<br />
3563 crossings, is taken from channel A of the TTCrx and<br />
added as a header to the data. The bunch crossing ID is chosen<br />
in preference to the event ID because of the problem of<br />
consecutive Level-0 triggers in LHCb [10]. Any error<br />
conditions that the PInt chip may have identified are then<br />
flagged in a 32-bit error word. The ECS is also informed of<br />
certain error conditions to enable a decision on what action to<br />
take. Finally, the data, parity and trailer bits are added. The<br />
parity check is a generated 32-bit word in which each bit holds<br />
the parity of one of the 32-bit wide data words.<br />
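The event assembly described above can be sketched as follows; this is a minimal illustration in which the function name, framing and trailer placeholder are assumptions, not the actual PInt logic.

```python
# Hypothetical sketch of the PInt event-building step described above:
# a bunch-crossing-ID header, a 32-bit error word, 32 rows of 32-bit
# pixel data, a parity word whose bit i is the parity of data row i,
# and a placeholder trailer. Names and framing are illustrative only.

ORBIT = 3563  # the bunch-crossing ID is reset to zero every 3563 crossings

def build_event(bx_id, error_flags, data_rows):
    assert len(data_rows) == 32              # one 32-bit word per pixel row
    header = bx_id % ORBIT                   # 12-bit wrapping BX counter
    parity = 0
    for i, row in enumerate(data_rows):
        bit = bin(row & 0xFFFFFFFF).count("1") & 1   # parity of one row
        parity |= bit << i
    trailer = 0                              # CRC trailer, added later
    return [header, error_flags] + list(data_rows) + [parity, trailer]

event = build_event(bx_id=3570, error_flags=0, data_rows=[0b111] + [0] * 31)
assert len(event) == 36                      # 36 words per HPD event
```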
Figure 3: Event-building scheme (32 bits wide): HEADER, ERROR FLAGS,<br />
DATA 0 ... DATA 31, PARITY, TRAILER.<br />
The trailer is expected to use a cyclic redundancy check<br />
(CRC), an error-detection scheme in which the block check<br />
character is the remainder after dividing all the serialised<br />
bits in a transmission by a predetermined binary number.<br />
Simulations will be performed to find the most efficient block<br />
size and generator value. Parity and CRC together will show<br />
the bit-error location and in which way the bit has changed,<br />
allowing for correction at a later stage. The PInt<br />
event-building scheme is shown in fig 3.<br />
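The remainder-of-division idea behind the CRC trailer can be illustrated with a small sketch; the 16-bit CCITT polynomial used here is only an example, since the paper leaves the block size and generator to be chosen by simulation.

```python
# Illustration of the CRC idea behind the trailer: the check value is the
# remainder of dividing the serialised bit stream by a fixed generator.
# The CRC-16/CCITT polynomial below is an example choice only.

def crc16(data_bytes, poly=0x1021, crc=0xFFFF):
    for byte in data_bytes:
        crc ^= byte << 8
        for _ in range(8):   # bit-serial polynomial division
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

trailer = crc16(b"\x12\x34\x56\x78")
# a single corrupted bit changes the remainder, flagging the error
assert crc16(b"\x12\x34\x56\x79") != trailer
```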
IV. PARALLEL TO SERIAL Data Transmission<br />
The GOL [3] chip is a multi-protocol high-speed<br />
transmitter. It is an ASIC fabricated in the 0.25 μm process<br />
and is able to withstand high doses of radiation. The chip is to<br />
be run in G-Link mode at 800 Mbit/s and is required to<br />
transmit 20 bits in 25 ns, 16 of which are data and the<br />
remaining 4 overhead bits for encoding. The CIMT<br />
(Conditional Invert Master Transition) encoding scheme is<br />
employed. Before being serialised, the 20-bit encoded words<br />
are time-division multiplexed into two 10-bit words. The two<br />
10-bit words are serialised and transmitted via a VCSEL<br />
(Vertical Cavity Surface Emitting Laser) and multimode fibre.<br />
In this mode two GOLs per Pixel chip will be required. The<br />
threshold of the laser driver can be adjusted via the GOL chip<br />
during the lifetime of the experiment.<br />
V. FIBRE OPTIC DRIVERS<br />
VCSELs emit light perpendicularly to their p-n junctions.<br />
High optical output power, good focusing and large spectral<br />
width allow easy coupling to multimode fibres. Wavelengths<br />
are generally in the 650, 850 and 1300 nm ranges [11] and the<br />
output power is typically 5 mW into a multimode fibre. VCSEL<br />
arrays can be easily incorporated into single ICs, which allows<br />
for much better multiple-fibre packaging. VCSELs have<br />
been proven to be very robust in terms of radiation and<br />
magnetic fields. The proposal is to use two VCSELs per pixel<br />
chip and drive 800 Mbit/s of data through 100 metres of<br />
multimode fibre to the LHCb counting room, located in a low<br />
radiation region.<br />
VI. THE FIBRE OPTIC RECEIVER AND SERIAL TO<br />
PARALLEL CONVERTER<br />
The intention is to use COTS items in the counting room<br />
region. The data are to be received by a PIN-diode receiver and<br />
amplifier array package and de-serialised using a Hewlett<br />
Packard HDMP-1034 to recover the 16-bit data word. Fig 4<br />
shows the general scheme.<br />
Evaluation of such a scheme, in simplex transmission<br />
mode, is currently under way using the ODIN S-Link package<br />
[12]. In this mode, studies of synchronisation checking, the<br />
identification of lost data, etc. are being carried out.<br />
Figure 4: Fibre-optic receiver: RX pre- and post-amplifier stages feed<br />
a Hewlett Packard HDMP-1034 (later a Texas Instruments TLK2501IRCP)<br />
with clock/data recovery, clock generation, demultiplexing, word<br />
alignment, invert/decode and synchronisation logic, latching the<br />
received word onto Rx(0-15) with RxFlag, RxError and RxReady outputs.<br />
The S-Link is a CERN specification for an easy-to-use FIFO-like<br />
data link that can be used to connect front-end to read-out<br />
at any stage in a data flow environment [13].<br />
If the GOL functions reliably with 32-bit data words at<br />
40 MHz, the Texas Instruments TLK2501IRCP will be<br />
considered as the receiver package. This would reduce the<br />
number of required fibres by a factor of 2.<br />
VII. LEVEL-1 BUFFER<br />
Data arriving from each of the serial/parallel converters are in<br />
a 16-bit wide, 36-word format, received at a rate of<br />
640 Mbit/s. The data contain the header and error codes as<br />
illustrated in fig 3. The data, event ID and error codes are<br />
proposed to be time multiplexed and stored in the Level-1<br />
buffer. The bunch ID is checked against the expected ID,<br />
generated at the Level-1 region using another TTCrx, and the<br />
resulting error condition is carried with the event.<br />
Figure 5: Level-1 buffer scheme: four fibre-optic receivers (1:16<br />
demux at 40 MHz) feed the controller FPGA, which interfaces the TTCrx<br />
and ECS, generates the QDR addresses and data, and forwards accepted<br />
events to the DAQ processor.<br />
The Level-1 buffer is implemented with a commercially<br />
available QDR SRAM (Quad Data Rate SRAM) and is<br />
controlled by the FPGA QDR controller. Fig 5 shows the<br />
general scheme with the supporting blocks.<br />
A. QDR SRAM<br />
The QDR SRAM is a 9-Mbit memory bank and can store<br />
up to 3.5K events from two HPDs pending the Level-1 trigger<br />
decision. The LHCb trigger architecture presently requires a<br />
buffer of at least 2K events. One QDR will store the data from<br />
four fibres, i.e. two HPD Pixel chips. Data can be read in and<br />
read out on the same clock edge at a rate of 333 Mbit/s. The<br />
QDR architecture is shown in Fig 6 and the key points of this<br />
device are:<br />
a) 9-Mbit Quad Data Rate Static RAM. (migration to 64Mb).<br />
b) Manufacturers: Cypress, IDT, Micron and NEC.<br />
c) Separate independent read and write data ports support<br />
concurrent transactions.<br />
d) 4-word burst for reducing address bus frequency.<br />
e) 167MHz clock frequency (333MHz data rate).<br />
Migration to 250MHz (500MHz data rate).<br />
Figure 6: QDR SRAM architecture (the memory can store up to 3.5K<br />
events per fibre).<br />
The QDR is a four-burst device and requires only one write<br />
address to be generated to store four 18-bit words. This<br />
means that addresses are generated at a rate of 40 MHz while<br />
the data are transferred at 160 MHz, which allows a data word<br />
from each of the four receivers to be stored in one bunch<br />
crossing. Reading the data is a similar process, but the read<br />
address has to be presented on an alternate K clock edge to<br />
the write address, where the K clock is the QDR clock<br />
(80 MHz).<br />
More detail on the timing is given in section B below. As the<br />
data from the receivers are in 16-bit words and the QDR<br />
accepts 18-bit words, the remaining 2x36 bits of the memory<br />
can be used for error flagging and data validation in the<br />
following processing stages. The transmission check will<br />
consist of a coded 1x36-bit word that is stamped onto the side<br />
of an event.<br />
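The burst-of-4 timing and the quoted buffer capacity are mutually consistent, as a quick arithmetic sketch shows (the 9-Mbit figure is taken as 512K x 18 bits, per the QDR organisation used in the text).

```python
# Consistency check of the burst-of-4 numbers above: one write address per
# 25 ns bunch crossing (40 MHz) moves four 18-bit words at a 160 MHz word
# rate, i.e. one word on each edge of the 80 MHz K clock; a 9-Mbit bank
# shared by four fibres holds the ~3.5K events per fibre quoted earlier.

addr_rate_mhz, burst_len = 40, 4
word_rate_mhz = addr_rate_mhz * burst_len       # words per microsecond
k_clock_mhz = word_rate_mhz / 2                 # one word per K edge (DDR)
assert word_rate_mhz == 160 and k_clock_mhz == 80

bits_per_event = 36 * 18                        # 36 words of 18 bits
events_per_fibre = (9 * 2**20 // 4) // bits_per_event
assert events_per_fibre > 3500                  # matches the ~3.5K figure
```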
B. QDR Controller<br />
A Xilinx Spartan-II XC2S200 FPGA is used as the controller<br />
chip. This low-cost device offers 284 I/Os with access speeds<br />
of 200 MHz, internal clock speeds of 333 MHz, 1176<br />
configurable logic blocks and 5292 logic cells. Internal<br />
Delay-Locked Loops (DLLs) are used for clock multiplication.<br />
The Spartan-II is programmable directly by JTAG or from a<br />
PROM.<br />
Figure 7: Write state machine.<br />
The Level-1 buffer logic is built around two state machines: a<br />
write machine shown in fig 7; and a read machine, which is<br />
very similar in operation but receives different control signals.<br />
The “One Hot” state machines ensure the correct timing of the<br />
QDR read/write signals by forcing a read/write on a rising<br />
edge of the QDR K clock (80MHz) and then passing data<br />
from/to the QDR on every edge of the K clock for four edges.<br />
The state machine operates at 160MHz.<br />
The write machine is activated on the 40 MHz clock when the<br />
RXREADY flag indicates data are ready to be stored. Every time<br />
the write machine is activated, a wrap-around counter is<br />
incremented by 1. This counter is used for the QDR write<br />
address. Two counters generate the read address. The “offset<br />
counter” counts in multiples of 36 on every Level-1 trigger to<br />
ensure that the read pointer starts at the beginning of an event.<br />
On receipt of a Level-1 trigger accept the read machine is<br />
activated and cycles 36 times consecutively, incrementing the<br />
“sub counter” each time. The “sub counter” is reset after<br />
reading out one complete event. Logic has been incorporated<br />
into the design to ensure the read and write pointer cannot<br />
overtake each other.<br />
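The address bookkeeping described above can be sketched as follows; addresses here are burst addresses (each covering four 18-bit words), and all names are illustrative rather than taken from the actual controller HDL.

```python
# Sketch of the Level-1 buffer addressing described above: a wrap-around
# write counter incremented once per stored burst, and a read address
# formed from an "offset counter" (multiples of 36, one per event) plus
# a "sub counter" cycling 0..35 within the event. Illustrative only.

DEPTH = 512 * 1024        # 19-bit QDR address space
EVENT_WORDS = 36

write_addr = 0
def next_write_address():
    global write_addr
    addr = write_addr
    write_addr = (write_addr + 1) % DEPTH                # wrap-around
    return addr

def read_addresses(event_index):
    offset = (event_index * EVENT_WORDS) % DEPTH         # offset counter
    return [(offset + sub) % DEPTH for sub in range(EVENT_WORDS)]  # sub counter

assert next_write_address() == 0 and next_write_address() == 1
assert read_addresses(1)[0] == 36 and len(read_addresses(0)) == 36
```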
The Level-0 trigger has a 1 MHz sustainable rate, whereas the<br />
Level-1 trigger rate is variable at ~40 kHz. For this reason the<br />
write machine always has priority over the read machine.<br />
To allow for this, Level-1 triggers and decisions are buffered.<br />
Fig 8 shows a simulation of data flow to and from the QDR<br />
under continuous write and read requests. After an initial delay<br />
of one K clock for both read and write, the four 18-bit data<br />
words are read in and out of the device in one bunch crossing.<br />
The cursor is at the beginning of a write sequence. On the<br />
rising edge of K with "NOT wpsbar", the write address is<br />
taken from the input port "sa". On the following rising edge<br />
of K, and for three consecutive edges thereafter, data are<br />
stored to the QDR. Reading data from the device uses the same<br />
principle as the write procedure, but with "rpsbar" as the<br />
control signal. The "wpsbar" and "rpsbar" signals must not<br />
appear on the same rising edge of the K clock. The data in<br />
this simulation are arbitrary.<br />
C. TTCrx and ECS<br />
Figure 8: QDR waveform.<br />
Both the QDR and Spartan-II have boundary scanning<br />
facilities that will be used in production tests. The Spartan is<br />
also programmable via a JTAG interface. The ECS will be<br />
used to deliver the configuration signals.<br />
The controller chip requires a bunch-crossing clock, reset,<br />
bunch ID and Level-0 and Level-1 trigger information from<br />
the TTCrx. For the Level-0 information the TTCrx is used in<br />
the same way as for the Level-0 board (see Section III) but<br />
instead of the local bunch ID being added as a header it is<br />
compared against the header on the incoming Level-0 event<br />
for transmission checks. The local Level-0 bunch ID will be<br />
stored in 16-deep derandomiser buffers to track the on-detector<br />
Level-0 process. The result of the bunch ID header<br />
check is stored as part of the 1x36-bit error word (see Section<br />
VII-A).<br />
The short broadcast port of the TTCrx will be used for trigger<br />
information, reset and event ID. It carries a limited number of<br />
bits (~8) but supports the necessary bandwidth (~2 MHz). Six<br />
of the eight bits are user-configurable and are being defined by<br />
the LHCb collaboration according to the experiment<br />
requirements. Two bits have been allocated for the event ID so<br />
that they can be compared with the two LSBs of a locally<br />
generated event ID, to ensure that the Level-1 buffers have not<br />
lost any fragments or synchronisation.<br />
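The two-LSB comparison above amounts to a very small check, sketched below; the function name is illustrative.

```python
# Sketch of the synchronisation check described above: only two event-ID
# bits travel in the short broadcast, so the two LSBs of a locally
# generated event counter must match them; a mismatch signals a lost
# fragment or loss of synchronisation.

def check_sync(broadcast_bits, local_event_id):
    return broadcast_bits == (local_event_id & 0b11)     # compare two LSBs

assert check_sync(0b10, 6)        # 6 = 0b110: LSBs match
assert not check_sync(0b01, 6)    # mismatch: a fragment was lost
```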
VIII. TEST BED<br />
Test boards have been built to check timing, chip<br />
programming, TTCrx compatibility and QDR functionality.<br />
The test bed has been made in a modular fashion, utilising a<br />
Spartan-II demonstration board [14] and two PCBs: one<br />
for interfacing to the TTCrx test board and the other<br />
for the QDR.<br />
The demonstration board is equipped with a Spartan-II<br />
XC2S100 in a PQ208 package. Its speed performance does<br />
not differ from that of the proposed final chip, the XC2S200,<br />
but the packaging limits the number of I/Os that can be made<br />
available to the user, in this case to 196. For this reason only<br />
the essential ports of the TTCrx have been utilised and the<br />
number of data-in ports has been limited. To emulate the<br />
event data coming from the receivers, a pattern generator is<br />
used for one of the 16-bit wide ports; the other three ports<br />
have fixed binary numbers set internally on the chip.<br />
Figure 9: Spartan/QDR/TTCrx test board<br />
A mezzanine board has been produced that mounts on top of<br />
the FPGA I/O connector. This board carries a QDR along<br />
with all the power supplies and termination<br />
required for running the QDR chip. The QDR controller<br />
FPGA has its I/O configured as HSTL to suit the I/O voltage<br />
level of the QDR device.<br />
The TTCrx used at this design stage was delivered pre-mounted<br />
on a test board [15]. The test board contains the<br />
TTCrx IC, an integrated detector and preamplifier, a serial<br />
configuration PROM (XC1736D) and a fibre-optic post-amplifier.<br />
Unfortunately the test board is not compatible with<br />
the Spartan-II demonstration board user area, which routes 30<br />
of the FPGA's I/Os to a breadboard area. A fan-out PCB has<br />
therefore been produced to adapt the TTCrx test board to the<br />
Spartan-II demonstration board.<br />
Fig 9 shows the assembled modules. Testing these boards is<br />
now in progress.<br />
IX. FUTURE PLANS<br />
A complete prototype HPD to Level-1 readout system for one<br />
HPD will be constructed over the next year. The prototype<br />
will, as closely as possible, match the specifications of the<br />
final design. This will allow full testing of a complete unit and<br />
show compatibility of the selected components.<br />
X. REFERENCES<br />
[1] LHCb Technical Proposal, CERN/LHCC 98-4, LHCC/P4, 20 February 1998.<br />
[2] Proc. LEB 1999, CERN 99-09, CERN/LHCC/99-33.<br />
[3] GOL Reference Manual, preliminary version, March 2001,<br />
CERN-EP/MIC, Geneva, Switzerland.<br />
[4] http://www.lasermate.com/transceivers.htm<br />
[5] http://www.cypress.com/press/releases/200216.html<br />
[6] http://micdigital.web.cern.ch/micdigital/ttcrx.htm<br />
[7] http://lhcb-elec.web.cern.ch/lhcb-elec/html/ecs_interface.htm<br />
[8] LHCb TDR 3, CERN/LHCC/2000-0037, 7 September 2000.<br />
[9] CERN/LHCC 98-4, LHCC/P4, 20 February 1998.<br />
[10] Jorgen Christiansen, "TTC Use", Microelectronics Group, CERN,<br />
http://lhcb-elec.web.cern.ch/lhcb-elec/html/ttc_use.htm<br />
[11] Hewlett Packard fibre-optic technical training manual.<br />
[12] H.C. van der Bij et al., "S-LINK, a Data Link Interface<br />
Specification for the LHC Era",<br />
http://his.web.cern.ch/HIS/link/introduce/introduce.htm, Sept. 1997.<br />
[13] Eric Brandin, "Development of a Prototype Read-Out Link for the<br />
Atlas Experiment", Master Thesis, June 2000.<br />
[14] http://www.insightelectronics.com/solutions/kits/xilinx/spartan-ii.html<br />
[15] TTCrx Reference Manual, Version 3.0, CERN-EP/MIC, Geneva,<br />
Switzerland, October 1999.<br />
A Radiation Tolerant Laser Driver Array for<br />
Optical Transmission in the LHC Experiments<br />
Giovanni Cervelli, Alessandro Marchioro, Paulo Moreira, and Francois Vasey<br />
Abstract<br />
A 3-way laser driver ASIC has been implemented in<br />
deep-submicron CMOS technology, according to the CMS<br />
Tracker performance and radiation-tolerance requirements.<br />
While optimised for analogue operation, the full-custom IC is<br />
also compatible with LVDS digital signalling. It will be<br />
deployed for analogue and digital transmission in the 50,000<br />
fibre links of the Tracker. A combination of linearisation<br />
methods achieves good analogue performance (8-bit<br />
equivalent dynamic range with 250 MHz bandwidth), while<br />
maintaining a wide input common-mode range (±350 mV) and<br />
a power dissipation of 10 mW/channel. The linearly amplified<br />
signals are superimposed on a DC current programmable over<br />
a wide range (0-55 mA); the latter capability allows tracking of<br />
changes in laser threshold due to ageing or radiation damage.<br />
The driver gain and laser bias current are programmable via an<br />
SEU-robust serial interface. The results of the ASIC<br />
qualification are discussed in the paper.<br />
I. INTRODUCTION<br />
Data connection to the CMS Tracker front-ends is<br />
provided by a large number of optical fibre links: 50,000<br />
analogue for readout and 3,000 digital for trigger, timing and<br />
control signal distribution [1]. The front-end components<br />
must withstand the harsh radiation environment of the<br />
Tracker over the planned detector lifetime of 10 years (total<br />
ionising dose and hadron fluence exceeding 10 Mrad and<br />
10¹⁴ neutron-equivalent/cm² respectively) [2]. The baseline<br />
technology for ASIC developments in the Tracker is a<br />
commercial 0.25 µm, 3-metal CMOS technology (5 nm<br />
oxide thickness) [3, 4, 5]. The intrinsic radiation tolerance of<br />
this technology is raised to the required levels by using<br />
appropriately extended design rules and self-correcting logic.<br />
The use of this technology for analogue applications was<br />
carefully evaluated before employing it for the design of the<br />
Front End chips.<br />
A Linear Laser Driver (LLD) array for the CMS Tracker<br />
links had already been developed and implemented in a<br />
non-radiation-tolerant BiCMOS technology [6]. The design was<br />
then translated into the 0.25 µm CMOS technology at an earlier<br />
stage of the Tracker design [7]. A new LLD has now been<br />
implemented in the same technology, appropriately matching<br />
the Tracker modularity and functionality requirements for<br />
both analogue and digital links.<br />
Section II explains the device functionality and major<br />
specifications. Section III describes the electrical circuit and<br />
CERN, EP Division, 1211 Geneva 23, Switzerland<br />
Giovanni.Cervelli@cern.ch<br />
layout. Section IV reports on the measurement results and<br />
device qualification.<br />
II. FUNCTIONALITY<br />
Figure 1 shows the block diagram of the new LLD chip.<br />
The laser driver converts a differential input voltage into a<br />
single ended output current added to a pre-set DC current. The<br />
DC current allows correct biasing of the laser diode above<br />
threshold in the linear region of its characteristic. The<br />
absolute value of the bias current can be varied over a wide<br />
range (0 to 55 mA), in order to maintain the correct<br />
functionality of laser diodes with very high threshold currents<br />
as a consequence of radiation damage. The laser diode-biasing<br />
scheme (current sink) is compatible with the use of commonanode<br />
laser diode arrays.<br />
Figure 1: Block diagram: an I2C interface (SCL, SDA, 5 address bits)<br />
holds the register, 7-bit bias and 2-bit gain settings; three<br />
differential inputs (V0, V1, V2, from the Mux, each terminated in<br />
100 Ω) drive the laser outputs LD0-LD2, each with its own<br />
programmable DC sink current (-Idc0, -Idc1, -Idc2).<br />
Input signals are transmitted to the laser driver over some<br />
0-30 cm of 100 Ω matched transmission lines. The driver is<br />
optimised for analogue operation, exhibiting good<br />
linearity and low noise. However, the input voltage levels are<br />
compatible with the digital LVDS standard (±400 mV into<br />
100 Ω). The gain can be chosen from 4 pre-set values. Gain<br />
control provides an extra degree of freedom for optimally<br />
equalising the CMS Tracker readout chain. A system-level<br />
simulation of the fibre link performance achievable with a<br />
four-gain equalisation is presented in [8].<br />
The IC modularity is 3 channels per chip. About 20<br />
thousand 3-way laser drivers will be used for the CMS<br />
Tracker readout and control links. The total power dissipation<br />
of individual chips must remain constant regardless of the<br />
modulation signal to minimise cross-talk and noise injection<br />
in the common power supplies.<br />
The channels can be individually addressed via a serial<br />
digital interface (Philips Semiconductors I2C standard), which<br />
allows individual power down, gain control, and pre-bias<br />
control. Robustness to Single Event Upsets is achieved by<br />
tripling the digital logic in the interface and by using a<br />
majority-voting decision scheme. The power-up I2C register<br />
configuration is read from a set of hard-wired inputs. It is thus<br />
possible to ensure that the optical links are correctly biased<br />
at power-up.<br />
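The triplication-with-voting scheme can be illustrated with a small sketch; this is generic 2-of-3 majority logic, not the actual interface gate-level design.

```python
# Sketch of the SEU-robust register scheme described above: each
# configuration value is stored three times and read back through a
# 2-of-3 majority vote, so a single upset in one copy cannot corrupt
# the configuration. Illustrative logic only.

def majority(a, b, c):
    return (a & b) | (b & c) | (a & c)       # bitwise 2-of-3 vote

reg = [0b10110101] * 3                       # triplicated 8-bit register
reg[1] ^= 0b00010000                         # SEU flips one bit in one copy
assert majority(*reg) == 0b10110101          # the vote restores the value
```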
III. CIRCUIT AND LAYOUT<br />
The Linear Laser Driver consists of a Linear Driver and a<br />
laser-diode bias generator (Figure 2).<br />
Figure 2: Circuit diagram: the degenerated PMOS differential pair<br />
(with active-bulk cross-connection and degeneration resistor r), the<br />
gain-controlled push-pull output stages mirroring the branch currents<br />
I1 and I2 (IOUT = I1-I2), and the dummy output stage that keeps the<br />
supply current constant.<br />
The Linear Driver consists of a degenerated PMOS<br />
differential pair and a push-pull output stage. The degenerated<br />
differential pair, in comparison to alternative solutions, is<br />
conceptually simple and offers good dynamic and noise<br />
performance with limited power dissipation. The PMOS<br />
version is bulk-effect-free, thus allowing a larger input<br />
common-mode range. The required linearity is obtained by<br />
combining two source-degeneration methods: a parallel<br />
source-degeneration resistor, and a source-bulk cross-connection<br />
between the transistors of the differential pair. The<br />
use of both methods allows keeping the degeneration resistor<br />
to a value compatible with the required input common-mode<br />
range. The push-pull output stage mirrors the currents in the<br />
differential-pair branches and subtracts them at the output<br />
node. Three switched output stages can be activated in<br />
parallel, to provide four different selectable gains. In order to<br />
keep the power supply current constant, a dummy output<br />
stage dumps the complement of the modulation current<br />
directly into the power supplies.<br />
The laser-diode bias generator circuit consists of an array<br />
of current sources and sinks. The enabling logic allows them<br />
to be switched on and off as appropriate in order to generate a<br />
current linearly variable between 0 and 55 mA. A regulated<br />
cascode scheme [9] has been used to keep the output<br />
impedance high and the compliance voltage low (
IV. MEASUREMENT RESULTS<br />
A. Static performance<br />
Figure 4 shows the pre-bias current for 5 chips from different<br />
process corners (different σs). The measured LSB is 0.45 mA and<br />
the highest current that can be generated on-chip is 57 mA.<br />
The transfer characteristics (differential and common-mode)<br />
and output characteristic of the LLD have been measured with<br />
a Semiconductor Parameter Analyser. Figure 5 shows the<br />
differential transfer characteristics of the LLD, for four<br />
different gains (and different σs). The measured<br />
(transconductance) gain values are 5.3 mS, 7.7 mS, 10.6 mS,<br />
and 13.2 mS (5% above their nominal design values).<br />
Figure 4: Pre-bias current versus I2C register code (0-128), for five<br />
process corners (σ = -3.0 to +3.0).<br />
Figure 5: Differential transfer characteristic: modulation current<br />
(±10 mA) versus differential input voltage (±1 V), for the four gain<br />
settings (5, 7.5, 10 and 12.5 mS).<br />
Figure 6 shows the linearity error, for different input<br />
common mode voltages (Vcm = 0 and Vcm = ±625 mV). The<br />
linearity error is calculated as the absolute difference between<br />
the real output current and its (least-square) linear fit, and is<br />
expressed as a percentage of the specified operating range<br />
(integral linearity deviation). In the absence of a common-mode<br />
input, the error is less than 0.5% over the whole linear operating<br />
range (±300 mV). The common mode has an impact on<br />
linearity; however, the performance degradation is negligible for<br />
an input common mode within ±350 mV. Figure 7 shows<br />
the integral linearity deviation as a function of the input<br />
common mode.<br />
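The integral-linearity figure of merit used above can be sketched numerically; the transfer curve below is synthetic (the measured 10.6 mS gain plus an assumed small cubic term), so only the method, not the data, reflects the paper.

```python
# Sketch of the integral linearity deviation: the output current is
# fitted with a least-squares line and the deviation is the absolute
# residual expressed as a percentage of the operating range.
# Synthetic transfer curve; illustrative only.

def linear_fit(xs, ys):
    # ordinary least squares for y = a*x + b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

vin = [(-300 + 10 * i) * 1e-3 for i in range(61)]     # +/-300 mV range
iout = [10.6e-3 * v + 2e-5 * v**3 for v in vin]       # gain + cubic term

a, b = linear_fit(vin, iout)
full_range = max(iout) - min(iout)
ild = [100 * abs(y - (a * x + b)) / full_range for x, y in zip(vin, iout)]
assert max(ild) < 0.5          # within the 0.5% quoted in the text
```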
Figure 6: Linearity error versus differential input voltage (±0.25 V),<br />
for input common-mode voltages of -625 mV, 0 V and +625 mV.<br />
Figure 7: Integral linearity deviation versus common-mode input<br />
voltage (±0.75 V), for five process corners (σ = -3.0 to +3.0).<br />
The Common Mode Rejection Ratio at DC is inferred<br />
from the common-mode transfer characteristic. The DC-<br />
CMRR is 40 dB in the worst case (maximum laser-bias) and<br />
becomes as high as 70 dB at low biases. The output<br />
impedance (inferred from the output characteristic) of the<br />
LLD varies between 3 kΩ and 10 kΩ, depending on the pre-bias,<br />
and is in all cases much higher than the typical dynamic<br />
impedance of the laser diode (
B. Dynamic performance<br />
The dynamic performance of the LLD has been evaluated<br />
with laser emitters, which are representative of the ones to be<br />
used in the final application. A wide-bandwidth optical head<br />
is used for receiving the optical signal and converting it back<br />
to electrical for compatibility with standard instrumentation.<br />
The pulse response of different chips and for different gains is<br />
shown in Figure 8. The response exhibits little overshoot and<br />
ringing. The measured rise and fall times are below 2.5 ns.<br />
The measured settling times (to within 1% of the final value)<br />
are of 10-12 ns, which leaves (for a 40 MHz sampled system)<br />
13-15 ns for correctly sampling the output signal.<br />
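The sampling margin quoted above follows directly from the 25 ns period of a 40 MHz sampled system; a small sketch of the arithmetic:<br />

```python
def sampling_margin_ns(settling_ns, sample_rate_hz=40e6):
    """Time left for sampling the output within one clock period,
    once the driver has settled to within 1% of its final value."""
    period_ns = 1e9 / sample_rate_hz   # 25 ns at 40 MHz
    return period_ns - settling_ns

# Settling times of 10-12 ns leave 13-15 ns of the 25 ns period.
print(sampling_margin_ns(12.0), sampling_margin_ns(10.0))
```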
[Figure 8 plot: output voltage [V] versus time [ns], for gains of 5 mS, 7.5 mS, 10 mS and 12.5 mS.]<br />
Figure 8: Pulse response.<br />
The frequency responses (differential and common-mode)<br />
are shown in Figure 9. The analogue bandwidth has been<br />
measured with a network analyser and was found to be<br />
250 MHz. The equivalent input noise into this bandwidth is<br />
gain and laser bias dependent. The measured noise is in all<br />
cases below 1 mVrms. The CMRR is shown in Figure 10 as a<br />
function of frequency (15 mA bias current). Cross-talk<br />
between channels has also been measured and is below<br />
-60 dB.<br />
[Figure 9 plot: transfer function [dB] versus frequency [MHz], for gains of 5 mS, 7.5 mS, 10 mS and 12.5 mS.]<br />
Figure 9: Frequency response.<br />
[Figure 10 plot: CMRR [dB] versus frequency [MHz], for gains of 5 mS, 7.5 mS, 10 mS and 12.5 mS.]<br />
Figure 10: Common Mode Rejection Ratio.<br />
C. Radiation hardness<br />
The circuit has been tested for total ionising dose effects<br />
using an X-ray source, to investigate possible performance<br />
degradation related to ionising effects (charge trapping in<br />
oxide interface states). The experiment has been carried out<br />
according to ESA/SCC recommendation for IC qualification<br />
with respect to total dose effects [10]. The chip has been<br />
irradiated in three steps to 1 Mrad, 10 Mrad and 20 Mrad<br />
(SiO2), at a constant dose rate of 21.2 krad/min. After<br />
irradiation the chip was annealed for 24 hours at room<br />
temperature, followed by 168 hours at 100°C (accelerated<br />
life). The full set of static measurements was carried out after<br />
each step in order to assess any change in performance. The<br />
chip was under nominal bias during irradiation with the three<br />
channels switched on at maximum pre-bias.<br />
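For orientation, the time spent at each irradiation step follows from the accumulated dose and the constant dose rate; a small sketch of the arithmetic:<br />

```python
def irradiation_minutes(dose_krad, rate_krad_per_min=21.2):
    """Time needed to accumulate a given dose at the constant
    dose rate used in the X-ray test."""
    return dose_krad / rate_krad_per_min

# Steps of 1, 10 and 20 Mrad(SiO2) at 21.2 krad/min:
for mrad in (1, 10, 20):
    print(mrad, "Mrad:", round(irradiation_minutes(mrad * 1000), 1), "min")
```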
The results of radiation and accelerated life testing show<br />
that the LLD will operate within specifications throughout the<br />
experiment lifetime (10 years). The overall radiation effects<br />
are negligible or acceptable. The laser-bias current shows an<br />
increase of 5%, 10% and 15% for three different channels in<br />
the same chip (see Figure 11).<br />
[Figure 11 plot: pre-bias current [mA] versus I2C register setting (0-128), measured at 0 rad, 1 Mrad, 10 Mrad and 20 Mrad, after 24 h at 25°C and after 168 h at 100°C.]<br />
Figure 11: Pre-bias during irradiation.<br />
This is attributed to the NMOS type VT-referred current<br />
reference and is compatible with the previously measured<br />
threshold variations in NMOS devices. There is no significant<br />
change in the LLD differential or common-mode transfer<br />
characteristics nor in the output characteristics.<br />
The LLD robustness to SEU needs to be tested and an<br />
experiment is being planned before the end of the year.<br />
V. CONCLUSIONS<br />
A Linear Laser Driver array has been developed and<br />
implemented in a commercially available 0.25 µm CMOS<br />
technology. The device has been designed to comply with the<br />
stringent CMS requirements for analogue optical transmission<br />
in the Tracker readout. It is however also compatible with<br />
digital optical transmission modes in the Tracker slow control<br />
system. Sample devices have been tested and shown to be<br />
fully functional. The switched gains can be used to equalise<br />
the significant insertion loss spread expected from the 50 000<br />
analogue optical links. The pre-bias current is programmable<br />
over a wide range, with 7-bit resolution, allowing tracking of<br />
optical source degradation during detector lifetime. The LLD<br />
array has a modularity of three channels. However, since the<br />
channels can be individually disabled, any lower modularity<br />
can also be chosen without a power penalty. The<br />
extensive set of measurements showed that the device<br />
matches or exceeds the required analogue performance.<br />
Integral linearity deviation is better than 0.5% over an input<br />
common mode range of ±350 mV. Input referred noise is less<br />
than 1 mV in an analogue bandwidth of 250 MHz. Power<br />
dissipation at maximum pre-bias is below 110 mW per<br />
channel. The radiation testing of one device showed that the<br />
analogue performance would also be maintained after a total<br />
ionising dose comparable with the one expected during the<br />
experiment lifetime. The parameter spread and yield for the<br />
tested devices are very good. Twelve devices have been tested<br />
and shown to be fully functional. The new chips will be<br />
packaged in a 5 mm x 5 mm LPCC case for ease of testing<br />
and installation in the Tracker readout and control hybrids.<br />
VI. ACKNOWLEDGEMENTS<br />
The precious contribution to this work from Gulrukh<br />
Kattakh of Peshawar University, Pakistan, and from Robert<br />
Grabit and Cristophe Sigaud of CERN, Geneva, is<br />
acknowledged.<br />
VII. REFERENCES<br />
[1] F. Vasey, V. Arbet-Engels, J. Batten et al., Development<br />
of radiation-hard optical links for the CMS tracker at<br />
CERN, IEEE Transactions on Nuclear Science (NSS<br />
1997 Proceedings), Vol. 45, No. 3, 1998, pp. 331-337.<br />
[2] CMS Collaboration, The Tracker Project, Technical<br />
Design Report, CERN/LHCC/98-6, 1998.<br />
[3] A. Rivetti, G. Anelli, F. Anghinolfi et al., Analog Design<br />
in Deep Sub-micron CMOS Processes for LHC,<br />
Proceedings of the Fifth workshop on electronics for<br />
LHC experiments, CERN/LHCC/99-33, Snowmass,<br />
1999, pp. 157-161.<br />
[4] P. Jarron, G. Anelli, T. Calin et al., Deep sub-micron<br />
CMOS technologies for the LHC experiments, Nuclear<br />
Physics B (Proceedings Supplements), Vol. 78, 1999, pp.<br />
625-634.<br />
[5] G. Anelli et al., “Radiation tolerant VLSI circuits in<br />
standard deep-submicron CMOS technologies for the<br />
LHC experiments: practical design aspects”, IEEE<br />
Transactions on Nuclear Science, Vol. 46, No. 6, 1999.<br />
[6] A. Marchioro, P. Moreira, T. Toifl and T. Vaaraniemi,<br />
An integrated laser driver array for analogue data<br />
transmission in the LHC experiments, Proceedings of<br />
the Third workshop on electronics for LHC experiments,<br />
CERN/LHCC/97-60, London, 1997, pp. 282-286.<br />
[7] G. Cervelli, A. Marchioro, P. Moreira, F. Vasey, A<br />
linear laser driver array for optical transmission in LHC<br />
experiments, 2000 IEEE Nuclear Science Symposium<br />
Conference Record, Lyon, October 2000.<br />
[8] T.Bauer, F.Vasey, A Model for the CMS Tracker<br />
Analog Optical Link, CMS NOTE 2000/056, September<br />
2000.<br />
[9] E. Säckinger, W. Guggenbühl, A high-swing high-impedance<br />
MOS cascode circuit, IEEE Journal of Solid-State<br />
Circuits, Vol. 25, No. 1, February 1990.<br />
[10] Total Dose Steady-State Irradiation Test Method,<br />
ESA/SCC (European Space Agency / Space<br />
Components Co-ordination Group), Basic Specifications<br />
No. 22900, Draft Issue 5, July 1993.
Quality Assurance Programme for the Environmental Testing of CMS Tracker<br />
Optical Links<br />
Abstract<br />
The QA programme is reviewed for the environmental<br />
compliance tests of commercial off-the-shelf (COTS)<br />
components for the CMS Tracker Optical link system. These<br />
environmental tests will take place in the pre-production and<br />
final production phases of the project and will measure<br />
radiation resistance, component lifetime, and sensitivity to<br />
magnetic fields. The evolution of the programme from small-scale<br />
prototype tests to the final pre-production manufacturing<br />
tests is outlined and the main environmental effects expected<br />
for optical links operating within the Tracker are summarised.<br />
A special feature of the environmental QA programme is the<br />
plan for Advance Validation Tests (AVT's) developed in close<br />
collaboration with the various industrial partners. AVT<br />
procedures involve validation of a relatively small set of basic<br />
samples in advance of the full production of the<br />
corresponding batch of devices. Only those lots that have<br />
been confirmed as sufficiently rad-tolerant will be purchased<br />
and used in the final production.<br />
I. INTRODUCTION<br />
Final production of the CMS Tracker optical links will begin<br />
in 2001 and continue until 2004. Approximately 40000 unidirectional<br />
analogue optical links, and ~1000 bi-directional<br />
digital optical links will be produced. Quality Assurance (QA)<br />
procedures have been developed in order to guarantee that the<br />
final links meet the specified performance and are produced<br />
on schedule. A detailed QA manual has been written[1] and in<br />
this paper we focus on the part of the QA programme<br />
concerning environmental testing of components.<br />
The CMS Tracker environment is characterised by high<br />
levels of radiation, up to ~2x10^14/cm^2 fluence and 100 kGy<br />
ionizing dose for the optical link components over the first 10<br />
years of operation[2]. The particle fluence at the innermost<br />
modules of the Tracker is dominated by pions and photons,<br />
with energies ~100MeV, and by ~1MeV neutrons at the<br />
outermost modules. In addition to resisting the radiation<br />
environment the components must operate in a 4T magnetic<br />
field and at temperatures close to -10°C.<br />
The basic elements of the optical link system are illustrated in<br />
Fig. 1[1]. Both analogue and digital optical links for the CMS<br />
Tracker share the same basic components, namely 1310nm<br />
InGaAsP/InP multi-quantum-well edge-emitting lasers and<br />
InGaAs p-i-n photodiodes coupled to single-mode optical<br />
fibre. The optical fibre is in the form of buffered single-way<br />
fibre, ruggedized 12-way ribbon fibre cable, and dense,<br />
ruggedized 96-way multi-ribbon cable. MU-based single-way,<br />
and MT-based multi-way, optical connectors are used at the<br />
various optical patch panels.<br />
K. Gill, R. Grabit, J. Troska, F. Vasey and A. Zanet<br />
CERN, 1211 Geneva 23, Switzerland<br />
karl.gill@cern.ch<br />
[Figure 1 diagram: detector hybrids and CCUs at the front-end drive A-Opto-Hybrids and D-Opto-Hybrids; 12-way and 96-way fibres connect through patch panels to Rx modules at the back-end, the FED for the analogue readout (40000 links @ 40 MS/s) and the FEC for the digital control (1000 links @ 40 MHz).]<br />
Figure 1: Optical link systems. Components at the front-end are<br />
exposed to radiation, a 4T magnetic field, and will operate at -10°C.<br />
All of the elements listed above are either commercial off-the-shelf<br />
(COTS) components or devices based on COTS. In an<br />
extensive series of sample tests, which was carried out in the<br />
development phase of the project, the sensitivity of the<br />
various link components to the expected Tracker environment<br />
has been thoroughly investigated[3]. These data have allowed<br />
identification and selection of suitable components and have<br />
also allowed tailoring of the link specifications to compensate<br />
for unavoidable effects, e.g. radiation damage, now that the<br />
effects have been well quantified.<br />
Despite having restricted the final choice of candidate<br />
components to those that have passed the sample tests, the use<br />
of COTS means that environmental QA testing must continue<br />
into the production phase of the project. This is simply<br />
because the radiation resistance of the COTS components can<br />
not be guaranteed by the vendors as the devices are not<br />
manufacturer-qualified for the CMS Tracker environment.<br />
It is clear that we must avoid, if possible, any situation where<br />
a delivered production batch of fully assembled components<br />
is found to be non-compliant with the Tracker environment.<br />
Diagnosing and remedying such a problem would incur a<br />
substantial delay in the already tight production schedule.<br />
We propose a programme of QA procedures, outlined in the<br />
following section, that will guarantee that the final optical<br />
links will meet the specified functional performance and<br />
environmental resistance, whilst also avoiding the possibility<br />
of rejecting fully assembled devices due to non-compliance<br />
with the Tracker environment. A particular requirement of the
programme is that we must validate laser diodes, fibre and<br />
photodiodes in special 'Advance Validation Tests' (AVT's).<br />
The AVT procedures are described in detail in this paper, and<br />
for details of other QA issues and requirements the reader is<br />
referred to the full QA manual[1].<br />
II. QA PROCEDURES<br />
From the development of the first prototypes to large-scale<br />
production, a wide range of QA procedures has been, and will<br />
be, implemented as shown in Fig. 2[1].<br />
Following extensive testing of early prototypes[3], the first<br />
formal step in the QA procedure was the technical<br />
qualification of suppliers in the framework of CERN market<br />
surveys. Market surveys for semiconductor lasers (MS2690)<br />
and optical connectors (MS2691) were issued in 1999.<br />
Market surveys for optical fibre, ribbon and cable (MS2811)<br />
as well as for receiver modules (MS2810) were issued in<br />
2000. In all of these surveys, evaluation samples were<br />
requested from the companies interested in tendering, and<br />
subjected to a sample validation procedure described in the<br />
next section. Based on the results of this sample-validation<br />
procedure, manufacturers were qualified and those that were<br />
successful were invited to tender for the production of final<br />
components or assemblies.<br />
[Figure 2 flow chart: Prototyping → Prototype Validation (publications) → Market Survey → Sample-Validation → Invitation to Tender → Pre-production (purge and test by manufacturer; qualification; advance rad-hardness validation) → Production (purge and test by manufacturer; lot validation) → Assembly → full test.]<br />
Figure 2: QA procedures during various project phases.<br />
A. Sample validation<br />
Evaluation samples sent to CERN in the framework of market<br />
surveys were validated according to the procedure sketched in<br />
Fig. 3. For the semiconductor lasers (MS2690), the irradiation<br />
consisted of both gamma and neutron tests, while for the<br />
connectors (MS2691) and fibres/cables (MS2811) it consisted<br />
only of gamma tests. These choices of radiation sources<br />
reflected the types of effects that were observed in the earlier<br />
tests[3]. No CERN-specific environmental tests (B-field or<br />
irradiation) were performed on the Rx modules (MS2810),<br />
which will be operated in the counting room, away from the<br />
radiation and magnetic field. Results of Sample-Validation<br />
tests made within the Market Surveys were published in<br />
confidential reports, with a copy sent to the manufacturer.<br />
Thorough environmental tests have also been made within the<br />
CERN EP/MIC group on samples of the electronic chips,<br />
namely the laser driver[4] and digital receiver chip[5], that will<br />
be used in the optical links and located within the Tracker.<br />
[Figure 3 flow chart: B-field test → functionality → irradiation → functionality → Success? Yes: supplier qualified; No: supplier disqualified.]<br />
Figure 3: Sample validation procedures.<br />
B. Pre-production qualification<br />
Qualification of the pre-production will involve rigorous<br />
testing of fully-assembled devices sampled from the pre-production<br />
delivery in order to qualify the devices and<br />
manufacturing processes in preparation for full production.<br />
This includes evaluating the compliance of the components<br />
and assemblies to their specifications whilst, or following,<br />
exposure to conditions representative of the Tracker<br />
environment, as illustrated in Fig. 4. Results of pre-production<br />
qualification tests will be archived in the EDMS database,<br />
with a copy sent to the manufacturer.<br />
[Figure 4 flow chart: sampling → functionality → environmental exposure → annealing → functionality → Success? (No: failure analysis, interaction with manufacturer, repeat) → advance production clearance → accelerated ageing (in some cases only) → Success? (No: repeat) → final production clearance and manufacturer qualification.]<br />
Figure 4: Flow chart of the pre-production qualification procedure.<br />
C. Advance Validation Test (AVT)<br />
The primary aim of the AVT procedure is to avoid the<br />
problems caused by the possible rejection of whole batches of<br />
assembled devices because of non-compliance with the<br />
Tracker environment. This would clearly benefit both the<br />
component suppliers and the CMS groups responsible for the<br />
optical links.<br />
The AVT procedures will be applied to the lasers,<br />
photodiodes and optical fibre. These elements of the optical<br />
links are recognised as being the most sensitive to the Tracker<br />
environment, particularly the strong radiation field.
The advance validation procedure is outlined in Fig. 5. Where<br />
AVT overlaps with pre-production qualification of the same<br />
components, it should be possible to streamline significantly<br />
the pre-production qualification procedure. As with the other<br />
QA tests, results will be archived in the EDMS database, and<br />
a copy sent to the manufacturer.<br />
[Figure 5 flow chart: sampling → environmental testing → annealing → Success? (No: failure analysis, interaction with manufacturer, repeat or procure new lot) → advance clearance → accelerated ageing (in some cases only) → Success? (No: repeat) → final clearance.]<br />
Figure 5: Flow chart of the advance validation procedure.<br />
One or more AVT's per component type will take place<br />
during the pre-production phase, extending into the final<br />
production period where necessary. Final device assembly<br />
will proceed only after samples from laser and photodiode<br />
wafers and naked fibre lots have passed the AVT.<br />
A close working relationship is clearly necessary with the<br />
various suppliers of these components, to ensure that the AVT<br />
steps are achievable. The precise AVT procedures, actions,<br />
and schedule will be agreed in the final contracts.<br />
D. Lot validation<br />
Once pre-production components and assemblies have been<br />
fully qualified, production can be launched. Lot validation<br />
involves sample testing of every delivered batch, as in Fig. 6,<br />
with the outcome of either accepting or rejecting the tested<br />
lot.<br />
Results of lot validation tests will be archived in the EDMS<br />
database, with a summary sent to the manufacturer. The lot<br />
validation step does not include environmental testing and the<br />
description here is only given to complete the brief outline of<br />
the overall QA procedures. All the environmental QA tests<br />
will be covered in the AVT and pre-production qualification<br />
steps.<br />
[Figure 6 flow chart: sampling → functionality (reduced test) → Success? Yes: lot acceptance; No: failure analysis, interaction with manufacturer, lot rejection and procurement of a new lot.]<br />
Figure 6: Flow chart of the lot validation procedure.<br />
III. ENVIRONMENTAL QA TEST PLANS<br />
The procedures described in this section focus mainly on<br />
radiation damage testing, since this is the most important<br />
aspect of the environment affecting the optical links. The<br />
Tracker will also operate at a temperature close to -10 °C and<br />
in a 4T magnetic field. The atmosphere will be constantly<br />
flushed, dry nitrogen. Concerning the thermal aspect of<br />
the Tracker environment, all the link components are already<br />
specified for operation above -20°C. In addition, components<br />
with magnetic packaging have been excluded and recent tests<br />
have confirmed that the link performance is not affected by a<br />
magnetic field of 4T[6].<br />
A. Radiation damage effects<br />
The radiation damage effects[3] observed in tests carried<br />
out on all link components during the development phase of<br />
the project are summarised in Table 1.<br />
Table 1: Summary of the effects of radiation damage in the optical<br />
link components to be used in the CMS Tracker.<br />
Component | Radiation damage effects<br />
Laser | Threshold current increase and efficiency decrease. Significant annealing of both effects. No effect on wearout rate.<br />
Photodiode | Leakage current increase and responsivity loss. Some annealing of leakage current but no significant annealing of responsivity loss. No effect on wearout rate. Sensitive to SEU.<br />
Optical fibre | Increased attenuation. Significant annealing of damage. No mechanical degradation.<br />
96-way optical cable | Attenuation in the optical fibre. No mechanical degradation.<br />
Optical connector | No degradation.<br />
Laser driver chip | No degradation.<br />
Digital receiver chip | No degradation.<br />
B. Lab simulation of the radiation environment<br />
In order to validate components for radiation hardness within<br />
the production schedule we are forced to carry out accelerated<br />
tests. For radiation effects testing this means using only a<br />
limited number of radiation sources, and irradiating the<br />
samples with fluxes or dose rates in excess of those expected<br />
in the Tracker.<br />
For each type of radiation damage mechanism, namely<br />
ionization, displacement, or single event effect (SEE), we<br />
therefore assume that the radiation damage effects from<br />
different incident particles (or from particles of different<br />
energy) can be compared at some basic level. Under this<br />
assumption all validation for a given component, in terms of<br />
testing each damage mechanism, can then be made with just<br />
one type of radiation source per mechanism. We therefore<br />
propose to use photon sources (60Co or X-ray) for ionization<br />
damage tests, neutron sources for displacement damage tests<br />
and proton sources for SEE tests. Suitable radiation sources<br />
are identified in the QA manual[1].<br />
The accelerated testing extends to measurements of annealing<br />
and wearout degradation. These effects are usually thermally<br />
activated and can be accelerated by increasing the<br />
temperature. In all of the validation tests the effects expected<br />
over the lifetime of the components within the Tracker are<br />
then determined by extrapolation of the results from the<br />
accelerated tests to the conditions expected at a given location<br />
in the Tracker.<br />
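One common way to extrapolate such thermally activated effects from stress to use conditions is an Arrhenius acceleration factor; the sketch below assumes this model and an illustrative activation energy, neither of which is specified in the paper.<br />

```python
import math

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant [eV/K]

def acceleration_factor(t_stress_c, t_use_c, ea_ev):
    """Arrhenius acceleration factor between a stress temperature and
    the use temperature, for a thermally activated degradation
    mechanism with activation energy ea_ev."""
    t_stress = t_stress_c + 273.15   # convert to kelvin
    t_use = t_use_c + 273.15
    return math.exp(ea_ev / K_BOLTZMANN_EV * (1.0 / t_use - 1.0 / t_stress))

# Example: ageing at 80°C relative to operation at -10°C, assuming
# Ea = 0.4 eV (the activation energy is an assumption, not a value
# taken from the paper).
af = acceleration_factor(80.0, -10.0, 0.4)
print(af)   # each stress hour covers 'af' hours of use
```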
C. Device-specific tests<br />
A summary of the device-specific environmental QA tests is<br />
given in Table 2. The most unusual aspect of the testing<br />
programme, which is the advance validation testing, is<br />
detailed in the following section with procedures for laser,<br />
photodiode and optical fibre AVT.<br />
The reader is referred to the QA manual[1] for full details of<br />
the other environmental (and functionality) tests that are<br />
foreseen.<br />
Table 2: Summary of environmental tests to be performed on optical link components.<br />
Tests in italics involve other groups and are still to be finalised.<br />
Optical link element | Link system | Pre-production qualification | Advance validation<br />
Laser diode chip | Analogue and Digital | - | total dose, fluence and annealing; accelerated ageing<br />
Laser transmitter | Analogue and Digital | magnetic field | -<br />
Laser driver | Analogue and Digital | total dose and annealing; accelerated ageing; SEE | -<br />
Optohybrid substrate | Analogue and Digital | to be decided | to be decided<br />
Analogue optohybrid | Analogue | total dose, fluence and annealing; SEE; magnetic field; accelerated ageing | -<br />
PIN photodiode receiver | Digital | magnetic field | total dose, fluence and annealing; accelerated ageing<br />
Digital receiver amplifier | Digital | total dose and annealing; accelerated ageing; SEE | -<br />
Digital optohybrid | Digital | total dose, fluence and annealing; SEE; magnetic field; accelerated ageing | -<br />
Optical fibre | Analogue and Digital | - | total dose and annealing<br />
Buffered fibre | Analogue and Digital | total dose | -<br />
Optical fibre ribbon | Analogue and Digital | total dose | -<br />
Ruggedized ribbon | Analogue and Digital | total dose | -<br />
Dense multi-ribbon cable | Analogue and Digital | total dose | -<br />
Optical connectors | Analogue and Digital | total dose; magnetic field | -<br />
(i) Laser AVT.<br />
At least 20 lasers will be irradiated from each candidate<br />
wafer, then aged along with 10 unirradiated lasers in advance<br />
of the final production of packaged devices from the given<br />
wafer. All the samples should be packaged in the final form,<br />
to facilitate mounting and testing, and will have already been<br />
burned-in prior to delivery.<br />
The lasers will be irradiated under bias, with gamma rays and<br />
then neutrons, up to the worst-case equivalent doses and<br />
fluences. Both gamma and neutron irradiations will be made<br />
at room temperature with in-situ monitoring of the laser L-I<br />
and V-I characteristics at periodic intervals before, during and<br />
after irradiation. The rates of degradation and annealing of the<br />
threshold current and output efficiency can therefore be<br />
determined and the results of the damage and annealing tests<br />
can then be extrapolated to the conditions of damage and<br />
annealing expected in the Tracker.<br />
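The threshold current and output efficiency tracked during these tests can be extracted from each monitored L-I curve by a straight-line fit above threshold; the sketch below runs on synthetic data, and the fitting method and values are illustrative rather than the test procedure's exact algorithm.<br />

```python
import numpy as np

def laser_li_parameters(i_ma, p_mw):
    """Extract threshold current and slope efficiency from an L-I
    curve by fitting the above-threshold region with a straight line:
    the slope is the efficiency, the x-intercept the threshold."""
    # Keep only points clearly above threshold (here: upper half of
    # the measured power range).
    above = p_mw > 0.5 * p_mw.max()
    slope, intercept = np.polyfit(i_ma[above], p_mw[above], 1)
    i_threshold = -intercept / slope
    return i_threshold, slope

# Synthetic L-I curve: 5 mA threshold, 0.1 mW/mA slope efficiency.
i = np.linspace(0, 40, 81)                 # drive current [mA]
p = np.clip(0.1 * (i - 5.0), 0.0, None)    # optical power [mW]
ith, eff = laser_li_parameters(i, p)
print(ith, eff)
```

Repeating such a fit at each monitoring interval yields the degradation and annealing rates of both parameters.<br />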
In the accelerated ageing step the devices will be operated at<br />
80°C for at least 1000 hours to measure any potential wearout<br />
degradation. The inclusion of both irradiated and unirradiated<br />
samples allows a control of any possible degradation<br />
mechanisms that are due to radiation damage. Measurements<br />
of the laser L-I and V-I characteristics will be made at<br />
periodic intervals during the ageing test. In between<br />
measurement cycles, the lasers will be biased at 60mA. This<br />
represents the maximum current available with the final laser<br />
driver.<br />
Any failure will be analysed post-mortem, in order to<br />
establish the cause of failure. Only failures that are intrinsic to<br />
the device-under-test will be counted in the statistics of the<br />
test. For example, any failure of wire-bonds from the laser to<br />
the test-board will not be counted.<br />
Proposed acceptance criteria for pre-production qualification<br />
and advance validation are such that 90% of the lasers should<br />
remain within all the operating specifications for the system,<br />
under the worst-cases of radiation damage exposure, and any<br />
additional wearout degradation, when extrapolating to the full<br />
10-year lifetime of the links.<br />
(ii) Photodiode AVT.<br />
20 photodiodes will be irradiated from a given wafer, and then<br />
aged along with 10 unirradiated photodiodes. The devices<br />
should be packaged in the final form and have been burned-in<br />
before delivery.<br />
The photodiodes will be irradiated under bias, with gamma<br />
rays and then neutrons, up to the worst-case equivalent doses<br />
and fluences. Both the gamma and neutron irradiations will be<br />
made at room temperature with in-situ monitoring of the<br />
photodiode leakage and response characteristics made at<br />
periodic intervals before, during and after irradiation. The<br />
rates of degradation and annealing can therefore be<br />
determined.<br />
The devices will be aged at 80 °C for at least 1000 hours to<br />
determine the rate of long-term wearout degradation. In-situ<br />
monitoring will be used to make measurements of the<br />
photodiode leakage and response characteristics at periodic<br />
intervals during the ageing test. The photodiodes will be<br />
biased constantly at -2.5V.<br />
A similar type of failure analysis, acceptance criteria and<br />
rejection action will apply for the photodiodes as for the laser<br />
diodes.<br />
(iii) Optical fibre AVT.<br />
Bare optical fibre samples from each preform will be tested<br />
by advance validation to ensure that the fibre is suitable for use in the<br />
CMS Tracker, before it is integrated into the final production.<br />
The same type of bare fibre is used in all parts of the links: the<br />
buffered fibre for pigtails and ribbonized fibre for the<br />
ruggedized 12-way cables and the 96-way cables.<br />
Two 100m long samples of optical fibre per preform will be<br />
irradiated with gamma rays and neutrons up to the maximum<br />
dose and fluence expected inside the Tracker. In-situ<br />
measurements of the radiation-induced attenuation in the fibre<br />
and the subsequent annealing will be performed in these tests.<br />
The proposed acceptance criterion for the preforms tested in<br />
the advance validation is that the loss will be no more than<br />
50dB/km. This would be equivalent to a loss of 0.5dB over<br />
~10m of fibre per link channel situated inside the Tracker.<br />
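The equivalence between the 50 dB/km acceptance criterion and the 0.5 dB per-channel loss follows from the ~10 m of in-Tracker fibre per link; a small sketch of the arithmetic:<br />

```python
def link_loss_db(attenuation_db_per_km, length_m):
    """Radiation-induced loss over a given length of fibre."""
    return attenuation_db_per_km * length_m / 1000.0

# 50 dB/km over the ~10 m of fibre per channel inside the Tracker:
print(link_loss_db(50.0, 10.0))   # 0.5 dB, the acceptance limit above
```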
IV. CONCLUSION<br />
All of the components in the CMS Tracker optical links are<br />
either commercial off-the-shelf (COTS) components, or<br />
devices based on COTS. It is therefore necessary to extend<br />
environmental validation tests from the development phase all<br />
the way into the production phase of the project.<br />
This paper has reviewed the test procedures related<br />
specifically to the compliance of the various components<br />
with the CMS Tracker environment, particularly the intense<br />
radiation field.<br />
Advance validation test (AVT) procedures have been<br />
introduced as a special measure within the QA programme.<br />
These procedures should allow the problems associated with<br />
the possible rejection of fully assembled batches of<br />
non-compliant COTS components to be avoided.<br />
V. ACKNOWLEDGEMENTS<br />
The success of the QA programme depends upon a good<br />
working relationship between the optical link development<br />
team and the various suppliers. We gratefully acknowledge all<br />
the suppliers and their valuable contributions to this work.<br />
VI. REFERENCES<br />
[1] K. Gill, J. Troska and F. Vasey, CMS Tracker Optical<br />
Links Quality Assurance Manual, 2001.<br />
[2] CMS Collaboration, The Tracker Project, Technical Design<br />
Report, CERN/LHCC/98-6, 1998.<br />
[3] A full list of publications of radiation damage tests carried<br />
out within the framework of the CMS Tracker Optical Links<br />
Project (1996-2001), is available at:<br />
http://cms-tk-opto.web.cern.ch/cms-tk-opto/rad_pubs.html<br />
[4] G. Cervelli et al., "A linear laser driver array for optical<br />
transmission in the LHC experiments", Proceedings of<br />
IEEE/NSS conference, Lyon (2000).<br />
[5] F. Faccio et al., "Status of the 80Mbit/s Receiver for the<br />
CMS digital optical link", 6th LEB Workshop Proceedings,<br />
299 (2000).<br />
[6] T. Bauer et al., CMS Note in preparation.
Design and Performance of a Circuit for the Analogue<br />
Optical Transmission in the CMS Inner Tracker<br />
M.T. Brunetti, G.M. Bilei, F. Ceccotti, B. Checcucci, V. Postolache, D. Ricci, A. Santocchia<br />
Abstract<br />
A new circuit for the conversion of analogue electrical<br />
signals into the corresponding optical ones has been built<br />
and tested by the CMS group of Perugia. This analogue<br />
opto-hybrid circuit will be assembled in the readout<br />
electronic chain of the CMS tracker inner barrel. The<br />
analogue opto-hybrid is a FR4 (vetronite) substrate<br />
equipped with one programmable laser driver chip and 2 or<br />
3 laser diodes, all being radiation tolerant. The description<br />
of the circuit, production flow and qualification tests with<br />
results are reported and discussed.<br />
I. CIRCUIT LAYOUT<br />
The analogue opto-hybrid circuit designed and built by<br />
the Perugia CMS group [1] will be employed in the<br />
analogue optical link of the TIB (Tracker Inner Barrel) and<br />
TID (Tracker Inner Disks) parts of the CMS tracker [2]. A<br />
similar circuit has been prototyped for the TOB (Tracker<br />
Outer Barrel) and TEC (Tracker End Caps) at CERN. The<br />
schematic of the analogue optical link is reported in Figure<br />
1.<br />
INFN Sez. di Perugia, Via A. Pascoli, 06123 Perugia, Italy<br />
MariaTeresa.Brunetti@pg.infn.it<br />
Figure 1: The tracker analogue optical link, from the front-end<br />
(APV amplifiers, pipelines, 128:1 and 2:1 multiplexers) through the<br />
opto-hybrid, patch panels and ~65 m of fibre to the back-end<br />
receiver module, ADC and FED readout.<br />
The analogue opto-hybrid converts the differential input<br />
voltage, coming from silicon microstrip detectors and<br />
sampled by the APV chips, into analogue optical signals<br />
transmitted via optical fibres to the back-end electronics.<br />
Each fibre will carry the multiplexed signals of 256 silicon<br />
detector microstrips. A total of 4000 analogue opto-hybrids<br />
for the TIB/TID system is required.<br />
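The 256 strips per fibre follow from the multiplexing scheme shown in Figure 1; a minimal bookkeeping sketch, assuming the 128-channel APV and the 2:1 multiplexer described in the text:

```python
# Channel-count bookkeeping implied by the text: each APV chip samples
# 128 silicon microstrips, and a 2:1 multiplexer interleaves two APVs
# onto a single fibre.
STRIPS_PER_APV = 128
APVS_PER_FIBRE = 2                # via the 2:1 MUX of Figure 1

strips_per_fibre = STRIPS_PER_APV * APVS_PER_FIBRE
print(strips_per_fibre)  # -> 256 strips per fibre, as quoted in the text
```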
The PCB measures 30 × 22 mm² and is 0.5 mm thick. Figure 2<br />
shows the upper side of the opto-hybrid circuit, housing the<br />
connector, the un-packaged laser driver chip [3,4] and the<br />
passive components.<br />
Figure 2: Connector side view of the analogue opto-hybrid.<br />
COTS (Commercial Off-The-Shelf) components are used<br />
extensively in the opto-hybrid circuit as a cost-saving<br />
strategy. The laser diodes and their coupled optical fibres are<br />
located on the side opposite to that shown in Figure 2. The laser<br />
driver chip is programmable via the I²C interface and biases<br />
the laser diodes in their linear operating region. Input<br />
signals from the front-end amplifier directly modulate the<br />
bias current and are converted into optical signals<br />
(wavelength 1310 nm) by commercially available InGaAsP<br />
edge-emitting laser diodes. These are then transmitted to<br />
InGaAs photodiodes located at the receiver side for<br />
optical-to-electrical conversion. The number of laser diodes (and<br />
fibres) is 2 or 3, depending on the opto-hybrid position in the<br />
detector. Both the laser driver and the laser diodes are glued to the<br />
substrate with a thermally conductive resin (Epo-Tek<br />
T7110), and the electrical contacts are made with<br />
ultrasonic wedge-to-wedge aluminium wire bonds.<br />
The PCB will, however, have to be redesigned to house the next<br />
version of the laser driver, which will be packaged.
II. PRODUCTION FLOW<br />
In the production phase (2002-2004), the analogue<br />
opto-hybrid circuit will be assembled by following the<br />
scheme reported in Figure 3.<br />
Figure 3: Production flow of the analogue opto-hybrid.<br />
The opto-hybrid substrate is produced in industry, while<br />
the active devices (laser driver and laser diodes) are<br />
procured by CERN. All these subcomponents are 100%<br />
tested. The assembly of the circuit from the COTS and active<br />
devices is done by the manufacturer, which will also test<br />
100% of the production. Perugia will receive the opto-hybrids<br />
and test a sample (see the following section). The circuits<br />
will then be sent to the CMS tracker sub-assembly centres to<br />
be mounted on the front-end modules. At present, about 30<br />
opto-hybrid circuits have been produced by the manufacturer<br />
and delivered to the CMS group of Perugia.<br />
III. CHARACTERISATION<br />
A series of tests to characterise the opto-hybrid circuit<br />
have been defined in the specifications for the CMS tracker<br />
optical link [5]. Electrical tests, thermal cycles and<br />
irradiation tests are some of those required for the circuit<br />
qualification. Table 1 reports the complete list of the tests<br />
to be done both by the manufacturer (i.e. the manufacturing<br />
industry) and the Perugia CMS group for TIB/TID. The<br />
main effort in the test activity is foreseen before production<br />
during product qualification when extensive measurements<br />
are required. In the subsequent production phase, the<br />
industry will be provided with an ATE (Automatic Test<br />
Equipment) by CERN for the lot validation tests. Test<br />
centres are charged with the lot acceptance tests on a<br />
sample of the production.<br />
Table 1: Tests of the analogue opto-hybrid. Marks are listed in<br />
column order: manufacturer product qualification; manufacturer lot<br />
validation (before delivery); CMS-institute product qualification;<br />
CMS-institute lot acceptance.<br />
Number of channels: ♦ ♦ ♦ ♦<br />
Gain: ♦ ♦ ♦<br />
Peak signal-to-noise ratio: ♦ ♦ ♦<br />
Integral linearity deviation: ♦ ♦ ♦<br />
Bandwidth¹: ♦ ♦ ♦<br />
Settling time to ±1%: ♦<br />
Skew: ♦<br />
Jitter: ♦<br />
Crosstalk: ♦<br />
Max. operating input voltage range: ♦ ♦ ♦<br />
Input voltage range: ♦ ♦ ♦<br />
Input impedance: ♦<br />
Quiescent operating point: ♦ ♦ ♦<br />
Quiescent operating point after reset: ♦ ♦ ♦<br />
Hardware reset: ♦ ♦ ♦<br />
Power supply: ♦<br />
Power supply rejection ratio: ♦<br />
Power dissipation<br />
Wavelength: ♦<br />
Output power range: ♦ ♦ ♦<br />
Pre-bias output resolution: ♦ ♦ ♦<br />
Magnetic field: ♦<br />
Hadronic fluence: ♦<br />
Gamma radiation dose: ♦<br />
Temperature: ♦<br />
Operating humidity: ♦<br />
¹ via rise time and fall time measurements<br />
A. Electrical Tests<br />
Some of the electrical tests reported in Table 1 have<br />
been done on 5 out of 30 analogue opto-hybrids already<br />
delivered to Perugia. Before starting the tests, each laser<br />
diode has to be biased at its working point, i.e. the current<br />
value corresponding to the linear response region (typically<br />
a few mA). This is achieved through the programmable<br />
laser driver, which also allows the signal gain to be set<br />
between 5 mS and 12.5 mS. These features are used to<br />
compensate for changes in the laser characteristics caused by<br />
the tracker radiation environment. Since the bias current<br />
depends on temperature, the electrical specifications are<br />
referred to 25 °C. The electrical tests reported in this paper<br />
are the link gain measurements, the noise, the deviation<br />
from linearity and the bandwidth. The set-up is shown in<br />
Figure 4, except for the bandwidth measurement, where a<br />
network analyzer is employed. The input signal is<br />
generated by an AWG (Arbitrary Waveform Generator)<br />
and converted into a differential signal for input to the<br />
analogue opto-hybrid. The test card, connected to the<br />
opto-hybrid by a kapton cable, provides the power<br />
supply and the input (data and clock) lines to the circuit.<br />
The opto-hybrid output signal, carried by the optical fibres,<br />
is converted back into an analogue electrical signal by a<br />
prototype of the optical link receiver [6]. The output is<br />
digitized by a 16-bit ADC and the data are stored on a<br />
computer running LabVIEW.
Figure 4: Set-up for electrical tests of the analogue optohybrid.<br />
1)Link Gain<br />
The gain of the laser driver chip is configurable via the<br />
I²C interface. The nominal values are 5, 7.5, 10 and 12.5<br />
mS. The link gain, G, is estimated by measuring the link<br />
transfer characteristic. A 100-step input voltage ramp<br />
(staircase) between –500 mV and 500 mV is generated by the<br />
AWG and the output is acquired by the 16-bit resolution<br />
ADC. Figure 5 reports the link transfer characteristic of a<br />
3-channel opto-hybrid at the lowest laser driver gain value.<br />
Figure 5: Output voltage as a function of the differential<br />
input voltage for a typical 3-channel opto-hybrid.<br />
The link gain G is then calculated from a linear regression<br />
fit over the range –300 mV to +300 mV. Values between 0.9<br />
and 1.2 have been found for the tested opto-hybrids. These<br />
values are higher than the specification, but the receiver<br />
used has a higher gain than its final version will have.<br />
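The fit described above can be sketched as follows; this is an illustrative reconstruction with synthetic data, not the actual LabVIEW analysis:

```python
# Link-gain extraction as described in the text: a least-squares straight-line
# fit to the measured transfer characteristic, restricted to the linear region
# between -300 mV and +300 mV. The ramp data here are synthetic.

def link_gain(points, lo=-0.3, hi=0.3):
    """Least-squares slope of (v_in, v_out) pairs inside [lo, hi] volts."""
    pts = [(x, y) for x, y in points if lo <= x <= hi]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Toy transfer curve: gain 1.05 in the linear region, saturating beyond it.
ramp = [(-0.5 + i * 0.01, 1.05 * max(-0.35, min(0.35, -0.5 + i * 0.01)))
        for i in range(101)]
print(round(link_gain(ramp), 3))  # -> 1.05
```

Restricting the fit window excludes the saturated ends of the staircase, so the slope reflects only the linear operating region.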
2)RMS-Noise<br />
The RMS-noise dY(X) has been measured with the<br />
oscilloscope for each level of the voltage ramp generated<br />
by the AWG. For ease of comparison the measured RMS<br />
noise is referred to the link input to obtain the Equivalent<br />
Input Noise (EIN):<br />
EIN(X) = dY(X) / G<br />
Specifications on the optical link require 2.4 mV as the<br />
maximum value for EIN in the interval between –300 and<br />
300 mV. Figure 6 reports the EIN for the 5 tested optohybrids.<br />
Figure 6: Equivalent Input Noise for the measured opto-hybrid<br />
channels.<br />
Almost all of the tested opto-hybrids have EIN values within<br />
the specification; one 2-channel opto-hybrid shows a higher<br />
noise level.<br />
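The EIN calculation and its specification check can be sketched as follows (illustrative names and toy values; the real data come from the oscilloscope measurements):

```python
# EIN as defined above: the RMS output noise dY(X) measured at each ramp
# level, referred back to the link input by dividing by the link gain G.

def equivalent_input_noise(rms_out, gain):
    """Refer measured output RMS noise back to the link input."""
    return [dy / gain for dy in rms_out]

def meets_ein_spec(v_in, ein, limit=2.4e-3, lo=-0.3, hi=0.3):
    """Check EIN against the 2.4 mV limit over the -300..+300 mV interval."""
    return all(e <= limit for x, e in zip(v_in, ein) if lo <= x <= hi)

v_in = [-0.4, -0.2, 0.0, 0.2, 0.4]
rms_out = [2.0e-3, 2.2e-3, 2.1e-3, 2.3e-3, 3.0e-3]   # volts at the receiver
ein = equivalent_input_noise(rms_out, gain=1.0)
print(meets_ein_spec(v_in, ein))  # -> True: the 3.0 mV point lies outside the window
```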
3)Deviation from linearity<br />
The link transfer characteristic measurement is also<br />
used to determine the deviation from linearity of the optohybrid<br />
response. The linear regression fit is used to<br />
calculate the Equivalent Input Non Linearity, EINL(X),<br />
defined as:<br />
EINL(X) = (Y(X) − G·X) / G<br />
where Y(X) is the measured output voltage and GX is the<br />
linear fit. The specification limits on EINL are: 9 mV in any<br />
100 mV or 200 mV window within –300 to +300 mV, and<br />
18 mV in any 400 mV window within the same range. The<br />
equivalent input non-linearity (EINL) for some of the tested<br />
opto-hybrid channels is reported in Figure 7. The results are<br />
within the specifications given above.<br />
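A sketch of the windowed specification check, assuming the limits apply to the peak-to-peak EINL excursion within each sliding window (one plausible reading of the specification, not stated verbatim in the text):

```python
# EINL per the definition above, plus a sliding-window check of the quoted
# limits: 9 mV in any 100/200 mV window, 18 mV in any 400 mV window.

def einl(v_in, v_out, gain):
    """Equivalent input non-linearity: (Y(X) - G*X) / G at each point."""
    return [(y - gain * x) / gain for x, y in zip(v_in, v_out)]

def worst_window(v_in, e, width, lo=-0.3, hi=0.3):
    """Largest peak-to-peak EINL over any window of the given width."""
    worst = 0.0
    for x0 in v_in:
        if lo <= x0 and x0 + width <= hi:
            vals = [ei for x, ei in zip(v_in, e) if x0 <= x <= x0 + width]
            if vals:
                worst = max(worst, max(vals) - min(vals))
    return worst

v_in = [i / 100 for i in range(-50, 51)]          # -0.5 .. 0.5 V ramp
v_out = [1.0 * x + 0.004 * x * x for x in v_in]   # toy link, mild curvature
e = einl(v_in, v_out, gain=1.0)
print(worst_window(v_in, e, 0.2) <= 9e-3,
      worst_window(v_in, e, 0.4) <= 18e-3)        # -> True True
```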
4)Bandwidth<br />
The bandwidth of the optical link has been measured with<br />
a network analyser. Values found for the tested opto-hybrids<br />
are around 90 MHz, slightly lower than the specification.<br />
These results are, nevertheless, in agreement with those<br />
found for the laser driver itself. Higher bandwidth values<br />
will be achieved in the future with a faster laser driver.
Figure 7: Equivalent Input Non Linearity for the measured<br />
opto-hybrid channels.<br />
B. Thermal qualification<br />
The nominal operating temperature of the CMS tracker<br />
is –10 °C. The thermal qualification of the opto-hybrid<br />
equipped with all its parts follows some preliminary<br />
validation tests on single components. For example, the<br />
choice of resin for gluing the laser driver and laser diodes<br />
to the substrate is strongly affected by its thermal response:<br />
outgassing during the resin cure is highly undesirable, since<br />
it can damage the optoelectronic components.<br />
A total of 100 consecutive thermal cycles has so far been<br />
used for a preliminary test of the CTE (Coefficient of<br />
Thermal Expansion) matching between the opto-hybrid<br />
substrate, the resin and the components. At this stage, only<br />
dummy optoelectronic components have been used.<br />
temperature cycle generated by the climatic chamber used<br />
for the thermal qualification is reported in Figure 8. The<br />
thermal qualification of the fully populated opto-hybrid<br />
circuit will be the scope of future tests.<br />
Figure 8: Temperature cycle generated by the climatic chamber.<br />
C. Irradiation qualification<br />
The radiation environment corresponding to 10 years of<br />
LHC life has to be reproduced in irradiation facilities with<br />
beams of nucleons and gammas in order to check the<br />
radiation hardness of the opto-hybrid circuit. A first survey<br />
was done by irradiating with low energy neutrons (
Status Report of the ATLAS SCT Optical Links.<br />
D.G.Charlton, J.D.Dowell, R.J.Homer, P.Jovanovic, T.J. McMahon, G.Mahout<br />
J.A.Wilson<br />
School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT, UK<br />
N. Kundu, R.L.Wastie, A.R.Weidberg<br />
Physics Department, Oxford University, Keble Road, Oxford, OX1 3RH, UK,<br />
t.weidberg@physics.ox.ac.uk<br />
S.B. Galagedera, J. Matheson, C.P. Macwaters, M.C.Morrissey,<br />
CLRC Rutherford Appleton Laboratory, Chilton, Didcot, Oxon, OX11 OQX, UK<br />
Abstract<br />
The ATLAS SCT optical links system is reviewed. The<br />
assembly and testing of prototype opto-harnesses are<br />
described. Results are also given from a system test of the<br />
SCT barrel modules, including optical readout.<br />
I. INTRODUCTION<br />
Optical links will be used for the readout of the ATLAS<br />
SCT and Pixel detectors [1]. The specifications for the links<br />
are summarised briefly in section II. The radiation hardness<br />
of the system is briefly reviewed in section III. The<br />
assembly and test results of the prototype barrel opto-harness<br />
are described in section IV, and a similar discussion for the<br />
forward fibre harness is given in section V.<br />
Some results from the SCT barrel system test are given in<br />
section VI. Conclusions and future prospects are discussed<br />
in section VII.<br />
A.Rudge, B. Skubic<br />
CERN, CH-1211, Geneva 23, Switzerland<br />
M-L.Chu, S.C.Lee, P.K.Teng, M.J.Wang, P.Yeh<br />
Institute of Physics, Academia Sinica, Taipei, Taiwan 11529.<br />
II. LINKS SPECIFICATIONS<br />
The SCT links transfer digital data from the SCT<br />
modules to the off-detector electronics (RODs) at a rate of<br />
40 Mbits/s. Optical links are also used to transfer Timing,<br />
Trigger and Control (TTC) data from the RODs to the SCT<br />
modules. Biphase mark encoding is used to send the 40<br />
Mbit/s control data for a module on the same fibre as the 40<br />
MHz bunch crossing clock.<br />
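Biphase mark encoding merges clock and data on one fibre by toggling the line at every bit boundary and adding a mid-bit toggle for a '1'. A minimal illustrative encoder (a sketch of the coding scheme, not the BPM-4 implementation):

```python
# Biphase-mark (BPM) encoder: each input bit occupies two half-cells of the
# 40 MHz clock period. The line level toggles at the start of every bit cell
# (carrying the clock), and toggles again mid-cell to encode a '1'.

def biphase_mark_encode(bits, level=0):
    halves = []
    for b in bits:
        level ^= 1            # transition at every bit boundary
        halves.append(level)
        if b:
            level ^= 1        # extra mid-cell transition encodes a '1'
        halves.append(level)
    return halves             # two half-cells per input bit

print(biphase_mark_encode([1, 0, 1]))  # -> [1, 0, 1, 1, 0, 1]
```

Because the boundary transitions are guaranteed, the receiver (here the DORIC4A ASIC) can recover the 40 MHz clock from the encoded stream and then extract the data.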
The architecture, illustrated in Figure 1 below, provides<br />
immunity to single-point failure to maximise the system<br />
robustness [1]. If a TTC link to a module fails, the TTC data<br />
can be routed to the module from the opto-flex of its<br />
neighbouring module; 12 modules are connected in a<br />
redundancy loop. One data link reads out the data from the<br />
6 ABCDs on one side of the module. If one of the two data<br />
links for a module fails, the corresponding data can be<br />
routed through the other data link.
Figure 1 SCT links architecture.<br />
III. RADIATION HARDNESS<br />
The radiation hardness and lifetime after irradiation of<br />
the PIN diodes have been demonstrated up to a fluence of<br />
10¹⁵ 1-MeV-equivalent neutrons/cm² [2]. The radiation hardness and lifetime<br />
after irradiation of VCSELs produced by Truelight have<br />
been tested with good results[3]. The radiation hardness<br />
and lifetime of the front-end ASICs VDC and DORIC4A<br />
have been shown to be sufficient for the SCT<br />
application[4]. The pure silica core step index fibre has<br />
been shown to be extremely radiation hard[5]. The effects<br />
of Single Event Upsets on the system have been studied<br />
and shown to be acceptable for the SCT operation in<br />
LHC[6].<br />
IV. BARREL OPTO HARNESS<br />
The barrel opto-harness provides all the electrical and<br />
optical services for 6 barrel SCT modules. A harness<br />
contains 6 opto-flex kapton cables connected to 6 sets of<br />
low mass Aluminium tapes to bring in the electrical power.<br />
The VCSEL/PIN opto-package and the DORIC4A and<br />
VDC ASICs[4] are die bonded to the opto-flex and then<br />
wire bonded as shown in Figure 2 below. The single fibres<br />
from the pig-tailed opto-package are protected by 900 µm<br />
diameter furcation tubing. The two data fibres from each<br />
opto-flex are fusion spliced into a 12-way ribbon and the 6<br />
TTC fibres are fusion spliced into a 6-way ribbon. The data<br />
and TTC ribbons are terminated with MT12 and MT8<br />
connectors.<br />
Figure 2 Opto-package and ASICs wire bonded to optoflex.<br />
Figure 3 Opto-flex and Aluminium low mass cables with<br />
three fibres in furcation tubing.<br />
The completed harness is shown in Figure 4 below.<br />
Figure 4 A prototype barrel opto-harness.<br />
The average coupled power of the VCSELs was measured<br />
as a function of the drive current and the results are shown<br />
in Figure 5 below.
Figure 5 LI curves for VCSELs on an opto-harness. The<br />
mean DC power is measured at 50% duty cycle.<br />
The BER of the data links was measured by sending 40<br />
Mbit/s pseudo-random data to the VDC ASICs [4] and<br />
receiving the optical signal with 4-channel PIN arrays and<br />
the DRX-4 receiver ASIC. The results of the BER scan as<br />
a function of the DAC value, which controls the DRX<br />
discriminator level, are shown in Figure 6 below. From this<br />
it can be seen that there is a wide range of DAC values for<br />
which the system can be operated without any errors.<br />
Figure 6 BER scan for the 12 data links on a harness as<br />
function of the DAC value that sets the discriminator<br />
value for the DRX-4 ASIC.<br />
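The threshold scan described above can be sketched as follows; the signal model, noise level and names are assumptions for illustration, not the actual DRX-4 test code:

```python
# BER threshold scan: slice a noisy received waveform at a candidate
# discriminator level and compare against the transmitted pseudo-random
# pattern. The error-free plateau corresponds to the flat region of Figure 6.
import random

def ber_at_threshold(tx_bits, rx_levels, threshold):
    """Fraction of sliced bits that disagree with the transmitted pattern."""
    errors = sum((level > threshold) != bool(bit)
                 for bit, level in zip(tx_bits, rx_levels))
    return errors / len(tx_bits)

# Toy link model: logic levels 0/1 plus Gaussian receiver noise.
random.seed(42)
tx = [random.getrandbits(1) for _ in range(20000)]
rx = [bit + random.gauss(0.0, 0.05) for bit in tx]

# Scanning the threshold maps out the error-free operating window.
plateau = [t / 10 for t in range(3, 8)
           if ber_at_threshold(tx, rx, t / 10) == 0.0]
print(plateau)
```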
The BER of the TTC links was measured in a similar<br />
way. The BPM-4 ASIC was used for biphase mark<br />
encoding of the 40 Mbits/s control data signal with the 40<br />
MHz clock and used to drive VCSELs. The optical signal<br />
was taken to the PIN diode on the opto-package and the<br />
resulting electrical signal decoded by the DORIC4A<br />
ASIC[4] on the opto-flex cable. The BER was measured by<br />
comparing the recovered data with the sent data. The BER<br />
was measured as a function of the DAC value, which<br />
controls the amplitude of the optical signal. The results for<br />
one harness are shown in Figure 7 and demonstrate that<br />
there is a large range of DAC values for which the system<br />
works reliably.<br />
Figure 7 BER scan for the 6 TTC links on an opto-harness<br />
as a function of the DAC value which sets the<br />
current for the VCSELs.<br />
Four such prototype barrel opto-harnesses have been<br />
assembled and tested. These opto-harnesses are being used<br />
in the barrel SCT system test at CERN (see section VI).<br />
V. FORWARD FIBRE HARNESS<br />
The services for one of the forward SCT disks are<br />
illustrated in Figure 8 below.<br />
Figure 8 Forward SCT services<br />
The electrical and optical services for the forward SCT<br />
are separated. The optical services consist of 6 optopackages<br />
assembled on a PCB with a 6 pin connector. The<br />
PCB plugs into a connector on the main forward SCT<br />
hybrid and the DORIC4A and VDC ASICs are mounted on<br />
the hybrid. A photograph of one of these forward opto<br />
plug-in packages is shown in Figure 9 below.
Figure 9 photograph of a forward SCT plug-in optopackage.<br />
The individual fibres are protected by the same<br />
furcation tubing as for the barrel. The individual fibres are<br />
spliced into 12 way and 6 way ribbons in the same way as<br />
for the barrel opto-harness. A photograph of a completed<br />
forward fibre harness is shown in Figure 10 below.<br />
Figure 10 A forward fibre harness containing 6 plug-in<br />
opto-packages.<br />
Six of these forward fibre harnesses have been assembled.<br />
Tests equivalent to those performed for the barrel<br />
opto-harnesses have been carried out, and all the harnesses<br />
are fully functional.<br />
VI. BARREL SYSTEM TEST<br />
The four barrel opto-harnesses have been used in the<br />
SCT barrel system test at CERN. A photograph of 15 barrel<br />
SCT modules mounted on a carbon fibre sector with three<br />
of the four opto-harnesses is shown in Figure 11 below.<br />
Figure 11 The barrel SCT system test.<br />
The system test has been used to perform many studies<br />
and full information is available[7]. One of the key tests<br />
performed was to measure the noise of modules in the<br />
system test and compare this with the noise values<br />
measured for individual modules on an electrical test stand.<br />
The results shown in Figure 12 below show no evidence for<br />
any excess system noise.<br />
Figure 12 Measured noise for modules measured with<br />
optical readout at the system test compared with<br />
measurements of the same modules on an electrical test<br />
stand.<br />
One of the key performance specifications for a binary<br />
system is the noise occupancy. The results of noise<br />
occupancy measurements for the 12 ASICs on the 15 barrel<br />
modules are shown in Figure 13 below and are generally<br />
lower than the system specification of 5 × 10⁻⁴.
Figure 13 Measured noise occupancy for the 15 barrel<br />
SCT modules in the system test as a function of chip<br />
number on the module.<br />
Another interesting measurement from the point of view<br />
of the optical links is the use of the redundant TTC links.<br />
This requires sending the TTC signals a relatively long<br />
way to a neighbouring module. Since these lines run parallel to<br />
the silicon strips there is a potential pick-up problem. To<br />
test this 12 modules were mounted on neighbouring<br />
harnesses. The redundant TTC links were used for 8 out of<br />
the 12 modules (those for which the redundant TTC links<br />
were functional). The noise was measured for this<br />
configuration and compared with the noise measured with<br />
the modules receiving their normal TTC data (local TTC<br />
data). The data shown in Figure 14 below show no<br />
evidence for any significant increase in noise.<br />
Figure 14 Measured difference in noise for modules<br />
read out using the redundant TTC links compared to<br />
the noise measured using the normal TTC links.<br />
VII. CONCLUSIONS AND OUTLOOK<br />
Prototype barrel and forward SCT harnesses have been<br />
successfully assembled and tested. The barrel harnesses<br />
have been used in the barrel SCT system test at CERN. The<br />
results are very encouraging for the operation of the<br />
system. Slightly modified prototype harnesses are now<br />
being assembled to take into account the new round cooling<br />
pipe. A further round of system tests will be required for<br />
these harnesses as well as a forward SCT system test.<br />
The prototyping for the on-detector components should<br />
be completed this autumn and production started early in<br />
2002.<br />
VIII. ACKNOWLEDGEMENTS<br />
Financial help from the UK Particle Physics and<br />
Astronomy Research Council is acknowledged.<br />
IX. REFERENCES<br />
1. ATLAS Inner Detector TDR, CERN/LHCC/97-<br />
16.<br />
2. J.D. Dowell et al., Radiation Hardness and<br />
Lifetime Studies of the Photodiodes for the<br />
Optical Readout of the ATLAS SCT, Nucl. Instr.<br />
Meth. A 456 (2000) 292.<br />
3. Information available on www at url:<br />
http://hepmail.phys.sinica.edu.tw/~atlas/radiation.<br />
html<br />
4. D.J. White et al., Radiation Hardness Studies of<br />
the Front-end ASICs for the Optical Links of the<br />
ATLAS SemiConductor Tracker, Nucl. Instr.<br />
Meth. A457 (2001) 369.<br />
5. G. Mahout et al, Irradiation Studies of multimode<br />
optical fibres for use in ATLAS front-end links,<br />
Nucl. Instr. Meth. A 446 (2000) 426.<br />
6. J.D. Dowell et al., Single Event Upset Studies<br />
with the optical links of the ATLAS<br />
semiconductor tracker, accepted for publication in<br />
Nucl. Instr. Meth. A.<br />
7. Information available on www at url:<br />
http://asct186.home.cern.ch/asct186/systemtest.ht<br />
ml.
Prototype Analogue Optohybrids for the CMS Outer Barrel and Endcap Tracker<br />
J. Troska, M.-L. Chu † , K. Gill, A. Go ∗ , R. Grabit, M. Hedberg, F. Vasey and A. Zanet<br />
CERN, CH-1211 Geneva 23, Switzerland<br />
∗ Department of Physics, National Central University, Taiwan<br />
†<br />
High Energy Physics Laboratory, Institute of Physics, Academia Sinica, Taiwan<br />
jan.troska@cern.ch<br />
Abstract<br />
Prototype analogue optohybrids have been designed and<br />
built for the CMS Tracker Outer Barrel and End Cap<br />
detectors. The total requirement for both types in CMS is<br />
12900 units, to be assembled between 2002 and 2004. Using<br />
close-to-final optoelectronic and electronic components,<br />
several optohybrids have been assembled and tested using<br />
standardised procedures very similar to those to be<br />
implemented during production. Analogue performance has<br />
met the specifications in all cases, both when operated in<br />
isolation and when inserted into the full prototype optical<br />
readout system.<br />
I. INTRODUCTION<br />
The CMS Tracker readout system consists of ~10 million<br />
individual detector channels that are time-multiplexed onto<br />
~40000 uni-directional optical links for transmission between<br />
the detector and ~65m distant counting room[1]. Data are<br />
transmitted in analogue fashion for digitisation at the Front-<br />
End Driver (FED) Boards located in the shielded counting<br />
room. Thus only the transmitting elements of the analogue<br />
optical links are located in the radiation area of the<br />
experiment. All electro-optical components of the optical<br />
transmission system have been proven to function within<br />
specifications after exposure to radiation levels beyond those<br />
expected in the CMS Tracker[2,3]. A system-level diagram<br />
of the CMS Tracker readout system is shown in Figure 1.<br />
Hybrid circuits are required to carry the electro-optical<br />
components (linear laser driver and 2 or 3 laser diodes) to be<br />
situated in close proximity to the detector hybrids distributed<br />
throughout the CMS Tracker. A block diagram highlighting<br />
the required interfaces of the analogue optohybrid is shown in<br />
Figure 2. The requirement for such Optohybrids matches the<br />
number of detector hybrids one-to-one, yielding a total<br />
number for the whole Tracker of approximately 17000.<br />
Responsibility for the design and procurement of optohybrids<br />
has been split according to the destination of the optohybrids<br />
within the Tracker: HEPHY Wien (Austria) has the<br />
responsibility to supply the 12900 optohybrids for Tracker<br />
Outer Barrel (TOB) and Tracker EndCap (TEC); while INFN<br />
Perugia (Italy) will supply the remaining fraction for Tracker<br />
Inner Barrel (TIB) and Tracker Inner Disks (TID).<br />
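A quick cross-check of these counts, taking the ~4000 TIB/TID optohybrids quoted in the companion Perugia paper in these proceedings:

```python
# Optohybrid-count bookkeeping from the text: the 12900 TOB/TEC units plus
# the ~4000 TIB/TID units (figure quoted in the companion TIB/TID paper)
# account for the ~17000 total for the whole Tracker.
TOB_TEC_OPTOHYBRIDS = 12900
TIB_TID_OPTOHYBRIDS = 4000

total = TOB_TEC_OPTOHYBRIDS + TIB_TID_OPTOHYBRIDS
print(total)  # -> 16900, i.e. "approximately 17000"
```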
The Tracker Optical Links group at CERN has developed<br />
several prototype optohybrids suitable for use by TOB and<br />
TEC. The prototype development was carried out by CERN<br />
as the responsibility for the final supply was only recently<br />
apportioned. This paper will describe the prototype circuit,<br />
the test methods used to characterise its performance and the<br />
results obtained from this characterisation.<br />
Figure 1: CMS Tracker Readout System, with Analogue Optical<br />
Link highlighted on left-hand side.<br />
[Figure 2 block diagram: differential signal inputs from the APV<br />
mux; electrical connector carrying I²C control and reset from the<br />
hybrid; laser driver; lasers; fibre clamp; 0.3 to 3 m fibre bundle<br />
to the distributed patch panel.]<br />
Figure 2: Analogue optohybrid block diagram.<br />
II. ANALOGUE OPTOHYBRID DESIGN<br />
The major design driver of the optohybrid substrate layout<br />
was the physical size of the final object, as the optohybrid<br />
must integrate into the mechanical structure of the Tracker,<br />
where space is at a premium. Effort was put into achieving a<br />
design that meets both TOB and TEC requirements, and a<br />
solution was found that is adapted to both sub-systems by<br />
simple differential assembly: the electrical interface<br />
connector is mounted on the component side for the TOB<br />
and on the reverse side for the TEC. In all other respects the<br />
two optohybrid types are identical.<br />
The electrical design and layout of the prototype<br />
TOB/TEC optohybrid was done at CERN and the PCBs were<br />
subsequently produced in Taiwanese industry. The board<br />
layout is shown in Figure 3, which also shows the dimensions
of the PCB: 23 × 30 mm. The PCB thickness is 0.5 mm,<br />
leading to an overall thickness of 3 mm for a populated TOB<br />
optohybrid and 6 mm for a populated TEC optohybrid. The<br />
prototype optohybrid design is specific to the first version of<br />
linear laser driver ASIC realised in 0.25µm technology, but<br />
can host two types of laser diode from candidate<br />
manufacturers. Attachment points have been added to meet<br />
the requirements of TOB and TEC in terms of cooling as well<br />
as mechanical restraint.<br />
Of the 30 optohybrid substrate PCBs produced, 10 were<br />
populated with passive components in Taiwan, while the<br />
remaining 20 of the prototype batch were fully assembled at<br />
CERN. In all cases the ASIC and laser diodes were glued and<br />
wire-bonded at CERN. Examples of fully populated TOB and<br />
TEC optohybrids are shown in Figure 4.<br />
Figure 3: Layout of CERN-design Analogue Optohybrid, showing<br />
connector header on left-hand side, driver ASIC in the center and<br />
lasers on right-hand side.<br />
Figure 4: TOB (top) and TEC (bottom) optohybrids with MU optical<br />
connector.<br />
Half of the total prototype run of 30 optohybrid substrates<br />
has been assembled with connector sockets for use as TEC-type<br />
optohybrids, with the other half having been assembled<br />
with connector headers for use as TOB-type optohybrids.<br />
Eleven TEC-type optohybrids have been assembled with a<br />
full complement of three laser diodes using four different<br />
prototype configurations of laser diode- and optical connector<br />
type. The remaining four TEC-type optohybrids were used as<br />
mechanical samples. Eight TOB-type optohybrids have to<br />
date been assembled with three laser diodes of the latest<br />
generation of close-to-final laser packaging configuration and<br />
MU-type optical connectors as shown in Figure 4.<br />
It should be noted that the optohybrids produced could be<br />
used in CMS but for the fact that the ASIC design has<br />
changed sufficiently to require a PCB layout change. This is<br />
partly due to the fact that the laser driver will be placed in a 5<br />
× 5 mm LPCC package, which will simplify the assembly of<br />
the optohybrid by reducing the number of wire-bonds<br />
required during assembly and allow pre-testing of the ASIC.<br />
III. TEST METHODS<br />
Testing of the prototype optohybrids has been carried out<br />
in-system by measuring the performance via pre-prototype<br />
optical receivers. The test methods used have previously<br />
been described in detail[4], but will be outlined here.<br />
Static characterisation of optical links containing<br />
optohybrids is carried out as follows (refer to Figure 5):<br />
1. A fast ramp (staircase) is injected at the optohybrid<br />
input<br />
2. This ramp is measured at the output of the optical<br />
receiver using a 12-bit ADC to obtain the transfer<br />
characteristic<br />
3. A slow ramp (staircase) is injected at the optohybrid<br />
input<br />
4. At each DC level of the ramp the AC-coupled output is<br />
sampled with an oscilloscope to obtain the noise<br />
Figure 5: Analogue OptoHybrid (AOH) static characterisation setup: an arbitrary waveform generator drives the AOH, whose output reaches the receiver (Rx) via MU and MPO fibre connections; a VME ADC and a triggered oscilloscope, both under GPIB control from a computer, record the output.<br />
The linearity performance of the optohybrid under test can<br />
be calculated as the deviation from a straight-line fit to the<br />
static transfer characteristic. To assess the performance the<br />
deviation from linearity is referred to the input of the optical<br />
link to yield the Equivalent Input Non-Linearity (EINL). The<br />
system specification for EINL is better than 12mV, which<br />
corresponds to 2% integral non-linearity over the input<br />
operating range of 600mV.<br />
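The EINL computation described above can be illustrated with a short sketch (hypothetical helper names, not the authors' actual analysis code): fit a straight line to the measured transfer curve over the operating range, then refer the worst-case deviation back to the input by dividing by the fitted gain.<br />

```python
import numpy as np

def equivalent_input_nonlinearity(vin, vout, fit_range=0.3):
    """Fit a straight line over the operating range and refer the
    worst-case deviation back to the link input (EINL), in volts."""
    mask = np.abs(vin) <= fit_range
    gain, offset = np.polyfit(vin[mask], vout[mask], 1)   # straight-line fit
    deviation = vout[mask] - (gain * vin[mask] + offset)  # residual at output
    einl = np.max(np.abs(deviation)) / abs(gain)          # refer to the input
    return gain, einl

# Synthetic example: a gain-0.8 link with a mild cubic compression term.
vin = np.linspace(-0.4, 0.4, 801)
vout = 0.8 * vin - 0.05 * vin**3
gain, einl = equivalent_input_nonlinearity(vin, vout)
print(f"gain = {gain:.3f} V/V, EINL = {einl * 1e3:.2f} mV")
```

For this gentle distortion the EINL comes out well below the 12mV specification; a channel with stronger compression near the range edges would fail in the same computation.<br />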
In order to assess the noise performance of the optohybrid<br />
under test, the measured raw noise is also referred to the input<br />
to yield the Equivalent Input Noise (EIN). The system<br />
specification for EIN is better than 2.4mV over the 600mV<br />
optical link input range, allowing a system peak<br />
Signal-to-Noise Ratio >256:1.<br />
Dynamic characterisation of the prototype optohybrids<br />
was carried out by measuring the pulse response of the optical<br />
link system. A periodic input pulse train of ±400mV at<br />
10MHz was used for this test. The rise time of the input<br />
signal was below 1ns. The rise time of the output signal as<br />
well as the output pulse shape was used to infer the dynamic<br />
response of the optohybrid.
Crosstalk between channels on the prototype optohybrid<br />
was measured by injecting the same signal used for dynamic<br />
characterisation into one of the three channels on the<br />
optohybrid under test and measuring the output of the other<br />
channels using a separate receiver. In this way any receiver<br />
crosstalk effects are removed from the measurement results.<br />
All measurements (unless otherwise stated) were carried<br />
out with the optohybrid under test located in a temperature<br />
controlled chamber at 25°C.<br />
The experience gained from optohybrid testing has been<br />
used to define the test sequences for use during final<br />
production and to provide a basis for the implementation of an<br />
automated production test station for optohybrids.<br />
IV. RESULTS<br />
Of the 20 optohybrids fully populated with laser diodes to<br />
date, 11 have been characterised using the methodology<br />
described above. The same receiver channel has been used<br />
throughout the characterisation series to facilitate comparison<br />
between optohybrids without possible variations due to the<br />
receiver. It should be noted that the receiver used for this<br />
characterisation was a previous prototype design based on<br />
discrete components[5] whereas the amplifier array foreseen<br />
for use within CMS is a 12-channel ASIC. The gain of the<br />
receiver used for these comparative tests is higher than that of<br />
the final one, so that comparisons of the gain values obtained<br />
here with the nominal optical link system gain of 0.8V/V must<br />
be undertaken with caution. A further minor point is that the<br />
voltage output of the prototype can be both positive and<br />
negative and has a widely variable offset. In contrast the final<br />
receiving amplifier has more limited offset adjustment and<br />
only outputs positive signals.<br />
The static characteristics of each optohybrid were<br />
measured for all four possible gain settings (5.0mS, 7.5mS,<br />
10.0mS & 12.5mS) of the laser driver. These measurements<br />
yield a results-set such as the one shown in Figure 6 for each<br />
optohybrid measured. The transfer curve of Figure 6 (top) is<br />
fitted with a straight line over the operating input range<br />
(±300mV) and the resulting deviation of the data from the fit<br />
computed to yield the non-linearity plotted in Figure 6<br />
(middle), referred to the input by division by the measured<br />
gain. The input referred noise (Figure 6 – bottom) completes<br />
the basic results-set. The shaded areas in Figure 6 represent<br />
the operating range (±300mV) and maximum input range<br />
(±400mV) of the optical link, thus showing that the<br />
measurements are carried out over a wider input range. The<br />
nominal input to the optical link is 100mV per Minimum<br />
Ionising Particle and the APV operating range is 600mV.<br />
Figure 6 (bottom) shows the effect of gain on this<br />
computed measurement: the measured raw noise is very<br />
similar in magnitude for all gain settings, so dividing by a<br />
larger gain yields a smaller EIN. It is therefore<br />
advantageous in terms of noise to operate the laser driver at<br />
higher gain settings.<br />
Figure 6: Typical results-set for static characterisation of an analogue optohybrid: (top) transfer curve, Link Output (V); (middle) Equivalent Input Non-Linearity, EINL (mV); and (bottom) Equivalent Input Noise, EIN (mV). Each panel is plotted against Differential Input (V) for the four gain settings 5.0mS, 7.5mS, 10.0mS and 12.5mS.<br />
In order to more easily represent and compare the<br />
performance of many optohybrids the static characteristics are<br />
used to compute four figures of merit:<br />
1. Link Gain: the slope of the straight line fit to the<br />
data within the operating range.<br />
2. Link linear range: the input range over which the<br />
EINL is below the specified value of 12mV.<br />
3. Average EIN: the mean value of EIN over the<br />
operating range.<br />
4. Input range within noise spec: the input range<br />
starting at Vin = -300mV before EIN exceeds the<br />
specified value of 2.4mV<br />
The last figure of merit picks out channels which show<br />
spikes in the noise characteristic (e.g. Figure 9) even where<br />
the average EIN is below the specified value of 2.4mV.<br />
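The reduction of one static results-set to these four figures of merit could be sketched as follows (a minimal illustration with assumed array inputs and illustrative names, not the production test code):<br />

```python
import numpy as np

def figures_of_merit(vin, vout, ein, op_range=0.3,
                     einl_spec=0.012, ein_spec=0.0024):
    """Compute the four figures of merit from one static results-set.
    vin/vout: transfer curve (V); ein: equivalent input noise (V)."""
    op = np.abs(vin) <= op_range
    # 1. Link gain: slope of the straight-line fit within the operating range.
    gain, offset = np.polyfit(vin[op], vout[op], 1)

    # 2. Link linear range: input span over which EINL stays below 12 mV.
    einl = np.abs(vout - (gain * vin + offset)) / abs(gain)
    ok = vin[einl < einl_spec]
    linear_range = ok.max() - ok.min() if ok.size else 0.0

    # 3. Average EIN over the operating range.
    avg_ein = float(ein[op].mean())

    # 4. Input range within noise spec: from Vin = -op_range until EIN
    #    first exceeds 2.4 mV (this catches isolated noise spikes).
    start = int(np.argmin(np.abs(vin + op_range)))
    over = np.nonzero(ein[start:] >= ein_spec)[0]
    end = start + over[0] if over.size else len(vin) - 1
    return gain, linear_range, avg_ein, vin[end] - vin[start]

# Synthetic check: ideal gain-0.8 link, flat 1.5 mV noise with one spike.
vin = np.linspace(-0.4, 0.4, 81)
ein = np.full_like(vin, 0.0015)
ein[60] = 0.004                      # spike at Vin = +0.2 V
g, lin, avg, within = figures_of_merit(vin, 0.8 * vin, ein)
```

In this synthetic case the average EIN passes, but the fourth figure of merit stops at the spike, exactly the behaviour described above.<br />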
Figure 7 shows the four figures of merit for the complete<br />
results-set for eleven optohybrids at the four different gain<br />
settings. Also marked on the figures are the relevant<br />
specification levels for linear range, average EIN and input
range within noise spec. It is clear that the majority of the<br />
channels and gain settings measured meet the specified target<br />
levels of performance and that good performance has thus<br />
been achieved for this first prototype analogue optohybrid<br />
design.<br />
Figure 7: Figures of Merit for eleven prototype analogue optohybrids measured using standardised test procedures: Link Gain (V/V), Link linear range (V), Average EIN (mV) and Input range within noise spec. (V), each plotted per AOH channel number for the four gain settings (5.0mS, 7.5mS, 10.0mS, 12.5mS).<br />
Dynamic measurements carried out on the prototype<br />
analogue optohybrids show that the pulse response of the laser<br />
driver ASIC is not degraded by its placement on the<br />
optohybrid. Rise times in the range 3.8 – 4.2 ns were<br />
obtained from pulse response measurements carried out.<br />
These values translate to bandwidths slightly below the<br />
target of 90MHz, but are consistent with measurements made<br />
on the laser driver ASIC itself. The speed of the laser driver<br />
used to equip future versions of the optohybrid will be<br />
increased.<br />
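As a rough cross-check, the commonly used first-order relation BW ≈ 0.35/t_r (a single-pole approximation, not a statement from the paper) maps the measured rise times onto approximate bandwidths bracketing the 90MHz target:<br />

```python
def bandwidth_mhz(t_rise_ns):
    # BW ≈ 0.35 / t_rise: the standard single-pole (first-order) estimate
    return 0.35 / (t_rise_ns * 1e-9) / 1e6

for t in (3.8, 4.2):
    print(f"t_rise = {t} ns  ->  BW = {bandwidth_mhz(t):.0f} MHz")
```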
The measurement of crosstalk on the prototype optohybrid<br />
yielded results comfortably within the specified value<br />
(adjacent channel) of -55dB. The values obtained were<br />
consistently below -60dB for the nearest neighbour channel<br />
and dropped to below -70dB for the furthest neighbour<br />
channel.<br />
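Crosstalk figures of this kind are simply the ratio of induced to injected amplitude expressed in decibels; a minimal sketch (function name illustrative):<br />

```python
import math

def crosstalk_db(v_injected, v_induced):
    """Crosstalk: amplitude induced on a quiet channel relative to the
    signal injected on the aggressor channel, in dB (more negative
    means better isolation)."""
    return 20.0 * math.log10(v_induced / v_injected)

# E.g. 0.8 mV induced by an 800 mV injected pulse gives -60 dB.
print(crosstalk_db(0.8, 0.0008))
```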
V. REFERENCE CHAIN OPERATION<br />
In addition to the standalone characterisation of the<br />
prototype analogue optohybrid described in the previous<br />
section, some examples were included in a test of the full<br />
optical link chain. In this investigation the prototype<br />
optohybrids were inserted into a reference chain that<br />
contained the final type and number of optical connections<br />
and approximately the final fibre lengths as foreseen for the<br />
optical link system in CMS (Figure 1). Four TEC-type<br />
optohybrids were placed on a carrier board which was placed<br />
inside an environmental chamber so that the ambient<br />
temperature during operation could be varied (see Figure 8).<br />
The twelve optical channels (MU optical connectors) were<br />
connected to a single-fibre to ribbon fan-in (sMU to MPO<br />
connectors), then via ~100m ribbon fibre to a pre-final<br />
prototype 12-way receiver module (MPO receptacle) housed<br />
on a VME card.<br />
Figure 8: Four optohybrids (ringed) on test board during reference<br />
chain operation.<br />
The static measurements described earlier were carried out<br />
for all channels at both room temperature (25°C) and the<br />
nominal optical link operating temperature when it is installed<br />
in the CMS Tracker (-10°C). The results-set is shown in<br />
Figure 9. For these measurements the gain setting at the laser<br />
driver was chosen to give an overall channel gain as close to<br />
the nominal value of 0.8 as possible, with the laser bias setting<br />
then chosen to match the gain setting. The laser bias setting is<br />
chosen so as to operate the optical link above the laser<br />
threshold while keeping the bias current as low as possible.<br />
Minimising the bias current ensures that the noise<br />
performance over the entire input range of the optical link<br />
(±400mV) is adequate – higher bias currents leading in<br />
general to higher laser noise. The bias setting was revised to<br />
take into account the lower laser threshold when operating at -<br />
10°C, although for ease of comparison the gain setting was<br />
not varied as the temperature changed.<br />
Overall, the figures of merit obtained from the reference<br />
chain measurements (Figure 10) are very encouraging. The<br />
noise measured in the final-form optical link is lower than for<br />
the measurements presented in the previous section. Linearity<br />
remains good for all cases.<br />
The figure of merit most systematically affected by the<br />
change in operating temperature is the gain, which increases<br />
for all channels at lower temperature. This is believed to be<br />
largely due to changes in the coupling efficiency between<br />
laser die and optical fibre within the small form-factor laser<br />
diode package. It is clear that very good noise performance is<br />
achieved especially at the lower temperature, while the gain<br />
spread is tolerable[6].<br />
As well as obtaining the same typical values of risetime as<br />
for the measurements of the prototype optohybrids described<br />
in the previous section, the final system was exercised using a<br />
simulated APVMUX data-stream. An arbitrary waveform<br />
generator was used to mimic the data that will be transmitted<br />
through the final optical link in the CMS Tracker, where each<br />
optical channel will be used to transmit the output stream of<br />
one APVMUX channel. The data-stream as transmitted via a<br />
typical optical link channel of the reference optical link chain<br />
is shown in Figure 11. In the transition between temperatures<br />
the laser bias setting was changed as described above.
Figure 9: Static characteristics of all reference chain channels at room (25°C) and operating (-10°C) temperatures: (top) transfer curve, Link output (V); (middle) Equivalent Input Non-Linearity, EINL (mV); and (bottom) Equivalent Input Noise, EIN (mV), each plotted against Differential Input (V).<br />
Figure 10: Figures of Merit for all reference chain channels at 25°C and -10°C: Link Gain (V/V), Link linear range (V), Average EIN (mV) and Input range within noise spec. (V), plotted per receiver channel number.<br />
Figure 11: Simulated APVMUX data-stream (Vout versus time over ~4µs) at the output of the prototype optical link with very close to final components, at 25°C and -10°C.<br />
VI. CONCLUSIONS<br />
Approximately 20 prototype analogue optohybrids<br />
suitable for use in the Outer Barrel and Endcap of the CMS<br />
Tracker have been assembled from a common PCB design.<br />
They have been populated with close-to-final electronic and<br />
opto-electronic components and tested using standardised<br />
procedures which will form the basis of production testing of<br />
the final quantity of 12900.<br />
The performance of the prototype optohybrids has been<br />
shown to be within specifications for the majority of cases,<br />
while the outliers of the distribution are not far from meeting<br />
the required criteria. Figures of merit have been used to<br />
reduce the large raw dataset to ease the comparison of many<br />
devices. These provide an immediate overview of the key<br />
analogue optohybrid performance parameters of gain,<br />
linearity and noise.<br />
With the successful testing of the first prototype analogue<br />
optohybrids confidence has been gained that the transmitting<br />
components of the analogue optical link for the CMS Tracker<br />
can be successfully embedded into the overall system. Future<br />
testing will put the prototypes described here into a larger test<br />
of the full readout system including detector modules in their<br />
final mechanical structures.<br />
VII. REFERENCES<br />
[1] Addendum to the CMS Tracker TDR, CERN/LHCC 2000-<br />
016, CMS TDR 5 Addendum 1 (2000)<br />
[2] “Radiation Damage and Annealing in 1310nm<br />
InGaAsP/InP Lasers for the CMS Tracker”, K.Gill et al.,<br />
SPIE Vol. 4134 (2000)<br />
[3] “Radiation effects in commercial off-the-shelf singlemode<br />
optical fibres”, J.Troska et al., SPIE Vol. 3440,<br />
p.112 (1998)<br />
[4] “Evaluation and selection of analogue optical links for the<br />
CMS tracker - methodology and application”, F.Jensen et<br />
al., CMS Note 1999/074<br />
[5] “A 4-channel parallel analogue optical link for the CMS-<br />
Tracker”, F.Vasey et al., Proc. 4 th LEB Workshop, Rome,<br />
pp. 344-348 (1998)<br />
[6] “A model for the CMS tracker analog optical link”,<br />
T.Bauer et al., CMS Note 2000/056<br />
An Optical Link Interface for the Tile Calorimeter in ATLAS<br />
C. Bohm, D. Eriksson, K. Jon-And, J. Klereborn and M. Ramstedt<br />
Abstract<br />
An optical link interface has been developed in Stockholm<br />
for the Atlas Tile-Calorimeter. The link serves as a readout for<br />
one entire TileCal drawer, i.e. with up to 48 front-end<br />
channels. It also contains a receiver for the distribution of<br />
TTC clocks and messages to the full digitizer system.<br />
Digitized data is serialized in the digitizer boards and supplied<br />
with headers and CRC control fields. Data with this protocol<br />
is then sent via G-link to an S-link destination card where it is<br />
unpacked and parallelised with a specially developed Altera<br />
code. The entire read-out part of the interface has been<br />
duplicated for redundancy with two dedicated output fibers. The<br />
TTC distribution has also been made redundant by using two<br />
receivers (and two input fibers), both capable of distributing<br />
the TTC signal. To decrease the sensitivity to radiation, the<br />
complexity of the interface has been kept at a minimum. This<br />
is also beneficial to the system cost. To facilitate the<br />
mechanical installation, the interface has been given an L-shape<br />
so that it can be mounted closely on top of one of the<br />
digitizer boards without interfering with its components.<br />
I. INTRODUCTION<br />
Figure 1: Schematic diagram of the Tile Calorimeter, indicating an incident particle, the fibres, a calorimeter module and the drawer housing the PMTs.<br />
A. The Tile Calorimeter Digitizer<br />
The Atlas Tile Calorimeter Digitizer [1] digitizes signals<br />
from about 10000 PMTs, which record the light generated<br />
when particles are absorbed in the calorimeter. The detector<br />
electronics is located in "drawers" in the base of each of the<br />
256 calorimeter modules (Fig 1). Each drawer is responsible<br />
for reading out 45 PMT channels (32 in the outer calorimeter<br />
sections). The PMT channels are digitized by 8 digitizer<br />
boards, each capable of reading out 6 channels.<br />
(University of Stockholm, Sweden, October 2001; contact: daniel.eriksson@physto.se)<br />
To achieve a high degree of fault tolerance for the readout,<br />
the digitizers use Low Voltage Differential Signaling (LVDS) for<br />
the data transmission, and are electrically organized in a star<br />
formation, though mechanically mounted sequentially to<br />
reduce the number of loose cables (Fig. 2). The 8 boards are<br />
mounted so that they are read out from the end towards the<br />
middle where the optical read-out interface is located. The<br />
boards are connected with purpose-designed flat cables made of<br />
flexible Kapton films, "flex foils", in order to provide<br />
impedance-matched transmission lines.<br />
Figure 2: The data flow along a chain of digitizer boards. Each board carries Tile_DMUs and a TTC-rx; the boards connect electrically in a star to the central read-out interface and its optical fiber.<br />
This, of course, makes the interface a very vulnerable part<br />
of the read-out chain.<br />
B. The Optical Link<br />
An optical link read-out interface (Fig. 3) has been<br />
designed and tested at Stockholm University. It is designed to<br />
fit the over-all design philosophy of the digitizer system as<br />
well as the mechanical constraints of the TileCal drawers.<br />
Its task is to read out the data from 8 boards and to receive<br />
and distribute TTC signals [2] to the digitizer boards. Each<br />
board has 4 serial output lines and one clock input.<br />
Figure 3: Schematic design of the link. LVDS receivers take the data from digitizers 1-4 and 5-8 into two G-link multiplexers, each driving its own VCSEL; two O/E receivers with amplifiers and fan-outs, together with the TTC-rx chips and a selection multiplexer, distribute the TTC signal back to the digitizers and to the 3-in-1 system.<br />
In order to match the fault tolerance it is necessary to avoid<br />
single point failure modes in the design. Duplicating the<br />
interface board was considered too expensive and not consistent<br />
with the space requirements. However, the boards are designed<br />
with two parallel systems that operate independently and have
separate fibers for transmission and reception. Both channels<br />
transmit continuously, leaving the decision of which channel<br />
to use to the link destination card. This design is based on the<br />
dual G-link [7] concept developed for the Liquid Argon<br />
Calorimeter.<br />
The only functions that are not duplicated are the LVDS<br />
receivers, the TTC multiplexers and the decision mechanism<br />
that decides which of the two outgoing TTC channels should<br />
be transmitted to the digitizers. One error among the LVDS<br />
receivers will at worst kill the data from one digitizer board<br />
since each receiver serves one board. An error among the TTC<br />
multiplexers will at worst kill four boards. For the TTC<br />
decision mechanism, the intended solution is to use the PLL<br />
lock signal in one of the modules. There are provisions for<br />
testing this method in the present design. There will also be a<br />
circuit to ensure a long absence of lock before switching. An<br />
error in this mechanism will at worst freeze the choice of TTC<br />
channel.<br />
II. READOUT<br />
The readout design is intentionally made very simple. The<br />
principle is to move the logic components away from the<br />
interface card into the receiver card, whenever possible, thus<br />
avoiding costly radiation tolerance. The receiver is responsible<br />
for all operations on the data content, such as unpacking the<br />
data, checking the CRC, choosing which channel to use, and<br />
reformatting the data for the appropriate application.<br />
The readout uses the HDMP-1032 G-link from Agilent<br />
Technologies. This is a 16 bit serial interface for transmission<br />
rates up to 1.12 Gbit/s (32 bits at 35 MHz). This corresponds<br />
to a transmission rate of 1.4 Gbaud, since the G-link adds 4<br />
encoding bits. The link is presently run at 800 MBaud (20.04<br />
MHz), which is the likely readout speed. However, it may be<br />
possible to run it at 1.6 GBaud (40.08 MHz), which<br />
exceeds the specifications, but this has not yet been tested.<br />
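The quoted rates follow from the framing arithmetic described here, assuming 16 data bits framed into 20 line bits (the 4 added encoding bits) and two words per clock from dual-edge latching; a small sketch reproducing them (function name illustrative):<br />

```python
def glink_rates(clock_mhz, data_bits=16, frame_bits=20, words_per_clock=2):
    """Payload (Mbit/s) and line rate (MBaud) for dual-edge G-link framing."""
    payload = clock_mhz * data_bits * words_per_clock  # user data
    line = clock_mhz * frame_bits * words_per_clock    # data + encoding bits
    return payload, line

# 35 MHz gives the 1.12 Gbit/s / 1.4 GBaud maximum; 20.04 MHz the present
# 800 MBaud operation; 40.08 MHz the untested 1.6 GBaud option.
for clk in (35.0, 20.04, 40.08):
    payload, line = glink_rates(clk)
    print(f"{clk} MHz clock: {payload:.1f} Mbit/s payload, {line:.1f} MBaud")
```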
The data is first received by 8 LVDS to TTL converters,<br />
each chip receiving 4 bits, corresponding to two TileDMUs.<br />
This is the only connection between the redundant readout<br />
channels, and a failure here will affect both channels. But since<br />
there are 8 chips for the LVDS reception, a chip failure will at<br />
most affect only two TileDMUs.<br />
The data is then split to both readout channels, where it is<br />
multiplexed to the 16 bit input of the G-link. By latching the<br />
data on both rising and falling edge, there is no need for a<br />
faster clock. The output from the G-link is designed to<br />
transmit PECL swings directly into 50 ohm.<br />
The transmission lengths between the TileDMUs and the<br />
interface differ from board to board. To synchronize the data,<br />
the TTC clocks on each digitizer board are shifted using the<br />
TTCrx [2] clock deskew function.<br />
III. TTC DISTRIBUTION<br />
The interface card is also responsible for distributing the<br />
TTC signal to the digitizers. This distribution uses the same<br />
concept as the readout: two-channel redundancy with separate<br />
receivers and fibers, and a low level of complexity.<br />
The TTC signal received by each channel is first split in<br />
four by a 4-port LVDS repeater. One signal for the 3-in-1<br />
system [3], one for the channel’s own TTCrx, and two for the<br />
digitizers. The latter signals are then split again, also using<br />
4-port LVDS repeaters, two for each channel, for transmission<br />
to the 8 digitizer boards. By using the high impedance feature<br />
of the repeaters, the transmission lines can be driven by either<br />
of the two channels.<br />
For the channel selection, an analog band pass filter with<br />
discrete components was first intended, and implemented on<br />
the card, but a second solution, using the TTCrx READY<br />
signal will also be tested. The READY signal is set when the<br />
TTCrx PLL locks on to the incoming signal. With the second<br />
solution, channel switching will only occur if the TTCrx<br />
cannot establish lock. This solution may also include a delay<br />
to avoid unnecessary switching, depending on how robust the<br />
TTCrx READY signal is.<br />
IV. IMPLEMENTATION<br />
There are a number of constraints to be considered in the<br />
design of the interface card. It has to fit into the drawer, use<br />
only 3.3 V and have a low cost.<br />
Since the interface is mounted on top of a digitizer, there<br />
is a height limitation. Adding to the limitation are the 6 PMT<br />
connectors on the digitizer boards. To be able to mount the<br />
interface card directly on top of the digitizer, granting the<br />
interface more height, and still have easy access to the PMT<br />
connectors, the interface has been given an L shape (Fig. 4).<br />
The direct mounting of the card has the added advantage of<br />
eliminating one flex foil connection.<br />
Figure 4: L-shaped card<br />
The disadvantage is that it creates a heat problem, since<br />
both the interface card and the digitizer boards have<br />
components radiating large amounts of heat, especially the<br />
TTCrx and the G-link. To avoid a heat pocket between the<br />
boards, the critical components on the interface card are placed<br />
on the topside, where they can make better use of the cooling<br />
system in the drawer.<br />
A. The platform card<br />
The L shape also creates a problem with the optical<br />
transceivers. The shape is too narrow for standard commercial<br />
components, and there are no cheap miniature transceivers for<br />
3.3 V, and none at all for 1300 nm reception and 850 nm<br />
transmission. To solve this, separate receivers and transmitters<br />
have been specially built for the interface, with one end of the<br />
card dedicated to the optical communication, with the diodes<br />
mounted in ST housings, chosen for size, price and availability.<br />
Not using integrated components means that the<br />
connection between diode/VCSEL and amplifier can be up to 1<br />
cm in length. This is a critical point, for while the circuits on<br />
the PCB are matched for impedance, the diode pins are not. To<br />
bring the amplifiers closer to the headers, making the diode<br />
pins as short as possible, a platform card (Fig. 5) containing<br />
the amplifiers, is mounted on top of the interface card. Using a<br />
platform card also makes it easier to surface mount the diode<br />
pins, greatly improving the impedance match.<br />
Figure 5: Platform card<br />
B. The transmitter<br />
The transmitter is a standard solution from MAXIM, using<br />
the MAX3286 laser amplifier. The MAX 3286/96 series is<br />
optimized to drive lasers packaged in standard TO-46 headers,<br />
and consequently the VCSEL used, Zarlinks MF444, is<br />
packaged in a TO-46 header. This solution is compact and<br />
inexpensive.<br />
The G-link output is designed to deliver PECL into 50<br />
ohm. Since the differential MAX 3286 input accepts PECL<br />
swings, the connection between these circuits is a straight<br />
forward AC coupling with 100 ohm differential termination.<br />
C. The receiver<br />
The receiver is a PIN diode, Zarlink MF432, with a<br />
transimpedance amplifier (TIA), Philips TZA3033. Since the<br />
connection between the PIN diode and the TIA is critical to the<br />
performance, the preferable solution would have been to use a<br />
PIN-TIA combination, i.e. a PIN diode with a TIA mounted<br />
inside the header. Since there are no PIN-TIAs for this<br />
application, the PIN diode and TIA are separate. Since the<br />
input current is about 7-8 mA, and obviously very sensitive to<br />
noise, special attention has been paid to the layout around the<br />
input pin of the TIA. The input capacitance is minimized by<br />
surface mounting the diode pins and by removing the power<br />
planes beneath the input pins of the TIA.<br />
Also critical to capacitance is the reverse voltage across the<br />
diode. Presently the diode uses a reference voltage from the<br />
TZA 3033, which is only 2.25 V. Tests will be made with the<br />
VCC pin directly connected to the power plane. This will<br />
increase the reverse voltage but may introduce too much noise.<br />
The TZA features an automatic gain control loop, AGC,<br />
which maintains a stable output at 110 mV for a wide range of<br />
input currents. This means that no extra amplification is<br />
needed and that the output can be directly fed to the LVDS<br />
repeater, using only biased termination for the DC level.<br />
V. THE LINK DESTINATION CARD<br />
The currently used link destination card (LDC) is a single<br />
G-link simplex with an S-link output format [4]. This card is<br />
designed only for 20.04 MHz operation and can receive only<br />
one channel. For full-speed testing, a double G-link simplex<br />
with 40.08 MHz capacity is needed. It must also have a large<br />
FPGA, since much of the link data processing has been moved<br />
to the destination card. Since there are no commercially<br />
available solutions for 40.08 MHz readout, a new destination<br />
card would have to be developed if a full-speed link is desired.<br />
However, if the readout speed remains 20.04 MHz, the ODIN<br />
double G-link destination card [5] could be used, provided that<br />
it is fitted with a large enough FPGA.<br />
VI. PROJECT STATUS<br />
The interface link is presently being used in the testing of<br />
the digitizer boards. For these tests, the link destination card<br />
does not need two-channel capacity. Operation has been<br />
reliable during these production runs, but further tests,<br />
including bit-error tests and radiation tests, will be made to<br />
determine the optimum performance.<br />
As of September 2001, a Chicago-built interface card is the<br />
baseline link for TileCal, making the Stockholm interface card<br />
an alternative solution. However, some development of the<br />
Stockholm interface will continue until the Chicago solution<br />
is proven. This may include building a version using the<br />
CERN-developed Gigabit Optical Link (GOL). Such a design<br />
would lead to further simplification (eliminating the input<br />
multiplexers), better radiation tolerance (the GOL is made in<br />
DMILL [6]) and 40 MHz operation.<br />
VII. ACKNOWLEDGEMENTS<br />
We would like to thank Stefan Rydström and Mark Pearce<br />
at the Royal Institute of Technology, Section of Experimental<br />
Particle Physics, for their help with the design of the<br />
transmitter.<br />
VIII. REFERENCES<br />
1 The ATLAS Tile Calorimeter Digitizer, S. Berglund, C. Bohm,<br />
M. Engström, S-O. Holmgren, K. Jon-And, J. Klereborn, B.<br />
Selldén, S. Silverstein, K. Anderson, A. Hocker, J. Pilcher,<br />
H. Sanders, F. Tang and H. Wu, Proceedings of the Fifth<br />
Workshop on Electronics for LHC Experiments, Snowmass,<br />
Colorado, 1999, p. 255.<br />
2 http://www.cern.ch/TTC/intro.html<br />
3 Front-end electronics for ATLAS Tile Calorimeter, K.<br />
Andersson, J. Pilcher, H. Sanders, F. Tang, S. Berglund,<br />
C. Bohm, S-O. Holmgren, K. Jon-And, G. Blanchot and<br />
M. Cavalli-Sforza, Fourth Workshop on Electronics for LHC<br />
Experiments, Rome, 1998, p. 239.<br />
4 http://hsi.web.cern.ch/HSI/s-link/products.html#S-LINK<br />
5 http://hsi.web.cern.ch/HSI/s-link/devices/odin/<br />
6 Final acceptance of the DMILL Technology Stabilized at<br />
TEMIC/MHS, M. Dentan et al., Fourth Workshop on<br />
Electronics for LHC Experiments, Rome, 1998, p. 79.<br />
7 Redundancy or GaAs? Two different approaches to solve the<br />
problem of SEU (Single Event Upset) in a Digital Optical<br />
Link, B. Dinkespiler, R. Stroynowski, S. Xie, J. Ye, M-L.<br />
Andrieux, L. Gallin-Martel, J. Lundquist, M. Pearce, S.<br />
Rydstrom and F. Rethore.<br />
Development and a SEU Test of a TDC LSI for the ATLAS Muon Detector<br />
Yasuo Arai 1 , Yoshikazu Kurumisawa 2 and Tsuneo Emura 2<br />
Abstract<br />
A new TDC LSI (AMT-2) for the ATLAS Muon detector<br />
has been developed. The AMT-2 chip is the successor of the<br />
previous prototype chip (AMT-1). The design was refined in<br />
preparation for the mass production of 20,000 chips in 2002.<br />
In particular, the power consumption was reduced to less than<br />
half that of the previous chip by introducing newly developed<br />
LVDS receivers.<br />
The AMT-2 was processed in a 0.3 µm CMOS Gate-Array<br />
technology. It achieves 300 ps timing resolution and includes<br />
several data buffers, a trigger matching circuit, a JTAG<br />
interface and so on.<br />
A first SEU test using a proton beam was recently<br />
performed. Although the results are preliminary at this<br />
stage, the measured SEU rate is very low and safe for the<br />
ATLAS environment.<br />
I. INTRODUCTION<br />
The ATLAS precision muon tracker (MDT) requires high-resolution,<br />
low-power and radiation-tolerant TDC LSIs<br />
(called AMT: ATLAS Muon TDC). The total number of TDC<br />
channels required is about 370 k.<br />
The AMT chip is developed in a 0.3 µm CMOS Gate-<br />
Array technology (TC220G, Toshiba Co.). A block diagram of<br />
the chip is shown in Fig. 1. It contains 24 input channels, a 256-<br />
word level 1 buffer, an 8-word trigger FIFO and a 64-word<br />
readout FIFO. Both leading and trailing edge timings can be<br />
recorded. The recorded data are matched to the trigger signal<br />
timing, and the matched data are transferred through a 40 Mbps<br />
serial line. By using an asymmetric ring oscillator [1] and a<br />
Phase Locked Loop (PLL) circuit, the chip achieves 300 ps RMS<br />
timing resolution.<br />
A prototype chip, AMT-1, was successfully tested and<br />
reported at the last LEB workshop [1]. Already 500 AMT-1<br />
chips have been produced and mounted on front-end PC boards<br />
together with ASD (Amp/Shaper/Discri) chips [2]. These boards are<br />
being tested with MDT chambers in several laboratories.<br />
The AMT-2 chip is the successor of the AMT-1 chip and is<br />
regarded as a prototype for mass production. The major<br />
improvement in the AMT-2 is its reduced power consumption.<br />
The AMT-1 consumes about 800 mW, of which 470 mW is<br />
consumed in the LVDS receivers. We have developed a new low-power<br />
LVDS receiver and applied low-power design techniques<br />
to the logic. As a result, the power consumption of the AMT-2 is reduced<br />
to 360 mW.<br />
1 KEK, National High Energy Accelerator Research Organization,<br />
Institute of Particle and Nuclear Studies, (yasuo.arai@kek.jp)<br />
2 Tokyo University of Agriculture and Technology<br />
In addition, the chip testability was enhanced, which is very<br />
important for mass production. Mass-production chips will be<br />
tested mainly on the LSI testers of the manufacturer; only a very<br />
small fraction of the chips will be tested in our laboratory.<br />
Since the LSI testers run only at 10 MHz, special care must<br />
be taken to verify 40 MHz operation. We believe a reduced-<br />
voltage test will certify the operation. To verify the stability of<br />
the PLL, an internal counter is provided to count the 80 MHz PLL<br />
clock; after a fixed time, the counted value is checked.<br />
A photograph of the AMT-2 chip is shown in Fig. 2. The<br />
chip is packaged in a 144-pin plastic (ceramic) QFP with 0.5<br />
mm pin pitch. About 110k gates are used in a 6 mm by 6 mm<br />
die.<br />
The chip must be qualified to have adequate radiation<br />
tolerance for the ATLAS environment. Gamma-ray irradiation to<br />
measure Total Ionization Damage (TID) was already performed<br />
on the same process [3]. Recently we carried out a first<br />
Single Event Upset (SEU) test using a proton beam.<br />
Preliminary results are described in section IV.<br />
Fig. 1 Block diagram of the AMT-2 chip.
Fig. 2 Open view of the AMT-2 chip. The die size is about 6 mm by 6<br />
mm. The photograph shows a ceramic-packaged chip; plastic-packaged<br />
ones are used in circuit tests and beam tests.<br />
II. AMT-2 CIRCUIT DESCRIPTION<br />
The main specifications of the AMT-2 chip are summarized in<br />
Table 1. Since a detailed description of the AMT chip is<br />
available in other documents [4, 5], only the new features are<br />
presented here, after a brief explanation of the chip operation.<br />
Table 1 AMT-2 Specifications (@ 40 MHz System Clock)<br />
Least Time Count: 0.78125 ns/bit<br />
Time Resolution: 300 ps RMS<br />
Dynamic Range: 13 (coarse) + 4 (fine) = 17 bit<br />
Max. Trigger Latency: 16 bit (51 µs)<br />
Int./Diff. Non-Linearity: < 80 ps RMS<br />
No. of Channels: 24<br />
Level 1 Buffer: 256 words<br />
Read-out FIFO: 64 words<br />
Trigger FIFO: 8 words<br />
Double Hit Resolution: 99.8% @ 400 kHz (two edges)<br />
Hit Input Level: LVDS<br />
Data Output: LVDS serial (10-80 Mbps) or 32-bit parallel<br />
CSR Access: JTAG or 12-bit control bus<br />
General I/O: ASD control (5 pins), general out (12 pins) and in (3 pins)<br />
Power: 3.3 ± 0.3 V, ~360 mW<br />
Process: 0.3 µm CMOS Sea-of-Gates<br />
Package: 144-pin plastic QFP<br />
A. AMT-2 Operation<br />
The asymmetric ring oscillator produces a double-<br />
frequency clock (80 MHz) from the LHC beam clock (40 MHz).<br />
By dividing the 12.5 ns clock period into 16 intervals in the<br />
oscillator, a time bin size of 0.78 ns is obtained.<br />
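As a quick sanity check, the arithmetic above can be sketched in Python; the constants come from the text and the function names are purely illustrative:

```python
# Sanity check of the AMT time-interpolation arithmetic described above.
# Constants are taken from the text; function names are illustrative.

def amt_time_bin_ns(clock_mhz=40.0):
    """Least time count: the 80 MHz PLL period divided into 16 taps."""
    pll_period_ns = 1e3 / (clock_mhz * 2)  # 12.5 ns at the 40 MHz LHC clock
    return pll_period_ns / 16              # 0.78125 ns per bin

def amt_full_scale_us(bits=17):
    """Dynamic range: 13 coarse + 4 fine bits of least time counts."""
    return amt_time_bin_ns() * (1 << bits) / 1e3

print(amt_time_bin_ns())    # 0.78125 ns, the least time count of Table 1
print(amt_full_scale_us())  # 102.4 us full scale
```

The same arithmetic with 16 latency bits gives 51.2 µs, matching the maximum trigger latency quoted in Table 1.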
A hit signal causes the fine and coarse time measurements<br />
to be stored in individual channel buffers. The times of both<br />
the leading and trailing edges of the hit signal (or the leading edge<br />
time and the pulse width) can be stored. Each channel has a 4-word<br />
buffer where measurements are held until they can be<br />
written into the common level 1 (L1) buffer.<br />
The L1 buffer is 256 hits deep and is written as a<br />
circular buffer. Reading from the buffer is random access,<br />
so that the trigger matching can search for data belonging to<br />
the received triggers.<br />
Trigger matching is performed as a time match between a<br />
trigger time tag and the time measurements themselves. The<br />
trigger time tag is taken from the trigger FIFO and the time<br />
measurements are taken from the L1 buffer. Hits matching the<br />
trigger are passed to the read-out FIFO.<br />
The data are transferred to a Chamber Service Module<br />
(CSM) [6] through a serial data interface. The serial interface<br />
supports both the DS protocol and a simple data-clock output. The<br />
data transfer speed is selectable between 10 Mbps and 80 Mbps<br />
(40 Mbps will be used in the MDT).<br />
There are 15 control registers and 6 status registers,<br />
accessible through the JTAG interface. The total parity of<br />
the control registers is stored in a status register. If an SEU<br />
occurs in a control register, a parity error is raised and<br />
notified through an Error signal or an Error packet.<br />
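The total-parity check can be sketched as follows; this is an illustrative model of the idea, not the actual on-chip logic, and the register contents are hypothetical:

```python
# Illustrative model of the total-parity SEU flag on the control registers.
# This is a sketch of the idea, not the actual on-chip logic.

def total_parity(registers):
    """XOR of all bits of the control-register words (even parity)."""
    p = 0
    for word in registers:
        while word:
            p ^= word & 1
            word >>= 1
    return p

ctrl = [0b1011, 0b0010, 0b1111]   # hypothetical register contents
stored = total_parity(ctrl)       # stored in a status register at setup

ctrl[1] ^= 0b0100                 # simulate an SEU flipping one bit
if total_parity(ctrl) != stored:
    print("parity error")         # would raise the Error signal / packet
```

A single bit flip always changes the XOR of all bits, so any one SEU in the control registers is detected.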
The chip has a JTAG boundary-scan circuit, which is used<br />
to scan the I/O pins, the control and status registers, and internal<br />
circuit registers for debugging purposes, and provides BIST (Built-In<br />
Self-Test) for the level 1 buffer and the FIFOs. The channel<br />
buffers and the level 1 buffer have a parity bit for each word to<br />
detect SEU. In the AMT-2, an ASD control function through the<br />
JTAG is also added (see section II-C).<br />
B. New LVDS Receiver<br />
We used an existing Toshiba design for the LVDS receiver in<br />
the AMT-1. The power consumption of this receiver<br />
(15.5 mW) is not a problem if the number of<br />
receivers is small. However, we need 26 receivers (30 in the AMT-1),<br />
so the total power consumption becomes large<br />
(470 mW). Thus a new low-power LVDS receiver was<br />
required.<br />
Although the available transistor size is very limited in the I/O<br />
pad area of a Gate-Array, a low-power LVDS receiver was<br />
successfully developed while keeping adequate performance.<br />
Fig. 3 shows the performance of the previous and new LVDS<br />
receivers. The propagation delay is even improved while the<br />
constant current is reduced, achieved mainly by reducing the<br />
size of the input transistors.<br />
Fig. 3 Comparison of previous ("Old") and new LVDS receiver<br />
characteristics (simulation, ∆Vin = 100 mV): (a) constant current (Idd) vs. input<br />
common-mode voltage (Vicm); (b) propagation delay (Tpd) vs. Vicm.<br />
LH and HL denote low-to-high and high-to-low transitions of the<br />
output.<br />
C. ASD Control<br />
The ASD chip has many registers that hold the shaping time,<br />
the threshold DAC value, etc. On the present front-end<br />
board, a Xilinx chip is used to control these ASD registers from<br />
the JTAG signals.<br />
This Xilinx chip consumes additional power and area, and<br />
it must also be qualified for radiation. Since the power and the<br />
area budgets are very tight, it was decided to move the ASD control<br />
function into the AMT-2. Fig. 4 shows the connection<br />
scheme between the AMT and the ASD. Five signal lines<br />
are used between the ASD and the AMT-2.<br />
The protocol is almost the same as that of a JTAG boundary-scan cell.<br />
Data are shifted from ASDOUT to ASDIN through the three ASD<br />
chips. The shifted data are stored in shift cells and copied to<br />
shadow cells when the ASDLOAD signal is asserted.<br />
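A toy model of this shift-and-load protocol is sketched below; the cell counts and names are illustrative, not the real ASD register map:

```python
# Toy model of the ASD configuration chain: data shifted from ASDOUT to
# ASDIN through the three ASD chips, then copied to shadow cells when
# ASDLOAD is asserted. Cell counts and class names are illustrative.

class ASDChip:
    def __init__(self, nbits):
        self.shift = [0] * nbits    # serial shift cells
        self.shadow = [0] * nbits   # registers seen by the analogue part

    def clock_in(self, bit):
        """Shift one bit in; the bit pushed out feeds the next chip."""
        out = self.shift.pop()      # last cell leaves the chip
        self.shift.insert(0, bit)
        return out

    def load(self):
        """ASDLOAD asserted: copy shift cells into shadow cells."""
        self.shadow = list(self.shift)

def program_chain(chips, bitstream):
    """Shift a full configuration through the daisy chain, then load."""
    for bit in bitstream:               # each bit from the AMT's ASDOUT
        for chip in chips:
            bit = chip.clock_in(bit)    # ripples through to ASDIN
    for chip in chips:
        chip.load()

chain = [ASDChip(4) for _ in range(3)]  # three ASDs per AMT, as in the text
program_chain(chain, [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0])
print(chain[2].shadow)  # the first bits sent end up in the last chip
```

The shadow cells isolate the analogue settings from the shifting activity, so the ASD configuration only changes on the explicit load.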
In addition to the ASD control, 12 output and 3 input pins<br />
are provided as general-purpose I/O pins. These pins can be<br />
used when additional control lines are needed on the front-end<br />
board.<br />
Fig. 4 ASD internal circuit and control signals from the AMT-2.<br />
A. PLL<br />
III. MEASUREMENT RESULTS<br />
The jitter of the ring oscillator was measured from the time<br />
distribution between the input clock edge and the PLL clock edge<br />
(Fig. 5). The jitter at the operating point (3.3 V, 80 MHz) is 150 ps<br />
RMS. This value is sufficiently low compared with the<br />
digitization error of 225 ps. The total timing resolution for a<br />
single edge measurement will be about 300 ps, which satisfies<br />
the required resolution of 0.5 ns in the MDT.<br />
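The quoted numbers combine as expected: the digitization error of a 0.78125 ns bin is bin/√12 ≈ 225 ps, and adding the 150 ps jitter in quadrature gives roughly 270 ps, consistent with the quoted total of about 300 ps once other contributions are included. A minimal check:

```python
import math

# Resolution budget implied by the numbers above (values from the text).
bin_ps = 781.25                        # least time count, 0.78125 ns
quantization = bin_ps / math.sqrt(12)  # digitization error, roughly 225 ps
pll_jitter = 150.0                     # ps RMS at the operating point

total = math.hypot(pll_jitter, quantization)  # quadrature sum, ~270 ps
print(quantization, total)
```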
Fig. 6 shows the jitter variation with the oscillating<br />
frequency (Fosc) and the power supply voltage (Vdd). This<br />
indicates sufficient margin around the operating point.<br />
Fig. 5 PLL output clock (upper curve, 80 MHz) and timing<br />
distribution (histogram) triggered with the external clock (lower curve,<br />
40 MHz).<br />
Fig. 6 PLL jitter variation with oscillating frequency (@ Vdd = 3.3 V)<br />
and supply voltage (@ Fosc = 80 MHz).<br />
B. Power Consumption<br />
The total power consumption of the chip was measured under<br />
several operating conditions and is shown in Fig. 7. Under a very<br />
severe condition (100 kHz hit rate and 100 kHz trigger rate) the<br />
power consumption is about 15 mW/ch, a reduction of 18 mW/ch<br />
compared with the previous chip (AMT-1). This was<br />
achieved mainly by using the new LVDS receiver and by<br />
reducing the number of LVDS receivers from 30 to 26.<br />
Fig. 7 Power consumption of the AMT-1 and AMT-2 chips under<br />
different operating conditions. Conditions change from left to<br />
right: power applied to the LVDS circuits only; 100 ohm termination resistors<br />
connected to the LVDS drivers; external clock (10-40 MHz)<br />
applied; measurement started; 100 kHz hit signals applied; then<br />
finally a 100 kHz trigger signal applied.<br />
IV. SEU TEST<br />
The radiation tolerance of the present process to Total<br />
Ionization Damage (TID) was already measured and shows<br />
adequate tolerance to gamma rays [3]. Furthermore, the CMOS<br />
process is not sensitive to neutrons at the level estimated for the<br />
MDT environment.<br />
The remaining issue is Single Event Effects (SEE) caused by<br />
energetic hadrons (> 20 MeV) [7]. To measure the SEE, we<br />
need to perform beam tests of the chip.<br />
Among the several single event effects, Single<br />
Event Latch-up (SEL) and Single Event Upset (SEU) are the<br />
important ones for a CMOS process. We have performed a first SEU test<br />
using a proton beam.<br />
We used the AVF cyclotron at the Cyclotron and<br />
Radioisotope Center (CYRIC) of Tohoku University,<br />
Japan. The cyclotron was recently upgraded and has a<br />
maximum proton energy of 90 MeV.<br />
In this first beam test we used a 50 MeV proton<br />
beam and irradiated two AMT-2 chips. The beam intensity was<br />
around 1 nA and the beam spot was monitored visually to be<br />
about 2 cm in diameter. During the irradiation, the beam intensity was<br />
monitored with two plastic scintillators, but no special device<br />
was used to measure its distribution.<br />
The AMT-2 has 180 bits in the CSR registers and a total of<br />
11,360 bits in the L1, trigger and readout buffers. The<br />
CSR registers are composed of flip-flops, and the buffer<br />
memories are composed of six-transistor static memory cells.<br />
Both circuits are almost complementary for positive<br />
and negative logic signals, so we assume the SEU rate is the<br />
same regardless of the memory contents.<br />
Unfortunately, the contents of the buffers cannot be read or<br />
written directly. Instead, we used the Built-in Self-Test<br />
(BIST) circuit to detect SEUs. The BIST circuit performs<br />
two kinds of 13N marching-pattern tests. The results are<br />
compressed into a 36-bit Linear Feedback Shift Register. If one<br />
or more errors occur in the test sequence, the final signature<br />
takes a different value.<br />
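A signature compressor of this kind can be sketched as follows. The tap positions here are illustrative, not the actual AMT-2 polynomial; the key property is linearity, which guarantees that a single flipped input bit always changes the final signature:

```python
# Sketch of signature compression in a linear feedback shift register.
# Tap positions are illustrative, not the actual AMT-2 polynomial.

def lfsr_signature(results, taps=(35, 24, 18, 0), width=36):
    state = 0
    mask = (1 << width) - 1
    for bit in results:
        fb = bit
        for t in taps:                 # XOR selected state bits into feedback
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & mask
    return state

clean = [1] * 1000                     # all BIST comparisons passed
flipped = clean.copy()
flipped[123] = 0                       # one comparison failed due to an SEU
assert lfsr_signature(clean) != lfsr_signature(flipped)
```

Because the compressor is linear and its transition matrix is invertible (the feedback includes bit 0), the difference injected by an upset never cancels, so any single SEU is visible in the final 36-bit value.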
We advance the BIST sequence until all '1's (or '0's) have<br />
been written to every memory location, then irradiate the chip with the<br />
beam. After the irradiation, we resume the BIST sequence<br />
and read out the final signature. These measurements were<br />
repeated several times. Since the SEU rate is very low, it is<br />
very unlikely that two or more SEUs occur during one measurement.<br />
We observed 3 SEUs in 18 measurements.<br />
As for the CSR, the contents are written and read directly<br />
through the JTAG lines. Before the irradiation, '0's and '1's were<br />
written to the CSRs; after the irradiation the contents<br />
were checked for SEUs. No bit flip was observed in<br />
the CSR test.<br />
Fig. 8 shows the leakage current of the chip under gamma-ray<br />
and proton irradiation. The horizontal scales are adjusted to overlay both curves.<br />
Since the proton irradiation was paused during each<br />
measurement, annealing occurred between measurements;<br />
the proton data are therefore discontinuous at<br />
the measurement boundaries. Assuming the leakage current<br />
depends only on the energy deposited in Si, the proton flux is estimated<br />
to be about 2x10^9 protons/sec/cm^2. This is consistent with the<br />
value estimated from the beam intensity and spot size.<br />
The SEU test results are summarized in Table 2.<br />
Assuming a Poisson distribution for the SEU events, we obtain<br />
upper limits on the cross sections at 90% confidence level:<br />
σ_SEU(CSR) < 5.6x10^-15 cm^2/bit<br />
σ_SEU(buffer) < 2.7x10^-16 cm^2/bit<br />
The calculated fluence of hadrons with energy > 20 MeV is<br />
~10^10 cm^-2 per 10 years at the MDT location [7]. Scaling by the<br />
number of bits in the total MDT system, the SEU rates (R_SEU) will be:<br />
R_SEU(CSR) < 0.1 upset/day<br />
R_SEU(buffer) < 0.3 upset/day.<br />
Thus fewer than one upset in the CSR is expected per 10 days of operation<br />
of the MDT system. Furthermore, there was no latch-up during the<br />
experiments for either chip.<br />
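The quoted cross-section limits can be reproduced approximately from the Table 2 numbers (0 and 3 observed SEUs, 2.4x10^12 p/cm^2 combined fluence, 180 CSR bits and 11,360 buffer bits). The sketch below computes classical 90% CL Poisson upper limits; the small differences from the quoted values presumably reflect details of the fluence accounting:

```python
import math

# Classical 90% CL Poisson upper limits from the Table 2 counts.
def poisson_ul(n_obs, cl=0.90):
    """Upper limit on the Poisson mean: solve P(X <= n; mu) = 1 - CL."""
    def cdf(mu):
        return sum(math.exp(-mu) * mu**k / math.factorial(k)
                   for k in range(n_obs + 1))
    lo, hi = 0.0, 50.0
    for _ in range(100):               # bisection; cdf decreases with mu
        mid = 0.5 * (lo + hi)
        if cdf(mid) > 1.0 - cl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

FLUENCE = 2.4e12                        # protons/cm^2, both chips combined
sigma_csr = poisson_ul(0) / (180 * FLUENCE)     # roughly 5e-15 cm^2/bit
sigma_buf = poisson_ul(3) / (11360 * FLUENCE)   # roughly 2.5e-16 cm^2/bit
print(sigma_csr, sigma_buf)
```

For zero observed events the limit is the familiar -ln(0.1) ≈ 2.3 expected events; for three observed events it is about 6.7.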
Although these results are very preliminary, we are<br />
encouraged by the very low SEU rate in both the control registers and<br />
the data buffers. This is mainly because the transistor size in a<br />
Gate-Array is relatively large, so a large charge is required to<br />
upset a memory cell.<br />
Although the leakage current was useful for estimating the<br />
proton flux, the large leakage current might cause damage<br />
to the chip. It is therefore better to apply a lower fluence per<br />
chip and accumulate statistics by irradiating many chips.<br />
Further experiments are being planned.<br />
Fig. 8 Leakage current of the AMT-2 chip under gamma-ray and proton<br />
irradiation (total proton fluence 1.4x10^12 protons/cm^2). The proton data<br />
are discontinuous because the irradiation was paused for each<br />
measurement, during which annealing occurred.<br />
Table 2 Number of observed SEUs in the proton irradiation<br />
experiment.<br />
Chip | Proton Fluence [p/cm^2] | No. of meas. | SEU in CSR | SEU in buffers | σ_SEU buffers [cm^2/bit]<br />
Chip 1 | 1.4x10^12 | 8 | 0 | 1 | 6.3x10^-17<br />
Chip 2 | 1.0x10^12 | 10 | 0 | 2 | 1.8x10^-16<br />
Chip 1+2 | 2.4x10^12 | 18 | 0 | 3 | 1.1x10^-16<br />
V. SUMMARY<br />
A production prototype chip (AMT-2) was successfully<br />
developed for the ATLAS MDT detector. The chip fulfils the<br />
required performance and is being tested with MDT chambers.<br />
A new LVDS receiver was developed, and the power<br />
consumption of the AMT-2 chip is reduced to 45% of that of the<br />
previous chip.<br />
A preliminary SEU test using a proton beam was<br />
performed and shows a rate low enough for safe use in the<br />
MDT. No latch-up was observed in this test.<br />
VI. ACKNOWLEDGEMENTS<br />
I would like to thank O. Umeda, I. Sakai, M. Takamoto,<br />
K. Tsukamoto and T. Takada (Toshiba Co.) for their technical<br />
support. I also thank T. Shinozuka (CYRIC), who arranged<br />
beam time for us, and M. Fujita (CYRIC), who gave<br />
us much technical support during the beam test.<br />
VII. REFERENCES<br />
[1] Y. Arai and T. Emura, " Development of a 24 ch TDC LSI<br />
for the ATLAS Muon Detector", Proceedings of the 6th<br />
Workshop on Electronics for LHC Experiments,<br />
CERN/LHC/2000-041, pp. 471-475.<br />
[2] C. Posch, E. Hazen, and J. Oliver, "MDT-ASD, CMOS<br />
front-end for ATLAS MDT", ATLAS Note, ATL-COM-<br />
MUON-2001-019.<br />
[3] Y. Arai, "Performance and Irradiation Tests of the 0.3 µm<br />
CMOS TDC for the ATLAS MDT", Proceedings of the<br />
Fifth Workshop on Electronics for LHC Experiments,<br />
Snowmass, 1999, CERN/LHCC/99-33, pp. 462-466.<br />
[4] Y. Arai, "Development of Frontend Electronics and TDC<br />
LSI for the ATLAS MDT", Nucl. Instr. Meth. A. Vol. 453,<br />
pp. 365-371 (2000).<br />
[5] ATLAS-Japan TDC group web page.<br />
http://www-atlas.kek.jp/tdc/.<br />
[6] CSM Design & User Manual,<br />
http://atlas.physics.lsa.umich.edu/docushare/<br />
[7] Atlas policy on radiation tolerant electronics.<br />
http://www.cern.ch/Atlas/GROUPS/FRONTEND/radhard.<br />
htm
Anode Front-End Electronics for the Cathode Strip Chambers of the CMS Endcap Muon<br />
Detector<br />
N. Bondar* a), T. Ferguson**, A. Golyash*, V. Sedov*, N. Terentiev**<br />
*) Petersburg Nuclear Physics Institute, Gatchina, 188350, Russia<br />
**) Carnegie Mellon University, Pittsburgh, PA, 15213, USA<br />
a) bondar@fnal.gov<br />
Abstract<br />
The front-end electronics system for the anode signals of<br />
the CMS Endcap Muon Cathode Strip Chambers has been<br />
designed. Each electronics channel consists of an input<br />
protection network, an amplifier, a shaper, a constant-fraction<br />
discriminator, and a programmable delay with an output<br />
pulse-width shaper. The essential part of the electronics is an<br />
ASIC containing a 16-channel amplifier-shaper-discriminator<br />
(CMP16). The ASIC was optimized for the<br />
large cathode chamber size of up to 3.4 x 1.5 m^2 and for the<br />
large input capacitance (up to 200 pF). The ASIC combines<br />
low power consumption (30 mW/channel) with excellent<br />
time resolution (~2 ns). A second ASIC provides a<br />
programmable time delay which allows the alignment of<br />
signals with an accuracy of 2.5 ns. Pre-production<br />
samples of the anode front-end boards with CMP16 chips<br />
have been successfully tested and mass production has<br />
begun.<br />
I. INTRODUCTION<br />
The purpose of the anode front-end electronics described<br />
in this paper is to receive the anode wire signals of the<br />
Cathode Strip Chambers (CSC) of the CMS Endcap Muon (EMU)<br />
system and prepare them for further logical processing, in order<br />
to find the location of a charged particle with a time accuracy<br />
of one bunch crossing (25 ns) [1].<br />
Special features of the CSC are its six-plane two-coordinate<br />
proportional-chamber design, the large chamber<br />
size (the largest is 3.4 x 1.5 m^2) and the large detector<br />
capacitance (up to 200 pF) created by anode<br />
wires joined together. The expected anode signal rate is about<br />
20 kHz/channel, and the total number of anode channels is more<br />
than 150,000. The electronics is spread over the EMU detector<br />
with limited maintenance access. The estimated radiation dose<br />
integrated over 10 LHC years is about 1.8 kRad for ionizing<br />
particles and about 10^12 neutrons per cm^2 [1].<br />
The electronics must satisfy the following requirements [1]:<br />
- determine the timing of a track hit with an accuracy of one<br />
bunch crossing with high efficiency;<br />
- match the chamber characteristics to achieve optimal detector<br />
performance;<br />
- be reliable for 10 LHC years;<br />
- have sufficient radiation hardness;<br />
- have low power consumption.<br />
II. ANODE ELECTRONICS STRUCTURE<br />
A. Anode electronics specification<br />
To reconcile the large detector size and large detector<br />
capacitance with high sensitivity and time accuracy, a<br />
relatively long shaping time of 30 ns for the anode signals,<br />
together with a two-threshold constant-fraction discriminator,<br />
was proposed. This shaping time allows us to collect about 12%<br />
of the initial charge. Together with a discriminator threshold<br />
as low as 20 fC, the efficiency plateau then starts at 3.4 kV [2].<br />
The nominal operating point of the chamber is set at 3.6 kV,<br />
where the average collected anode charge is about 140 fC.<br />
To achieve a minimum stable threshold level of the anode<br />
electronics as low as 20 fC with the minimum possible cross-talk,<br />
the standard anode electronics channel was<br />
split into three parts located on three different boards (see<br />
Figure 1). In addition, the amplifier-chamber signal<br />
connection and the chamber grounding and shielding were<br />
carefully planned and executed.<br />
Figure 1: Anode electronics structure. The protection board (Prot.) is<br />
part of the chamber assembly; AFEB (Anode Front-End Board) is a<br />
16-channel board; ALCT is the Anode Local Charged Track finder logic<br />
board.<br />
Two 16-channel ASICs were designed to implement this<br />
anode structure. The first is a 16-channel amplifier-shaper-<br />
discriminator with an LVDS driver for the output signals,<br />
named CMP16, and the second is an LVDS receiver -<br />
control delay - pulse-width shaper, named DEL16. This<br />
solution allows us to simplify the electronics boards and<br />
increase the electronics reliability and maintainability, as well<br />
as minimise power consumption. Standard BiCMOS and<br />
CMOS technologies were used for designing the ASICs,<br />
giving us a relatively low price and sufficient radiation<br />
hardness.<br />
B. Chamber ground and shielding. Protection<br />
Board.<br />
The chamber anode wires, cathode planes, protection<br />
boards and even the cathode amplifier input connections are<br />
all part of the anode amplifier input circuit. To obtain<br />
optimal performance from the chamber, the following<br />
rules must be observed: the anode amplifier input impedance must<br />
be close to the characteristic impedance of the anode wire structure<br />
(~50 Ohm); the cathode input impedance must be close to<br />
the characteristic impedance of the cathode strip structure;<br />
and the detector-amplifier ground connection should be as short<br />
and as wide as possible, in order to minimize its<br />
inductance [3].<br />
Each chamber plane has a solid metal cathode plate, which<br />
serves as the natural chamber signal ground for that plane. Both<br />
the anode and cathode amplifier input ground terminals are<br />
connected to this plane. The chamber's outer copper foil,<br />
together with the chamber metal frame and side covers, creates<br />
the detector RF case. The RF case and the signal ground are<br />
connected together at the amplifier side of the chamber to<br />
avoid a ground loop through the signal ground plate and<br />
along the amplifier input ground circuit.<br />
The protection board (PB) has two functions. The first<br />
is to fan-in the chamber anode signals and adapt them to<br />
a standard 34-pin connector. The protection board collects<br />
signals from two chamber planes (8+8) and provides a proper<br />
ground connection between the chamber signal ground and<br />
the amplifier input ground. The second function of the PB is<br />
to protect the inputs of the amplifier against accidental sparks<br />
in the chamber. The full protection network consists of two<br />
resistor-diode stages: the first stage is placed on<br />
the protection board in order to minimize the emergency<br />
current loop for better protection, and the second stage is at<br />
the input of the anode front-end board.<br />
C. Anode Front-End Board (AFEB)<br />
The 16-channel Anode Front-End Board (AFEB) AD16 is<br />
designed around the CMP16 chip. This board<br />
receives the anode signals from the chamber wire groups,<br />
amplifies them, selects the signals above the preset<br />
threshold with precise time accuracy and transmits the logic-<br />
level signals to the next stage using the LVDS<br />
standard. Since the EMU system contains almost 10,000<br />
AD16 boards, we have designed the AFEB in the simplest<br />
and cheapest way: there is only one CMP16 chip with the<br />
necessary minimum of service components, as well as a<br />
small voltage regulator to keep the on-board voltage stable,<br />
well filtered and independent of the power supply voltage.<br />
The board has a 34-pin input connector and a 40-pin output<br />
connector. Normally, the board is connected to the<br />
chamber's protection board and fixed on the chamber side<br />
cover with a special bracket, providing a reliable and proper<br />
junction. A 20-pair twisted-pair cable connects the AFEB<br />
with the ALCT board. Since the functions serving the AFEB<br />
are delegated to the ALCT board, this cable is used both to<br />
transmit output signals to the ALCT and to supply the board<br />
with power voltage, threshold voltage and test pulses. The<br />
DEL16 ASIC is the signal receiver at the very input of the<br />
ALCT. The ALCT provides the following AFEB services: a<br />
power supply voltage distribution circuit, a "power-ON/OFF"<br />
command driver for each AFEB, a threshold voltage source<br />
for each AFEB, and a few test-pulse generators to test the<br />
AFEB through its internal capacitance or through a special<br />
test strip on the cathode plane that injects charge directly<br />
onto the anode wires [4].<br />
III. AMPLIFIER ASIC CMP16<br />
he main component of the AFEB is an amplifierdiscriminator<br />
ASIC. The chip parameters are specially<br />
optimized for the Endcap EMU CSC to obtain optimal<br />
performance from the chamber. The ASIC has the following<br />
electrical characteristics:<br />
Input impedance: 40 Ohm<br />
Transfer function: 7 mV/fC<br />
Shaper peaking time: 30 ns<br />
Shaped waveform: semi-Gaussian with two-exponent tail cancellation<br />
Amplifier input noise: 0.5 fC @ Cin = 0; 1.7 fC @ Cin = 200 pF<br />
Non-linearity:<br />
Figure 2: Schematic diagram of an anode amplifier-discriminator channel.<br />
IV. DELAY ASIC DEL16<br />
The anode pulses arriving at the ALCT have a large phase<br />
variation for various reasons, including the different cable<br />
lengths to the ALCT input. The total time variation may be up to<br />
20 ns. To align the input pulse phases, a special 16-channel<br />
controllable delay chip was designed. The structure of one channel<br />
of this chip is presented in Figure 3. Each channel consists of<br />
an input LVDS-to-CMOS level converter; four delay stages of<br />
1, 2, 4, and 8 steps; and an output pulse-width shaper.<br />
Also, the chip has the possibility to generate a test level at<br />
each output. This option is used for testing chip-to-chip<br />
connections. The chip has a serial interface to control the<br />
delay and set the output test level.<br />
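As an illustration of how these delay codes can be used, the sketch below (hypothetical skew values, not the actual ALCT logic) picks a 4-bit code per channel so that all channels line up on a common target; the 20 ns minimum delay and 2 ns step correspond to the nominal DEL16 parameters.

```python
# Illustrative sketch (hypothetical skews, not the actual ALCT logic):
# choose a 4-bit DEL16 code per channel so all channels align on a
# common target, using the nominal 20 ns minimum delay and 2 ns step.
MIN_DELAY_NS = 20.0
STEP_NS = 2.0
MAX_CODE = 15

def align_codes(arrival_ns):
    """Delay every channel to match the latest one plus the minimum delay."""
    target = max(arrival_ns) + MIN_DELAY_NS
    codes = []
    for t in arrival_ns:
        code = round((target - t - MIN_DELAY_NS) / STEP_NS)
        codes.append(min(max(code, 0), MAX_CODE))
    return codes

phases = [0.0, 6.0, 12.0, 20.0]   # hypothetical per-channel skews, ns
codes = align_codes(phases)       # -> [10, 7, 4, 0]
aligned = [t + MIN_DELAY_NS + c * STEP_NS for t, c in zip(phases, codes)]
```

After rounding, the residual misalignment is at most half a step, and the quoted 20 ns spread fits comfortably within the chip's 15-step (30 ns) range.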
Delay channel parameters:<br />
Input signal level: LVDS standard<br />
Output signal: 3.3 V CMOS<br />
Minimum delay: 20 ns<br />
Delay step: 2 ns (adjustable with an external current)<br />
Delay steps: 15 maximum<br />
Delay nonlinearity: +/- 1 ns<br />
Output pulse width: 40 ns (adjustable with an external current)<br />
Power supply voltage: 3.3 V<br />
Power consumption: 0.2 W<br />
Figure 3: Schematic diagram of one delay channel.<br />
The chip was designed using the AMI CMOS 0.5-micron<br />
technology and fabricated at the AMI foundry via the<br />
MOSIS service. The chip is packaged in a plastic 64-pin<br />
Quad Flat Pack with a pin pitch of 0.5 mm. The chip size is<br />
10 × 10 mm².<br />
V. ANODE ELECTRONICS TEST<br />
A. Chamber performance<br />
The CSC with the anode electronics has been tested on<br />
the Cosmic Muon Stand at FNAL. We have reached a<br />
minimum discriminator threshold for the anode electronics<br />
installed on the chamber as low as 10 fC. We assume that a 20<br />
fC threshold is a normal operational value. In that case, a standard<br />
CSC with a gas mixture of Ar+CO₂+CF₄ = 40+50+10 has an<br />
efficiency plateau starting at 3.4 kV.<br />
A full-scale prototype CSC, completely equipped with<br />
electronics, was tested in a beam at CERN at the Gamma<br />
Irradiation Facility. The CSC performance was within the<br />
baseline requirements. In Figure 4, the final result of the<br />
bunch tagging efficiency is shown [2].<br />
Figure 4: Bunch crossing tagging efficiency (in 25 ns gate) vs GIF<br />
rate.<br />
B. Reliability test<br />
To measure the reliability of the AFEB AD16, we put 100<br />
AD16 boards (1,600 amplifier-discriminator channels) into<br />
an oven at a temperature of 110 °C. We assume that the failure<br />
rate approximately doubles for each 20-degree rise in<br />
temperature. The boards were supplied with power, and the<br />
thresholds on the boards were set to minimum to start<br />
self-oscillation. The total test time in the oven was 4000 hours,<br />
which corresponds to about 7 years of real operation at 30 °C.<br />
Every two weeks we measured the board parameters. During<br />
the test, we had no failures and there were no visible changes<br />
in the electrical characteristics.<br />
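The equivalence between oven time and real operating time can be checked with the stated rule of thumb (a back-of-envelope consistency check, not a rigorous reliability model):

```python
# Back-of-envelope check of the accelerated-aging figures quoted above,
# assuming the failure rate doubles for every 20 degrees of temperature.
oven_temp_c = 110.0
operating_temp_c = 30.0
doubling_step_c = 20.0

acceleration = 2 ** ((oven_temp_c - operating_temp_c) / doubling_step_c)
oven_hours = 4000.0
equivalent_years = oven_hours * acceleration / (24 * 365)
print(acceleration, round(equivalent_years, 1))  # 16.0 7.3
```

The 16x acceleration factor turns 4000 oven hours into roughly 64,000 hours, i.e. about 7 years, matching the figure quoted above.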
C. Radiation test<br />
Since the AFEB contains both BiCMOS and bipolar<br />
components, we had to test for Total Ionizing Dose (TID),<br />
Displacement and Single Event Effect (SEE) damage [3]. A<br />
few samples of the AFEB were irradiated with a 63 MeV<br />
proton beam at the University of California, Davis to test the<br />
electronics for TID and SEE damage. No latch-ups, spikes<br />
or changes in the static parameters were observed. At the<br />
required TID level of 5-6 kRad, all changes of gain and<br />
slewing time were practically negligible [5].<br />
To test the electronics for possible Displacement damage,<br />
the same boards were irradiated with 1 MeV neutrons from a<br />
reactor at Ohio State University. The total neutron fluence of up to<br />
2×10¹² n/cm² was accompanied by a significant γ flux, so the<br />
boards also received a TID of 50-60 kRad. Two boards were<br />
found working 40 days after the irradiation, and the others after<br />
one week of heating in an oven at 100 °C [5].<br />
D. AFEB mass production test<br />
A special automated test setup and test methods have been<br />
developed to measure the CMP16 chip and AD16 board<br />
parameters, as well as the DEL16 delay chip parameters. The test<br />
stand structure is illustrated in Figure 5.<br />
Figure 5: Test stand block structure.<br />
There is a specially designed pulse generator for<br />
producing test pulses with the necessary accuracy and shape.<br />
The second main unit is a LeCroy 3377 TDC. We have<br />
designed three special adapters to match the different devices<br />
to be tested with the test setup.<br />
We use the following procedure for testing the parameters of<br />
the AFEB. The discriminator threshold is set at one of two<br />
standard levels, about 20 or 40 fC. The input pulse amplitude<br />
is increased in steps to supply the amplifiers with an input<br />
charge from 0 fC to 100 fC for threshold testing and from 0<br />
fC to 500 fC for time slewing testing. The generator sends<br />
400 pulses at each step. The TDC measures the number of<br />
the AFEB's output pulses and the propagation time of the CMP16<br />
versus the amplitude of the input signal. The resulting curves<br />
of "output pulse count versus input amplitude" for two<br />
different thresholds (threshold test) are used to derive the<br />
required CMP16 parameters. Amplifier noise is calculated<br />
from the curve slope. For the amplifier gain calculation and<br />
the threshold calibration, we use two curves, one at a 150<br />
mV threshold voltage and the second at 400 mV. The<br />
resulting curve of "propagation time versus input amplitude"<br />
(timing test) is used to estimate the CMP16 slewing time.<br />
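The extraction of threshold and noise from the "output pulse count versus input amplitude" curve can be sketched as follows (hypothetical scan data, not the test-stand software; the real analysis works from the 400 pulses per step):

```python
# Sketch of the s-curve analysis (hypothetical data, not the test-stand
# software). With Gaussian noise, the firing probability vs input charge
# is an error function: the 50% crossing gives the effective threshold,
# and half the 16%-84% width gives the equivalent noise charge.
def crossing(charges, counts, level):
    """Linearly interpolate the charge at which `counts` crosses `level`."""
    for (q0, c0), (q1, c1) in zip(zip(charges, counts),
                                  zip(charges[1:], counts[1:])):
        if c0 <= level <= c1:
            return q0 + (level - c0) * (q1 - q0) / (c1 - c0)
    raise ValueError("level not crossed")

pulses = 400                       # pulses per amplitude step
charges = [18, 19, 20, 21, 22]     # injected charge, fC (hypothetical)
counts = [2, 64, 200, 336, 398]    # discriminator firings per step

threshold_fc = crossing(charges, counts, 0.50 * pulses)        # 20.0 fC
noise_fc = (crossing(charges, counts, 0.84 * pulses)
            - crossing(charges, counts, 0.16 * pulses)) / 2.0  # 1.0 fC
```

Repeating the scan at a second threshold voltage yields a second charge threshold, from which the gain (mV/fC) follows as the ratio of the voltage difference to the charge difference.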
We use a multi-step test procedure for AFEB verification.<br />
The first step is the selection of good chips for assembly on the<br />
board. A special clamshell adapter for two chips is used.<br />
The chip under test is run at the normal power voltage of 5 V,<br />
a threshold voltage of 150 mV (about 20 fC of input charge)<br />
and an amplifier input capacitance of 0 pF. The test pulse amplitude<br />
is ramped up to provide an input charge through the chip's<br />
internal capacitance from 0 fC to 200 fC. A good chip must<br />
satisfy the following requirements: a noise level less than 0.8<br />
fC @ Cin = 0 pF; a threshold uniformity better than +/-10%;<br />
and a deviation of propagation time within 4 ns across all<br />
channels of the chip for input signals from 50 fC to 200 fC.<br />
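The selection cuts above can be written down as a simple pass/fail function (a sketch with hypothetical per-channel values, not the production test program):

```python
# Sketch of the CMP16 selection cuts quoted in the text (hypothetical
# helper; per-channel inputs would come from the test stand).
def chip_passes(noise_fc, thresholds_fc, prop_times_ns):
    # Noise below 0.8 fC at Cin = 0 pF on every channel
    if max(noise_fc) >= 0.8:
        return False
    # Threshold uniformity better than +/-10% of the chip mean
    mean_thr = sum(thresholds_fc) / len(thresholds_fc)
    if any(abs(t - mean_thr) > 0.10 * mean_thr for t in thresholds_fc):
        return False
    # Propagation-time deviation within 4 ns across all channels
    return max(prop_times_ns) - min(prop_times_ns) <= 4.0

good = chip_passes([0.5] * 16, [20.0] * 15 + [21.0], [52.0] * 15 + [54.5])
bad = chip_passes([0.5] * 16, [20.0] * 15 + [25.0], [52.0] * 16)  # threshold outlier
```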
The assembly company performs a test on the assembled<br />
board according to our test procedure and using our<br />
equipment.<br />
All assembled boards are put through a burn-in procedure.<br />
We keep the boards for 75 hours in an oven at 100 °C with the<br />
power on and with an input test pulse applied. After the burn-in<br />
procedure, all boards are given a final test, calibration and<br />
certification.<br />
A special adapter for testing and calibrating boards was<br />
designed. The adapter has a special injection circuit and a 200<br />
pF input capacitance for each amplifier input. The injection<br />
circuit accuracy is better than 2% after calibration. The final<br />
test and calibration procedure has four test runs with the<br />
following conditions:<br />
1 - low threshold, external injection circuit;<br />
2 - high threshold, external injection circuit;<br />
3 - low threshold, the chip's internal capacitance as an injection<br />
circuit;<br />
4 - low threshold, time measurement.<br />
The following parameters are collected from the data:<br />
- Threshold level as a function of threshold voltage,<br />
- Threshold uniformity for each chip,<br />
- Noise level at Cin=200 pF,<br />
- Propagation time as a function of the input signal<br />
amplitude,<br />
- Propagation time uniformity,<br />
- Chip time resolution,<br />
- Chip’s internal test injection capacitance.<br />
The raw test data and the final results are stored in a<br />
database. We intend to keep the board calibration and<br />
certification results in a database for further experimental<br />
needs.<br />
E. Delay chip mass production test<br />
A special clamp-shell adapter for two chips was designed<br />
in order to use the existing test setup for delay chip testing.<br />
The following test procedure is used: The test program scans<br />
the delay code in the DEL16 chip in steps of “one” from<br />
delay code “0” to the maximum delay code “15”. The test<br />
generator sends 100 input pulses for each delay step, and the<br />
propagation time for each step is measured by the TDC. The<br />
output test level generating option is measure by switchingon<br />
this option for each channel and measuring the chip output<br />
voltage for that channel.<br />
A good chip must satisfy the following conditions: the<br />
control interface can switch on a test level at the chip outputs,<br />
the maximum delay and output pulse width should meet the<br />
specifications, and the delay step variation between channels<br />
must be less than half of the delay step.<br />
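The delay-scan acceptance can be expressed the same way (hypothetical measured values; the real test averages 100 TDC measurements per code):

```python
# Sketch of the DEL16 delay-scan check described above (hypothetical
# data). Each step between adjacent codes must stay within half a
# nominal step of the 2 ns design value.
def scan_ok(delays_ns, nominal_step_ns=2.0):
    steps = [b - a for a, b in zip(delays_ns, delays_ns[1:])]
    return all(abs(s - nominal_step_ns) < nominal_step_ns / 2 for s in steps)

# Delay vs code: 20 ns minimum plus ~2 ns per code, with a small wiggle
measured = [20.0 + 2.0 * c + (0.3 if c % 3 == 0 else -0.2) for c in range(16)]
faulty = [20.0 + 2.0 * c for c in range(16)]
faulty[8] -= 1.5   # e.g. a defective delay stage
```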
VI. CONCLUSION<br />
The anode front-end electronics for the Endcap Muon<br />
CSC and the electronics layout on the chamber were carefully<br />
designed and arranged to obtain the best chamber<br />
performance. The CSC test results show us that the chamber<br />
equipped with the electronics meets the baseline<br />
requirements.<br />
Special equipment and necessary procedures were<br />
designed for testing and calibrating the electronics at every<br />
stage of the mass production and the CSC final assembly.<br />
The electronics calibration and test results will be available<br />
for the duration of the experiment.<br />
The electronics mass production has started.<br />
VII. REFERENCES<br />
1. CMS The Muon Project Technical Design Report<br />
CERN/LHCC 97-32 CMS TDR 3 1997.<br />
2. D. Acosta et al, “Large CMS Cathode Strip Chambers:<br />
design and performance.” Nucl. Instrum. Meth. A453:182-<br />
187, 2000.<br />
3. N. Bondar, “Design of the Analog Electronics for the<br />
Anodes of the Proportional Chambers for the Muon Detector<br />
of the GEM Unit” Preprint EP-3-1994-1945, PNPI, 1994.<br />
4. J. Hauser et al, “Wire LCT Card”, http://www-collider.physics.ucla.edu/cms/trigger/wirelct.html.<br />
5. T. Ferguson, N. Terentiev, N. Bondar, A. Golyash, V.<br />
Sedov “Results of Radiation Tests of the Anode Front-End<br />
Boards for the CMS End-Cap Muon Cathode Strip<br />
Chambers”, in these proceedings.
Results of Radiation Tests of the Anode Front-End<br />
Boards for the CMS Endcap Muon Cathode Strip Chambers<br />
Abstract<br />
We report the results of several radiation tests on pre-production<br />
samples of the anode front-end boards for the CMS<br />
endcap muon system. The crucial components tested were the<br />
16-channel amplifier-shaper-discriminator ASIC (CMP16) and<br />
the voltage regulator TK112B. The boards were exposed to<br />
doses up to 80 kRad in a 63 MeV proton beam, and to a neutron<br />
fluence up to 2×10¹² n/cm² from a nuclear reactor. The static<br />
and dynamic characteristics were measured versus the radiation<br />
dose. The boards were found operational up to a total ionizing<br />
dose (TID) of 60 kRad.<br />
I. INTRODUCTION<br />
The Anode Front-End Boards (AFEB) [1] are designed for<br />
the Cathode Strip Chambers (CSC) [2] of the CMS Endcap<br />
Muon System [3]. The AFEB amplifies and discriminates<br />
signals from the CSC anode wires, which are grouped in bunches<br />
of 5 to 16. Their main purposes are to acquire precise muon<br />
timing information for bunch crossing number identification at<br />
the Level-1 trigger and to provide a coarse radial position of the<br />
muon track for the offline analysis. Radiation tolerance and<br />
reliability are important issues for the CMS electronics,<br />
including the endcap muon CSC anode front-end electronics.<br />
The peak luminosity of the LHC, 10³⁴ cm⁻²s⁻¹, combined with the 7<br />
TeV beam energy, will create a very hostile radiation<br />
environment in the detector experimental hall. The most severe<br />
conditions in the CMS muon endcap region are in the vicinity of<br />
the ME1/1 CSC chambers. Here, the neutron fluence and the<br />
total ionizing dose (TID) accumulated during 10 years of LHC<br />
operation (5×10⁷ s) are expected to be about 6-7×10¹¹ n/cm² (at<br />
En > 100 keV) and 1.8-2 kRad, respectively [4-5]. For locations<br />
other than the ME1/1 chambers the doses are at least 10 times<br />
lower.<br />
T. Ferguson, N. Terentiev a)<br />
Carnegie Mellon University, Pittsburgh, PA, 15213, USA<br />
N. Bondar, A. Golyash, V. Sedov<br />
Petersburg Nuclear Physics Institute, Gatchina, 188350, Russia<br />
a) teren@fnal.gov<br />
As BiCMOS devices, the AFEB’s ASIC chip and voltage<br />
regulator TK112B are affected by exposure both to ionizing<br />
radiation (TID) and to neutrons (Displacement damage),<br />
yielding degraded performance and even failure if the doses are<br />
sufficiently high. The corresponding effects are cumulative. The<br />
other major category is the Single Event Effects (SEE) which<br />
are caused by the nuclear reactions of charged hadrons and<br />
neutrons. From these, the relevant effect is Single Event Latchup<br />
(SEL) which results in a destructively large current draw.<br />
The plan of our measurements [6-7] was to test the<br />
performance of the anode front-end boards, carrying<br />
pre-production CMP16F chips (1.5 micron BiCMOS AMI<br />
technology), up to a level of 3 times the doses mentioned<br />
above, and to look for single-event effects, such as latch-up, at<br />
higher doses. The boards were irradiated with a 63 MeV proton<br />
beam at the University of California, Davis in June, 2000 to test<br />
them for TID and SEL effects. The results are presented in<br />
Section II. The purpose of the test with 1 MeV neutrons from a<br />
reactor at the Ohio State University was to expose the boards to<br />
possible displacement damage (Section III). The radiation test<br />
results are summarized in Section IV.<br />
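The dose levels used in these tests follow directly from the expected 10-year doses and the safety factor of 3 (simple arithmetic on the numbers quoted above):

```python
# Arithmetic behind the test dose levels: 3x the TID and neutron
# fluence expected near ME1/1 in 10 years of LHC operation.
expected_tid_krad = (1.8, 2.0)        # 10-year TID range near ME1/1
expected_fluence_ncm2 = (6e11, 7e11)  # En > 100 keV
safety_factor = 3

required_tid_krad = tuple(safety_factor * d for d in expected_tid_krad)
required_fluence_ncm2 = tuple(safety_factor * f for f in expected_fluence_ncm2)
# -> roughly 5-6 kRad and ~2e12 n/cm2, matching the test conditions
```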
II. TESTS WITH 63 MEV PROTONS<br />
The description of the 63 MeV proton beam test facility can<br />
be found in [8]. The beam current can be regulated from 2 pA<br />
up to 100 nA, with a profile almost flat over a radius of 35 mm.<br />
Four powered anode front-end 16-channel boards with CMP16F<br />
chips on them were tested at an incident beam angle of 0<br />
degrees with respect to the normal to the board. The beam<br />
covered all elements of the board including the ASIC chip<br />
itself, the input protection diodes, the voltage regulator and<br />
passive elements. Boards #8 and #9 received 7 successive<br />
exposures of approximately 1 min each, for a total TID of<br />
14 kRad. Two other boards, #5 and #7, received 7 and 10<br />
successive exposures of approximately 1-2 minutes each, for<br />
total TIDs of 80 kRad and 74 kRad, respectively. One more<br />
board was placed out of the beam and<br />
tested in parallel with the irradiated boards to provide<br />
monitoring of the test conditions.<br />
The static parameters (voltages on the amplifier and<br />
discriminator of the ASIC and on the regulator TK112B) were
measured during each exposure. The measurements of the<br />
threshold, noise, gain, discriminator offset, resolution time and<br />
slewing time were done during 10-20 minutes after each<br />
exposure with the use of the ASIC test stand [9]. For each chip<br />
the results were averaged over all channels and normalized to<br />
their initial values obtained before the first exposure.<br />
No latch-ups or spikes or any changes in the static<br />
parameters were observed. However, the dynamic parameters<br />
such as gain, offset, threshold and slewing time were slightly<br />
sensitive to the radiation. The observed threshold Qthr, measured<br />
in terms of input charge, decreased with the TID (Figure 1), due to<br />
changes in the amplifier gain (Figure 2) and the discriminator<br />
offset¹ (Figure 3).<br />
Figure 1: Normalized threshold versus dose.<br />
Figure 2: Normalized gain versus dose.<br />
The overall effect for Qthr is rather small,<br />
about 15% for a TID of 60 kRad. The noise increased by less<br />
than 10% from its initial value of about 1.7 fC. The resolution<br />
time of 1 ns was not affected. The slewing time ST showed a<br />
maximum increase of 40% at a TID of 60 kRad (Figure 4). At<br />
the required 3 times level of TID (5-6 kRad), all changes were<br />
practically negligible. However, at a TID of 65-70 kRad, two<br />
boards failed (no output signal) showing large changes in the<br />
amplitude and the shape of the pulse after the shaper. About a<br />
month later, though, these boards had become operational again.<br />
Figure 3: Normalized discriminator offset versus dose.<br />
Figure 4: Normalized slewing time versus dose.
III. NEUTRON IRRADIATION OF THE ANODE BOARDS<br />
Six boards with CMP16F chips on them were exposed in<br />
October 2000 to a reactor neutron fluence of up to 2×10¹² n/cm²<br />
at a neutron energy of around 1 MeV. The exposure was 14 min<br />
long. The boards also received a TID of about 50-60 kRad from<br />
γ's [10] which accompanied the reactor neutrons. Prior to the<br />
test with neutrons, in August 2000, the same boards had been<br />
irradiated in the 63 MeV proton beam at UC Davis with a<br />
total ionizing dose of 5 kRad delivered during 2.5 min. The<br />
boards were powered in both exposures. The static parameters<br />
of the boards were monitored during the irradiation tests. No<br />
changes of static parameters were recorded in the 63 MeV<br />
proton beam. In the neutron irradiation, the board regulator<br />
output voltage and the voltages of the preamplifier and<br />
discriminator increased by only 2-5% at the end of the<br />
exposure.<br />
The dynamic characteristics of the boards were measured<br />
on the ASIC test stand at Fermilab [9] prior to the test in the<br />
proton beam, and before and after the neutron irradiation. The<br />
set of data obtained after the neutron irradiation includes five<br />
measurements made at intervals of one to two weeks, with the<br />
first measurement taken about 40 days after the neutron<br />
irradiation. The last three tests included two periods of one<br />
week each and one period of four weeks of heating the boards<br />
in an oven at 110 °C. The corresponding changes, averaged<br />
over 16 channels, of Qthr , gain and discriminator offset relative<br />
to their initial values for the six boards are presented in Figures<br />
5 – 7. The initial values of Qthr, gain and offset were in the<br />
ranges of 12 – 22 fC, 9 – 10 mV/fC and 10 – 80 mV<br />
respectively. The noise increased by less than 10% from its<br />
initial value of 2 fC.<br />
Figure 5. Normalized Qthr versus time. The arrows show the<br />
days of proton and neutron irradiations. The dashed lines indicate<br />
the periods of heating.<br />
Figure 6. Normalized gain versus time.<br />
Figure 7. The discriminator offset changes versus time.<br />
All boards survived unchanged after the irradiation by the 63<br />
MeV proton beam. This confirms the results obtained earlier in<br />
the proton beam. However, only two boards (#68 and #70) of the<br />
six were working in the first test, taken 40 days after the neutron<br />
irradiation. The rest recovered after one week of heating in the<br />
oven at 110 °C. All boards showed moderate changes in their<br />
dynamic characteristics after irradiation by neutrons. Note that<br />
these changes are opposite to the effects observed during proton<br />
irradiation (Section II). Five more weeks of heating brought the<br />
parameters of the boards closer to their values measured before<br />
the proton test.
IV. CONCLUSIONS<br />
The radiation tests of the anode front-end boards performed<br />
in a 63 MeV proton beam show that the boards are operational<br />
up to TID of 60 kRad. At the required 3 times level of TID (5-6<br />
kRad) the dynamic characteristics of the boards remain<br />
unchanged.<br />
The measurements with the neutrons were complicated by<br />
the presence of a significant γ flux. In addition to the nominal<br />
neutron fluence of up to 2×10¹² n/cm², the boards received a TID<br />
of 50-60 kRad. Only two boards of the six were working in<br />
the test taken 40 days after the neutron irradiation². The rest<br />
became operational after one week of heating in the oven at<br />
110 °C. From our results, we can roughly estimate that for the<br />
test doses given above, the annealing time is about a few months<br />
at room temperature. Since the LHC rate of real radiation<br />
exposure is much slower than this, and assuming that the<br />
observed effects are cumulative, we can conclude that the<br />
anode front-end boards should not show any significant<br />
radiation damage during the 10 years of normal LHC operation.<br />
V. ACKNOWLEDGEMENTS<br />
We would like to thank M. Tripathi and B. Holbrook of the<br />
University of California, Davis, and B. Bylsma and T.Y. Ling<br />
of the Ohio State University for their valuable help.<br />
This work was supported by the U.S. Department of<br />
Energy.<br />
VI. REFERENCES<br />
1. N. Bondar, T. Ferguson, A. Golyash, V. Sedov, N.<br />
Terentiev,"Anode Front-End Electronics for the Cathode Strip<br />
Chambers of the CMS Endcap Muon Detector", in these<br />
proceedings.<br />
2. D. Acosta et al, "Large CMS Cathode Strip Chambers:design<br />
and performance." Nucl.Instrum.Meth. A453:182-187, 2000.<br />
3. CMS Technical Design Report - The Muon Project,<br />
CERN/LHCC 97-32 (1997).<br />
4. M. Huhtinen, CMS COTS Workshop, Nov. 1999.<br />
Calculations are posted at<br />
http://cmsdoc.cern.ch/~huu/tut1.pdf.<br />
5. F. Faccio, M. Huhtinen, G. Stefanini, "A global radiation<br />
test plan for CMS electronics in HCAL, Muons and<br />
Experimental Hall",<br />
http://cmsdoc.cern.ch/~faccio/presprop.pdf.<br />
6. T.Y. Ling. "Radiation tests for EMU electronics",<br />
http://www.physics.ohio-state.edu/~ling/elec/rad_emu_proc.pdf<br />
7. B. Bylsma, L.S. Durkin, J. Gu, T.Y. Ling, M. Tripathi,<br />
"Results of Radiation Test of the Cathode Front-End Board for<br />
CMS Endcap Muon Chambers", Proceedings of the Sixth<br />
Workshop on Electronics for LHC Experiments, 231-235,<br />
Cracow, Poland, 11-15 Sep. 2000, CERN/LHCC/2000-041<br />
8. R.E. Breedon, B. Holbrook, W. Ko, D. Mobley, P. Murray,<br />
M. Tripathi, "Performance and Radiation Testing of a Low-<br />
Noise Switched Capacitor Array for the CMS Endcap Muon<br />
System", Proceedings of the Sixth Workshop on Electronics for<br />
LHC Experiments, 187-191, Cracow, Poland, 11-15 Sep.<br />
2000, CERN/LHCC/2000-041<br />
9. N. Bondar, A. Golyash, "ASIC Test Stand", a talk given by<br />
A.Golyash on EMU meeting at Fermilab, Feb. 19-20, 1999,<br />
http://www-hep.phys.cmu.edu/cms/TALKS/talks.html<br />
10. B. Bylsma, private communication.<br />
VII. FOOTNOTES<br />
1. The gain and offset were calculated from the following<br />
equation: Gain × Qthr + Offset = Ud, where Ud is the<br />
discriminator setting.<br />
2. Two other boards with CMP16F chips were found working<br />
after neutron irradiation up to 1.2×10¹² n/cm² and 1.8×10¹² n/cm²<br />
in a preliminary test made in July 2000.<br />
CMOS front-end for the MDT sub-detector in the ATLAS Muon Spectrometer -<br />
development and performance<br />
C. Posch 1 , S. Ahlen 1 , E. Hazen 1 , J. Oliver 2<br />
1 Boston University, Physics Department, Boston, USA<br />
2 Harvard University, Department of Physics, Cambridge, USA<br />
Abstract<br />
Development and performance of the final 8-channel<br />
front-end for the MDT segment of the ATLAS Muon<br />
Spectrometer is presented. This last iteration of the readout<br />
ASIC contains all the required functionality and meets<br />
the design specifications. In addition to the basic<br />
"amplifier-shaper-discriminator"-architecture, MDT-ASD<br />
employs a Wilkinson ADC within each channel for<br />
precision charge measurements on the leading fraction of<br />
the muon signal. The data will be used for discriminator<br />
time-walk correction, thus enhancing the spatial resolution<br />
of the tracker, and for chamber performance monitoring<br />
(gas gain, ageing etc.). It was also demonstrated that this<br />
data can be used for performing particle identification via<br />
dE/dX. A programmable pulse injection system which<br />
allows for automated detector calibration runs was<br />
implemented on the chip. Results of performance and<br />
functionality tests on prototype ASICs, both in the lab and<br />
on-chamber, are presented.<br />
I. INTRODUCTION<br />
The ATLAS muon spectrometer is designed for standalone<br />
measurement capability, aiming for a pT resolution<br />
of 10% for 1 TeV muons. This target corresponds to a<br />
single-tube position resolution of < 80 µm, which translates<br />
into a signal timing measurement resolution of < 1 ns. The<br />
maximum hit rate is estimated at 400 kHz per tube.<br />
The ATLAS Monitored Drift Tube (MDT) system is<br />
composed of about 1200 chambers with each chamber<br />
consisting of several layers of single tubes. In total, there<br />
are about 370,000 drift tubes of 3 cm diameter, with<br />
lengths varying from 1.5 to 6 m.<br />
The active components of the MDT on-chamber readout<br />
electronics are the MDT-ASD chip, which receives<br />
and processes the induced anode wire current signal, the<br />
AMT time-to-digital converter (TDC), which measures the<br />
timing of the ASD discriminator pulse edges, and a data<br />
concentrator/multiplexer/optical-fiber-driver (CSM) which<br />
merges up to 18 TDC links into one fast optical link and<br />
transmits the data to the off-detector readout driver<br />
(MROD).<br />
II. CIRCUIT DESIGN<br />
The MDT-ASD is an octal CMOS Amplifier-Shaper-<br />
Discriminator which has been designed specifically for the<br />
ATLAS MDT chambers [5]. System aspects and<br />
performance considerations force an implementation as an<br />
ASIC. A standard commercial 0.5 µm CMOS process is<br />
used for fabrication.<br />
The analog signal chain of the MDT-ASD has been<br />
described and presented previously [3] and will therefore<br />
be addressed only briefly in this article.<br />
The MDT-ASD signal path is a fully differential<br />
structure from input to output for maximum stability and<br />
noise immunity. Each MDT connects to an "active" preamplifier<br />
with an associated "dummy" pre-amp. The input<br />
impedance of the pre-amps is 120 Ω, and the ENC is of the<br />
order of 6000 e⁻ RMS, with a contribution of 4000 e⁻ from<br />
the tube termination resistor [2].<br />
Following the pseudo-differential pair of pre-amps is a<br />
differential amplifier which provides gain and outputs a<br />
fully differential signal to two subsequent amplifier stages.<br />
These amplifiers supply further gain and implement the<br />
pulse shaping. In order to avoid active baseline restoration<br />
circuitry and tuneable pole/zero ratios, a bipolar shaping<br />
function was chosen [8][6].<br />
The shaper has a peaking time of 15 ns and an area balance<br />
of < 500 ns. The sensitivity at the shaper output is<br />
specified as 3 mV/primary e⁻, or 12 mV/fC, with a linear<br />
range of 1.5 V or 500 primary e⁻. The nominal<br />
discriminator threshold is 60 mV, corresponding to 20<br />
primary e⁻ or 6 σ_noise.<br />
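The two sensitivity figures and the threshold are mutually consistent, as a quick check shows (plain arithmetic on the quoted values; the implied charge per primary electron and gas gain are inferences from their ratio, not numbers stated in the text):

```python
# Consistency check of the quoted MDT-ASD shaper and threshold numbers.
sens_mv_per_e = 3.0      # mV per primary electron
sens_mv_per_fc = 12.0    # mV per fC of input charge
threshold_mv = 60.0

threshold_primary_e = threshold_mv / sens_mv_per_e    # 20 primary e-
threshold_fc = threshold_mv / sens_mv_per_fc          # 5 fC
charge_per_e_fc = sens_mv_per_e / sens_mv_per_fc      # 0.25 fC per primary e-

e_charge_fc = 1.602e-4   # elementary charge in fC
implied_gas_gain = charge_per_e_fc / e_charge_fc      # ~1.6e3 (inferred)
```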
The bipolar shaping function in conjunction with the<br />
tube gas Ar/CO2 93/7 with its maximum drift time of 800<br />
ns and significant "R-t" non-linearity, can cause multiple<br />
discriminator threshold crossings from a single traversing<br />
particle. The MDT-ASD uses an "artificial deadtime" scheme<br />
to suppress these spurious hits.<br />
In addition to the basic amplifier-shaper-discriminator<br />
architecture, the MDT-ASD features one Wilkinson<br />
charge-to-time converter per channel, programmability of<br />
certain functional and analog parameters along with a<br />
JTAG interface, and an integrated pulse injection system.<br />
Figure 1. MDT-ASD channel block diagram<br />
The shaper output is fed into the discriminator for<br />
leading edge timing measurement and into the Wilkinson<br />
ADC section for performing a gated charge measurement<br />
on the leading fraction of the tube signal (Figure 1). The<br />
information contained in the MDT-ASD output pulses,<br />
namely the leading edge timing and the pulse width<br />
encoded signal charge, are read and converted to digital<br />
data by a TDC [1].<br />
A. Wilkinson ADC<br />
The Wilkinson dual-slope charge-to-time converter<br />
operates by creating a time window of programmable<br />
width at the threshold crossing of the tube signal,<br />
integrating the signal charge onto a holding capacitor<br />
during that gate time, and then discharging the capacitor<br />
with a constant current. The rundown current is variable in<br />
order to adjust to the dynamic range of the subsequent<br />
TDC.<br />
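A first-order sketch of this dual-slope conversion (illustrative only: it ignores the second discriminator, takes the nominal gate width and discharge current from Table 1, and treats all signal charge arriving inside the gate as integrated):<br />

```python
def wilkinson_pulse_width_ns(q_fc, gate_ns=15.0, i_rundown_ua=4.5):
    """Dual-slope model: integrate for gate_ns, then discharge the
    holding capacitor at constant current, so width = gate + Q / I."""
    rundown_ns = (q_fc * 1e-15) / (i_rundown_ua * 1e-6) * 1e9  # t = Q / I
    return gate_ns + rundown_ns
```

Raising the rundown current shortens the output pulse, which is how the conversion is matched to the dynamic range of the TDC.<br />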
The Wilkinson cell operates under the control of a gate generator<br />
which consists of all-differential logic cells. It is<br />
thus highly immune to substrate coupling and can operate<br />
in real time without corrupting the analog signals.<br />
The main purpose of the Wilkinson ADC is to provide<br />
data which can be used for the correction of time-slew<br />
effects due to signal amplitude variations. Time slewing<br />
correction eventually improves the spatial resolution of the<br />
tracking detector and is necessary to achieve the specified<br />
80 μm single tube resolution. In addition, this type of<br />
charge measurement provides a useful tool for chamber<br />
performance diagnostics and monitoring (gas gain, tube<br />
ageing etc). Measurements of the Wilkinson conversion<br />
characteristics as well as the noise performance and non-systematic<br />
charge measurement errors of the Wilkinson<br />
ADC are shown in sections III.C and III.D.<br />
The feasibility of performing particle identification with<br />
the MDT system via dE/dX measurement using the Wilkinson<br />
ADC was evaluated. The results of a simulation study on the<br />
energy separation capability of the MDT system are<br />
published in [4].<br />
B. Programmable parameters<br />
It was found crucial to be able to control certain analog<br />
and functional parameters of the MDT-ASD, both at<br />
power-up/reset and during run time. A serial I/O data<br />
interface using a JTAG type protocol plus a number of<br />
associated DACs were implemented on the chip.<br />
1) Timing discriminator<br />
The threshold of the main timing discriminator is<br />
controllable over a wide range (up to > 4 times nominal)<br />
with 8-bit resolution. The discriminator also has adjustable<br />
hysteresis from 0 to 1/3 of the nominal threshold.<br />
2) Wilkinson converter control<br />
The integration gate width can be set from 8 ns to 45 ns<br />
in steps of 2.5 ns (4-bit). This setting controls what<br />
fraction of the leading part of the signal is used for<br />
conversion. The nominal gate width is 15 ns which<br />
corresponds to the average peaking time t_p of the pre-amplifier.<br />
It can be demonstrated that the time slewing is<br />
only correlated to the leading edge charge and not to the<br />
total signal charge of the MDT signal. ADC measurements<br />
with a gate > 2 × t_p thus cannot be used to further improve<br />
the spatial resolution of the system [6][7]. However for<br />
dE/dX measurements for particle identification, longer<br />
gates are desirable [4]. The current controlling the gate<br />
width is set by a binary-weighted switched resistor string.<br />
The discharge (rundown) current of the integration<br />
capacitors is controlled by a 3-bit current DAC. This<br />
feature allows the ADC output pulse width to be adjusted<br />
to the dynamic range of the TDC (e.g. 200 ns at a<br />
resolution of 0.78125 ns for AMT-1 [1]).<br />
The end of one Wilkinson conversion cycle is triggered<br />
by a second variable-threshold discriminator. The setting<br />
of this threshold also affects the width of the Wilkinson<br />
output pulse but in principle does not influence the ADC<br />
performance significantly and is primarily implemented<br />
for troubleshooting and fine-tuning purposes.<br />
3) Functional parameters<br />
The deadtime setting defines an additional time window<br />
after each hit during which the logic does not accept and<br />
process new input. It can be set from 300 to 800 ns in<br />
steps of 70 ns (3 bit). The nominal setting is 800 ns<br />
corresponding to the maximum drift time in the MDT.<br />
This feature is used to suppress spurious hits due to<br />
multiple threshold crossings in the MDT signal tail, thus<br />
reducing the required readout bandwidth.<br />
A number of set-up bits are designated to control global<br />
settings for single channels and the whole chip. For<br />
diagnostic (boundary scan interconnect testing etc.) and<br />
troubleshooting purposes, the output of each channel can<br />
be tied logic HI or LO. The chip itself can be set to work<br />
either in ToT (Time-over-threshold) or ADC mode (the<br />
output pulse contains the pulse-width encoded charge<br />
measurement information).<br />
Table 1 summarizes the programmable parameters.<br />
Table 1. MDT-ASD programmable parameters<br />
PARAMETER NOMINAL RANGE LSB UNIT<br />
DISC1 Threshold 60 -256 – 256 2 mV<br />
DISC1 Hysteresis 10 0 – 20 1.33 mV<br />
Wilkinson integration gate 14.5 8 – 45 2.5 ns<br />
DISC2 Threshold 32 32 – 256 32 mV<br />
Wilkinson discharge current 4.5 2.4 – 7.3 0.7 μA<br />
Dead-time 800 300 – 800 70 ns<br />
Calibration channel mask – – – –<br />
Calibration capacitor select – 50 – 400 50 fF<br />
Channel mode ON ON, HI, LO – –<br />
Chip mode ADC ADC, ToT – –<br />
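Assuming simple linear code-to-value mappings (an illustration only; the actual DAC transfer functions are not specified here), the gate and deadtime settings behave as follows:<br />

```python
def gate_width_ns(code):
    """4-bit integration gate DAC: 8 ns upward in 2.5 ns steps
    (linear mapping assumed)."""
    if not 0 <= code <= 15:
        raise ValueError("4-bit code expected")
    return 8.0 + 2.5 * code

def deadtime_ns(code):
    """3-bit deadtime DAC: 300 ns upward in 70 ns steps
    (linear mapping assumed)."""
    if not 0 <= code <= 7:
        raise ValueError("3-bit code expected")
    return 300.0 + 70.0 * code
```

With these mappings the top codes give 45.5 ns and 790 ns, consistent with the approximate 8 – 45 ns and 300 – 800 ns ranges quoted above.<br />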
C. Calibration pulse injection<br />
In order to facilitate chip testing during the design phase<br />
as well as to perform system calibration and test runs with<br />
the final detector assembly, a differential calibration/test<br />
pulse injection system was implemented on the chip. It<br />
consists of two parallel banks of 8 switchable 50 fF<br />
capacitors per channel and an associated channel mask<br />
register. The mask register allows each channel to be<br />
individually selected to receive test pulses or not. The<br />
capacitors are charged with external voltage pulses<br />
(nominally standard LVDS pulses with a 200 mV swing),<br />
yielding an input signal charge range of 10 – 80 fC. The<br />
pulse injection system enables fully automated timing and<br />
charge conversion calibration of the MDT sub-detector.<br />
Calibration runs are required for example after changes in<br />
certain setup parameters.
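The quoted charge range follows directly from Q = nCV (a worked check using the figures above):<br />

```python
def injected_charge_fc(n_caps, v_swing_mv=200.0, cap_ff=50.0):
    """Charge injected when n_caps of the switchable 50 fF capacitors
    are enabled and driven with a v_swing_mv voltage step: Q = n * C * V."""
    if not 1 <= n_caps <= 8:
        raise ValueError("each bank has 1 to 8 capacitors")
    return n_caps * cap_ff * v_swing_mv / 1000.0  # Q[fC] = C[fF] * V[V]
```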
III. TEST RESULTS<br />
The MDT-ASD has been prototyped extensively. The<br />
last iteration, ASD01A, is a fully functional 8-channel<br />
prototype and is considered to be the final production<br />
design. Results of functionality and performance tests on<br />
this prototype, indicate that the ATLAS MDT front-end is<br />
ready for mass-production 1 .<br />
A. Pre-amp - Shaper: Sensitivity<br />
Figure 2 shows oscilloscope traces of the shaper output<br />
at the threshold coupling point. The measurements were<br />
taken with a calibrated probe using well defined input<br />
charges. The peaking time of the delta pulse response<br />
(time between the arrows) is 14.4 ns. There is a probe<br />
attenuation of 10:1 which is not accounted for in the peak<br />
voltage values in the left hand column. Due to the<br />
differential architecture, the voltages have to be multiplied<br />
by a factor of 2 to obtain the total gain (Figure 3).<br />
Figure 2. Shaper output for 40, 60, 80 and 100 fC input charge.<br />
The peak voltages translate into the sensitivity curve below by<br />
multiplying with a factor of two (single-ended to differential)<br />
and taking into account a probe attenuation of 10:1.<br />
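The conversion from scope reading to chain gain can be sketched as follows (the probe attenuation and single-ended-to-differential factor are those named in the caption; the numeric reading in the example is illustrative):<br />

```python
def chain_gain_mv_per_fc(scope_peak_mv, q_fc, probe_atten=10.0, diff_factor=2.0):
    """Recover the full differential chain gain from a single-ended,
    probe-attenuated oscilloscope peak reading."""
    return scope_peak_mv * probe_atten * diff_factor / q_fc

# e.g. a 20 mV single-ended reading for a 40 fC pulse -> 10 mV/fC
```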
Figure 3. Sensitivity of the analog signal chain (Pre-amp to<br />
shaper) for the expected input signal range. The gain amounts to<br />
10 mV/fC, exhibiting good linearity.<br />
¹ Aspects of radiation tolerance have not been addressed in this<br />
article; however, results of radiation tests on the process and the<br />
prototype chips indicate that ATLAS requirements are met.<br />
B. Discriminator time slew<br />
Due to the finite rise time of the signal at the<br />
discriminator input, different signal amplitudes with<br />
respect to the threshold level produce different threshold<br />
crossing times. This effect is called time slew. Figure 4<br />
shows the time slew as measured for a constant threshold<br />
by varying the input charge. The time slew over the<br />
expected muon charge range (~ 20 – 80 fC) is of the order<br />
of 2 ns. Comparing this number to the requirements, it<br />
becomes obvious that slew correction through charge<br />
measurement is an essential feature of the MDT-ASD.<br />
Figure 4. Time slew of the MDT-ASD signal chain. The data<br />
display the timing of the discriminator 50% point of transition as<br />
a function of input signal amplitude for a 20 mV threshold.<br />
C. Wilkinson ADC performance<br />
The transfer characteristic of the Wilkinson charge<br />
ADC is plotted in Figure 5. The traces show the non-linear<br />
relation between input charge and output pulse width for 4<br />
different integration gates. The advantage of this<br />
compressive characteristic is that small signals which<br />
require a higher degree of time slew correction benefit from a<br />
better charge measurement resolution. The disadvantage is<br />
an increased number of calibration constants. The dynamic<br />
range spans from 90 ns (8 ns gate) to 150 ns (45 ns gate).<br />
Figure 5. Wilkinson ADC output pulse width as a function of<br />
input charge for four different integration gate widths<br />
(8, 15, 25 and 45 ns).<br />
D. Noise performance and non-systematic<br />
measurement errors<br />
The timing information carried by the ASD output signal<br />
is recorded and converted by the AMT (Atlas Muon TDC)<br />
time-to-digital converter. The AMT can be set to provide a<br />
dynamic range for the pulse width measurement of 0 - 200<br />
ns with a bin size of 0.78 ns [1]. If the ASD is<br />
programmed to produce output pulses up to a maximum of<br />
200 ns, then the combination of the ASD and the AMT<br />
chip represents a charge-ADC with a resolution of 7 - 8<br />
bits.<br />
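The bit count follows from the ratio of dynamic range to bin size (using the exact AMT bin size of 0.78125 ns quoted earlier):<br />

```python
import math

dynamic_range_ns = 200.0   # programmed maximum ASD pulse width
bin_ns = 0.78125           # AMT TDC bin size
n_bins = dynamic_range_ns / bin_ns  # 256 bins
bits = math.log2(n_bins)            # 8 bits at this setting
```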
Non-systematic errors in the timing and charge<br />
measurement due to electronic noise in the ASDs and<br />
AMTs and quantization errors set a limit to the<br />
performance of the system. The following two sections<br />
present test results on the noise performance of the MDT-<br />
ASD and determine how the noise introduces error and<br />
degrades the accuracy of the measurements.<br />
1) Time measurement<br />
Figure 6 shows the measured RMS error of the leading<br />
edge time measurement at the output of the ASD as a<br />
function of signal charge. The lower curve gives the noise<br />
for floating pre-amplifier inputs while the upper curve<br />
includes the effect of the 380 Ω tube termination resistor.<br />
The threshold is set to its nominal value of 60 mV<br />
(corresponding to ~ 5 fC). The horizontal axis gives the<br />
charge of the input signal applied through the test pulse<br />
injection system. Typical muon signals are expected to be<br />
in the range of 20 - 80 fC, resulting in an RMS error of the<br />
order of 200 ps.<br />
The time-to-digital conversion in the AMT shows an<br />
RMS error of 305 ps, including 225 ps of quantization<br />
error [1]. The resulting total error of the time<br />
measurement, covering all internal noise sources from the<br />
front-end back to the A/D conversion, will typically be of<br />
the order of 360 ps RMS.<br />
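Treating the ASD and AMT contributions as independent, they add in quadrature; the quoted total can be reproduced as follows:<br />

```python
import math

sigma_asd_ps = 200.0  # leading-edge RMS error of the ASD (typical muon charges)
sigma_amt_ps = 305.0  # AMT conversion RMS error, incl. quantization
sigma_total_ps = math.hypot(sigma_asd_ps, sigma_amt_ps)  # ~365 ps
```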
Figure 6. RMS error of the leading edge timing measurement<br />
vs. input charge for a fixed discriminator threshold (set to its<br />
nominal value of 60 mV or 5 fC). Typical muon signals will be<br />
of the order of 40 - 50 fC. Bottom curve: floating pre-amp input,<br />
top curve: with 380 Ω tube termination resistor.<br />
2) Charge measurement<br />
Measurement errors in the pulse width at the ASD<br />
output are typically below 600 ps RMS, depending on<br />
signal amplitude and integration gate width. Figure 7<br />
shows the ASD Wilkinson noise versus signal amplitude<br />
in percent of the measured charge for 3 short integration<br />
gate widths. The pulse width conversion (two independent<br />
pulse-edge conversions) in the AMT exhibits an RMS error<br />
of 430 ps including quantization error. Hence, the<br />
resulting total error, covering all internal noise sources<br />
from the front-end back to the A/D conversion, stays<br />
under 800 ps RMS. This number corresponds<br />
to a typical error of well below 1% of the measured charge<br />
for the vast majority of signals.<br />
The effect of the tube termination resistor can be seen in<br />
Figure 8. Contributing about 4000 e⁻ ENC, this termination<br />
resistor constitutes the dominant noise source of the readout<br />
system.<br />
Figure 7. RMS error of Wilkinson pulse width at the output of<br />
the ASD as a function of input signal charge for a fixed<br />
(nominal) discriminator threshold and integration gates of 11,<br />
13.75 and 17.5 ns, given in percent of the measured charge.<br />
Note the decrease in noise for growing integration gate widths.<br />
Figure 8. Effect of the 380 Ω tube termination resistor on the<br />
charge measurement error (11 ns integration gate).<br />
All systematic charge measurement errors, e.g. due to<br />
converter non-linearities or channel-to-channel variations,<br />
can be calibrated out using the ASD's programmable<br />
test-pulse injection system.<br />
IV. ON-CHAMBER TESTS WITH A COSMIC RAY<br />
TEST SETUP<br />
A cosmic ray test stand has been set up at Harvard<br />
University. The system with one Module-0 endcap<br />
chamber (EIL type) and a trigger assembly of 4 scintillator<br />
stations records > 1 GeV cosmic muons. The read-out<br />
electronics employs an earlier 4-channel prototype of the<br />
ASD, mounted on "mezzanine" boards, each of which<br />
services 24 tubes. This earlier ASD version does not<br />
contain a Wilkinson ADC or a test-pulse circuit, but for<br />
the purposes of this test it is functionally equivalent to the<br />
latest prototype. An extensive description of this test stand<br />
and presentation of the analysis methods and results are<br />
the subject of a forthcoming ATLAS note by S. Ahlen.<br />
A histogram of TDC values for single-muon 8-tube<br />
events is shown in Figure 9. The maximum drift time is<br />
seen to be about 1000 channels (780 ns).<br />
Figure 9. TDC spectrum produced on the cosmic ray test stand.<br />
A track fitting program to evaluate chamber resolution<br />
has been developed. The procedure first obtains fits using<br />
the four tubes of each multilayer. These fits determine the<br />
most likely position of the global trajectory relative to the<br />
drift tube wire by considering all 16 possibilities for each<br />
multilayer. A global 8-tube straight-line fit is then done<br />
using this information, after which the two most poorly<br />
fitting tubes are rejected and a final 6-tube fit is performed.<br />
This last step rejects delta rays, poor fits for near-wire hits,<br />
and large multiple scatters. With no additional data cuts a<br />
single tube tracking resolution of about 100 µm (and<br />
nearly 100% efficiency) is obtained.<br />
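The rejection step can be illustrated with a generic straight-line fit that drops the two worst residuals and refits (a toy sketch only: it works in plain x-y coordinates and omits the drift-circle geometry and left-right ambiguity resolution described above; all names are hypothetical):<br />

```python
def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b to (x, y) tuples."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def fit_with_rejection(points, n_reject=2):
    """Fit all points, drop the n_reject largest residuals, refit the rest."""
    a, b = fit_line(points)
    ranked = sorted(points, key=lambda p: abs(p[1] - (a * p[0] + b)))
    return fit_line(ranked[:len(points) - n_reject])
```

Hits distorted by delta rays or large multiple scatters acquire large residuals in the first pass and are excluded from the final fit.<br />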
By requiring consistency of the slopes of the 4-tube fits<br />
in the two multilayers (within 4 mrad), more multiple scatters and<br />
delta rays are rejected. The result of this cut is that the<br />
single tube spatial resolution improves to about 70 µm<br />
with about 45% efficiency.<br />
Figure 10 shows the distribution of the residuals<br />
representing the distances from the fitted track line to the<br />
time circles around the wires.<br />
Figure 10. Spatial resolution of the EIL chamber on the cosmic<br />
ray test stand (horizontal axis in mm)<br />
More detailed studies of the MDT resolution are<br />
underway at several sites, but these initial results suggest<br />
that the ASD-based front-end electronics can provide the<br />
required precision under operational conditions.<br />
V. CONCLUSIONS<br />
Development, design and performance of the 8-channel<br />
CMOS front-end for the MDT segment of the ATLAS<br />
Muon Spectrometer have been presented. The device is<br />
implemented as an ASIC and fabricated using a standard<br />
commercial 0.5 μm CMOS process. Irradiation data on the<br />
fabrication process and on the prototype chip exist and<br />
indicate that ATLAS radiation hardness standards are met.<br />
Results of functionality and performance tests, both in<br />
the lab and on-chamber demonstrate that the ATLAS<br />
MDT front-end is ready for mass-production.<br />
VI. REFERENCES<br />
[1] Y. Arai, Development of front-end electronics and TDC<br />
LSI for the ATLAS MDT, Nucl. Instr. and Meth. A 453<br />
(2000) 365-371.<br />
[2] J. Huth, A. Liu, J. Oliver, Note on Noise Contribution of<br />
the Termination Resistor in the MDTs, ATLAS Internal<br />
Note, ATL-MUON-96-127, CERN, Aug. 1996.<br />
[3] J. Huth, J. Oliver, W. Riegler, E. Hazen, C. Posch, J.<br />
Shank, Development of an Octal CMOS ASD for the<br />
ATLAS Muon Detector, Proceedings of the Fifth<br />
Workshop on Electronics for LHC Experiments,<br />
CERN/LHCC/99-33, Oct. 1999.<br />
[4] G. Novak, C. Posch, W. Riegler, Particle identification<br />
in the ATLAS Muon Spectrometer, ATLAS Internal<br />
Note, ATL-COM-MUON-2001-020, CERN, June 2001.<br />
[5] C. Posch, E. Hazen, J. Oliver, MDT-ASD, CMOS frontend<br />
for ATLAS MDT, ATLAS Internal Note, ATL-<br />
COM-MUON-2001-019, CERN, June 2001.<br />
[6] W. Riegler, MDT Resolution Simulation - Front-end<br />
Electronics Requirements, ATLAS Internal Note,<br />
MUON-NO-137, CERN, Jan. 1997.<br />
[7] W. Riegler, Limits to Drift Chamber Resolution, PhD<br />
Thesis, Vienna University of Technology, Vienna,<br />
Austria, Nov. 1997.<br />
[8] W. Riegler, M. Aleksa, Bipolar versus unipolar shaping<br />
of MDT signals, ATLAS Internal Note, ATL-MUON-<br />
99-003, March 1999.
"The MAD", a Full Custom ASIC<br />
for the CMS Barrel Muon Chambers Front End Electronics<br />
Abstract<br />
To meet the front-end electronics needs of the CMS barrel muon<br />
chambers, a full custom ASIC named "The MAD" was<br />
first developed by INFN Padova and then produced in<br />
80,000 pieces to equip the 180,000 drift tubes [1].<br />
The task of this IC is to amplify signals picked up by<br />
chamber wires, compare them against an external threshold<br />
and transmit the results to the acquisition electronics.<br />
The chip, built using 0.8 µm BiCMOS technology,<br />
provides 4 identical chains of amplification, discrimination<br />
and cable driving circuitry. It integrates a flexible channel<br />
enabling/disabling feature and a temperature probe for<br />
monitoring purposes.<br />
The working conditions of the detector set requirements<br />
for high sensitivity and speed combined with low noise and<br />
low power consumption. Moreover, as the basic requirement<br />
for the frontend is the ability to work at very low threshold to<br />
improve efficiency and time resolution, a good uniformity of<br />
amplification between channels of different chips and very<br />
low offset for the whole chain are needed.<br />
The ASIC has been extensively tested, with good results;<br />
in particular, a major effort was put into testing radiation<br />
(neutron, gamma ray and ion) tolerance and ageing effects to<br />
verify behaviour and reliability in the LHC environment.<br />
A. General<br />
I. ASIC DESCRIPTION<br />
The analog frontend electronics for the muon chambers of<br />
CMS barrel has been integrated in a full custom ASIC, named<br />
"The MAD", developed by INFN of Padova using 0.8 µm<br />
BiCMOS technology from Austria Mikro Systeme. Each chip<br />
provides the signal processing for 4 drift tubes in a 2.5 × 2.5 mm²<br />
die, housed in a TQFP44 package.<br />
Figure 1 shows the block diagram of the ASIC: the 4<br />
identical analog chains are made of a charge preamplifier<br />
followed by a simple shaper with baseline restorer, whose<br />
output is compared against an external threshold by a latched<br />
discriminator; the output pulses are then stretched by a<br />
programmable one-shot and sent to an output stage able to<br />
drive long twisted pair cables with LVDS compatible levels.<br />
F. Gonella and M. Pegoraro<br />
University and INFN Sez. of Padova, 35131 Padova, Italy<br />
franco.gonella@pd.infn.it, matteo.pegoraro@pd.infn.it<br />
Control and monitoring features have been included in the<br />
chip: to mask noisy wires, each channel can be disabled at the<br />
shaper input, resulting in little crosstalk to neighbours. A fast<br />
disable/enable feature, controlled via LVDS levels acting on<br />
the output driver of left and right channel pairs, allows the<br />
simulation of tracks perpendicular to the detector. An absolute<br />
temperature probe has been integrated in order to detect<br />
electronics failures and monitor environmental changes.<br />
Two separate power supplies (5 V and 2.5 V) are used in<br />
order to reduce power drain and minimize interference<br />
between input and output sections. The layout and routing<br />
have received particular care, and many pins have been<br />
reserved for power, input ground and analog ground.<br />
To prevent latch-up events and improve crosstalk<br />
performance, guard ring structures have been widely used to<br />
isolate sensitive stages like the charge preamplifier or<br />
complementary MOS devices.<br />
Figure 1: Block diagram of the ASIC.<br />
B. Analog section<br />
The preamplifier uses a single gain stage with a GBW<br />
product in excess of 1 GHz (from simulation) and a feedback<br />
time constant of 33 ns. Input pads, derived from standard<br />
ones, were modified to enhance ESD protection by integrating<br />
a series resistor and large diodes connected to analog ground.<br />
Power dissipation for this stage is about 2.5 mW.<br />
The shaper is a low gain integrator with a small time<br />
constant: the noninverting input is connected to the<br />
preamplifier, while the inverting one allows this stage to be placed<br />
inside the feedback loop of a low offset OTA; this<br />
combination implements a time invariant baseline restorer
acting as a high pass filter for the signal path. Tests performed<br />
on this circuit show that the quiescent level of the shaper<br />
output can be set anywhere between 1.0 and 3.5 V even in the<br />
presence of worst-case parameters for fabrication process and<br />
operating conditions of the IC. The pin VREF, common to the<br />
four OTAs, is used to control the quiescent level from outside.<br />
The output of the shaper is directly connected to the<br />
noninverting input of a fully differential discriminator with 2<br />
gain stages. The other input of the comparator is connected to<br />
the external threshold pin VTH, common to all channels. The<br />
input section uses no special technique, except a careful<br />
layout, to obtain low offset with high speed; a hysteresis of<br />
about ±1 mV helps avoid self-oscillation and speeds up<br />
switching with slow input signals. Common mode input<br />
voltage ranges from 1.2 to 3.8 V. Finally, a buffer prevents<br />
switching noise from the following sections from propagating<br />
backwards to sensitive paths.<br />
All previously described blocks share a single +5 V supply<br />
for about 12 mW power drain.<br />
C. Output section<br />
The buffered output of the discriminator is capacitively<br />
coupled to a one-shot that is very similar to a classical astable<br />
multivibrator; its differential output, when active, stores the<br />
status of the comparator in the latch, thus producing a<br />
non-retriggerable pulse whose width is inversely proportional to<br />
the current sunk from W_CTRL pin, again shared by all<br />
channels. Critical parameters of this section are propagation<br />
delay, which sets the ability to catch narrow pulses produced<br />
by signals just over threshold, and the time it takes to fully<br />
recover after the falling edge of a pulse.<br />
The same lines that activate the latch are used to feed the<br />
output driver, again a differential one capable of driving a 100<br />
Ω load at voltage levels compatible with LVDS standard.<br />
Voltage driving has been chosen because NPN bipolar<br />
transistors are faster than PNP and PMOS devices and also<br />
because this turns out to be a power convenient choice when<br />
the load is a cable terminated at both ends. The working<br />
conditions of the driver are a compromise between speed and<br />
power drain: rise and fall time are below 2.5 ns and, to reduce<br />
external components, terminating resistors are integrated in<br />
the pads. To reduce consumption, the supply voltage is the<br />
lowest possible, 2.5 V, yielding power dissipation, including<br />
the one-shot, of about 12 mW.<br />
D. Temperature sensor, channels masking and<br />
biasing<br />
Temperature sensing is based on the voltage difference<br />
between base emitter junctions operated at different current<br />
densities. The voltage output is 7.5 mV/K and the power drain about<br />
1 mW from 5 V. The output is always available at pin <br />
while at pin T_OUT a unity gain buffer, enabled by a TTL<br />
high level (pin T_EN), allows the multiplexing of more chips<br />
on the same net.<br />
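The text specifies only the 7.5 mV/K slope; assuming a pure proportional-to-absolute-temperature output with zero offset (an assumption, not stated in the article), the readout conversion would be:<br />

```python
def sensor_temp_celsius(v_out_mv, slope_mv_per_k=7.5):
    """Convert the sensor output to Celsius, assuming V = slope * T(K)
    with no offset (assumption: the article only gives the slope)."""
    return v_out_mv / slope_mv_per_k - 273.15
```

Under this assumption a reading of about 2.25 V would correspond to roughly 27 °C, the average on-FEB value reported in section II.<br />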
Each frontend channel can be masked by a TTL high level<br />
applied to pins A_EN(1-4); in this case recovery to normal<br />
operation requires about 10 μs. Channels 1 & 2 (left channels)<br />
can be enabled or disabled in about 30 ns by a differential<br />
signal, LVDS or 3.3 V PECL, applied to pins D_ENL(1-2);<br />
the same for right channels 3 & 4 via pins D_ENR(1-2).<br />
A bias circuit controls the current generators of the whole<br />
chip and supplies voltage for one-shot sections. Its output is<br />
connected to pin BYP for bypassing with a capacitor to GNA.<br />
II. ASIC PERFORMANCES<br />
We have verified ASIC performances on bare chips using<br />
a specific test board with minimal stray capacitance to reduce<br />
measurement errors. The same measurements have been<br />
carried out on chips mounted on FEBs, the final boards in<br />
which the frontend electronics of the CMS barrel muon<br />
chambers is organized.<br />
First of all, power dissipation is very low, about 25 mW/ch<br />
with little variation with temperature and input signal rate.<br />
A. General tests<br />
Since no test pads were foreseen in the chip, tests on the<br />
analog section are based on the statistics of output response to<br />
δ−like charge pulses, injected by a voltage pulse generator via<br />
a series capacitor of about 1 pF.<br />
In order to measure the gain, two different values of<br />
charge, 3 and 9 fC, have been injected: the resulting threshold<br />
distributions for the bare chip have mean values of 10.7 and<br />
33.6 mV, respectively, with r.m.s. of 0.34 and 0.42 mV (see<br />
figure 2). This little spread is due to gain variations among<br />
chips, caused by the tolerance of feedback capacitor in charge<br />
preamplifier, and by the discriminator and baseline restorer<br />
offset.<br />
The resulting gain is 3.8 mV/fC on average, about 10% higher<br />
than simulated because of a process parameter (capacitor<br />
oxide thickness) out of specification in the pre-series wafer<br />
(results shown are from pre-series devices). Sensitivity is<br />
constant up to 500 fC input with less than 1% integral<br />
nonlinearity, and saturation occurs at about 800 fC. Uniformity<br />
is very good (r.m.s. about 0.01 mV/fC), as the chips belong to the<br />
same wafer.<br />
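The quoted gain can be reproduced from the two threshold means (a worked check using the measured values above):<br />

```python
q1, v1 = 3.0, 10.7   # injected charge (fC), mean threshold (mV)
q2, v2 = 9.0, 33.6
gain_mv_per_fc = (v2 - v1) / (q2 - q1)  # ~3.8 mV/fC
```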
Figure 2: Sensitivity and noise of bare ASIC (32 channels each:<br />
sensitivity mean 3.77 mV/fC, std. dev. 0.01, range 3.73 – 3.80 mV/fC;<br />
noise mean 1318 e⁻, std. dev. 65, range 1230 – 1548 e⁻).<br />
Other key characteristics for low threshold operation are<br />
noise and crosstalk: bare chips exhibit ENC 1320 electrons
(slope of 45 e⁻/pF) and crosstalk below 0.1%. Once<br />
mounted on the PCB, these two figures increase to 1900<br />
electrons (slope of 60 e-/pF) and 0.2% because of the external<br />
protection network mounted on FEBs in order to withstand a<br />
full discharge of one drift tube.<br />
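The quoted figures define a linear ENC-versus-capacitance model (a sketch; treating the quoted ENC values as zero-capacitance intercepts is an assumption):<br />

```python
def enc_electrons(c_det_pf, on_feb=False):
    """ENC vs. detector capacitance: bare chip 1320 e- + 45 e-/pF;
    mounted on the FEB 1900 e- + 60 e-/pF (measured values)."""
    base, slope = (1900.0, 60.0) if on_feb else (1320.0, 45.0)
    return base + slope * c_det_pf
```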
Figure 3 shows the time walk characteristics versus the<br />
input charge overdrive (amount of charge over the threshold)<br />
for the bare ASIC in two different configurations. In the first<br />
case the input pins are directly connected to the charge<br />
generator, while in the second one a 40 pF capacitor is added<br />
to simulate the detector capacitance (about 10 pF/m, so this<br />
value represents the worst case). Since the estimated average<br />
charge is about 50 – 100 fC, the deterioration in time walk<br />
performance is acceptable.<br />
Figure 3: Time walk of the bare chip for two input configurations:<br />
Cd = 0 with a 3 fC threshold, and Cd = 40 pF with a 4 fC threshold<br />
(input charge overdrive from 1 to 1000 fC).<br />
Temperature sensors integrated in the ASIC were also<br />
tested for chips mounted on the FEBs: at an ambient<br />
temperature of 23±1 °C, we obtain an average reading of 27 °C,<br />
with a maximum error of ±3 °C and a self-heating of 4 °C<br />
with respect to ambient.<br />
A conversion factor of 7.5 mV/K has been measured<br />
for a few ASICs in a climatic chamber over the range 0÷100 °C.<br />
In the same chamber we have verified the specifications as a<br />
function of temperature. All performance figures, except the<br />
output width, show little variation with temperature, and the<br />
output levels comply with the LVDS standard over the 0÷100 °C<br />
range and for supply voltage tolerances of ±10%.<br />
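With the measured 7.5 mV/K factor, a voltage change on the sensor output maps directly to a temperature change; the sketch below uses the sensor differentially (the absolute offset would need a separate calibration):<br />

```python
MV_PER_K = 7.5  # measured conversion factor of the integrated sensor

def delta_t_K(delta_v_mV):
    """Temperature change inferred from a change of the sensor output voltage."""
    return delta_v_mV / MV_PER_K

# A 30 mV rise corresponds to the 4 K self-heating quoted for mounted chips
print(delta_t_K(30.0))  # 4.0
```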
B. July 1999 and 2000 Test Beam results<br />
A prototype chamber with the final cell design, equipped<br />
with MAD chips, was installed inside the M1 magnet of the<br />
CERN H2 zone and exposed to high-energy muon beams in<br />
July 1999 [4] and in the same period of 2000 [6].<br />
The results obtained were very satisfactory: an efficiency<br />
higher than 95.5% and a resolution of 200 μm were reached<br />
in all operating conditions at safe drift-tube voltages.<br />
In detail, figure 4 shows the meantimer resolution for<br />
different beam positions along the wires [5] and different<br />
wire amplifications: the degradation of signals coming from<br />
the end of the wire opposite the FEB accounts for a<br />
resolution worsening limited to 0.5 ns in real conditions.<br />
Figure 4: Meantimer resolution for different beam positions along<br />
the wires and different amplification.<br />
III. ASIC RELIABILITY<br />
About 45000 ASICs are located inside the gas volume of<br />
the CMS detector, in a barely accessible environment, and<br />
must operate for a long time (at least 10 years) with no or<br />
minimal maintenance.<br />
Electronics reliability is therefore a crucial point, and<br />
specific tests were performed to check the ASIC against<br />
radiation, ageing and HV discharges.<br />
A. Radiation tests<br />
Besides the usual considerations about electronics wear-out,<br />
particular attention must be devoted to radiation tolerance: in<br />
the barrel muon stations only the neutron flux (5·10¹⁰ n/cm²<br />
for 10 years of LHC activity, 10% thermal) can generate<br />
problems (the iron yoke shields against other radiation types),<br />
mainly through SEU and SEL (Single Event Upsets and Latch-up).<br />
The first accounts for trigger and readout noise, while the<br />
second can cause device burn-out.<br />
The neutron spectrum in CMS ranges from thermal<br />
up to high energy. To cover it, we performed tests at the<br />
INFN Legnaro National Laboratory (LNL, for thermal and fast<br />
neutrons up to 10 MeV) and at the Université Catholique de<br />
Louvain laboratory in Louvain-la-Neuve (for fast neutrons<br />
up to 60 MeV) [2].<br />
Low-energy neutrons were produced using a graphite<br />
moderator with a deuterium beam accelerated up to 7 MeV by<br />
the LNL Van de Graaff accelerator, while fast neutrons up<br />
to 10 MeV were obtained from the same beam on a thick<br />
beryllium target. Similarly, the Louvain facility provides<br />
a wide neutron spectrum, roughly flat in the range 20-60<br />
MeV, using a proton beam with energy up to 65 MeV.<br />
For all radiation tests a suitable acquisition system was<br />
implemented to monitor the supply currents and temperature<br />
of the ASIC and so detect latch-up events; SEU counts at<br />
different threshold levels were acquired on single channels<br />
to verify the threshold dependence. At the end of the<br />
exposure, all devices were also thoroughly re-tested to<br />
check for changes in their static and dynamic characteristics.<br />
Being basically a charge sensitive device, this ASIC is<br />
naturally affected by SEU: we are interested in checking if the<br />
associated rate is compatible with detector efficiency.<br />
The neutron cross-section per readout channel is plotted in<br />
Figure 5, showing a roughly exponential dependence on the<br />
threshold. The contribution of slow neutrons to the<br />
cross-section is not negligible (only one measurement, at a<br />
20 fC threshold, is available). From these results we estimate<br />
a few thousand spurious counts over the whole detector<br />
lifetime: a very safe value.<br />
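The estimate scales as cross-section × integrated fluence × number of channels. The sketch below uses the 10-year fluence quoted above, but a placeholder cross-section and channel count (neither is the operating-point figure of the experiment):<br />

```python
def spurious_counts(sigma_cm2_per_ch, fluence_n_cm2, n_channels):
    """Expected SEU count: per-channel cross-section x fluence x channel count."""
    return sigma_cm2_per_ch * fluence_n_cm2 * n_channels

# 5e10 n/cm2 is the quoted 10-year fluence; 1e-12 cm2 and 180000 channels
# are placeholder values for illustration only.
print(spurious_counts(1e-12, 5e10, 180000))  # of the order of a few thousand
```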
[Figure 5 plots cross-section (10⁻¹⁰÷10⁻⁶ cm²/channel, log scale)<br />
versus threshold charge (0÷100 fC) for MAD at Louvain (60 MeV),<br />
LNL (10 MeV) and LNL (thermal).]<br />
Figure 5: Fast and slow neutrons cross-section versus threshold.<br />
In all tests performed with neutrons no latch-up events<br />
were detected, and performance tests done after irradiation<br />
showed no significant changes.<br />
For a better SEL characterization of the chip, we decided<br />
on a test with heavy ions, whose energy deposition is<br />
considerably larger [3]. Measurements were performed in April 2000<br />
at the Tandem accelerator of INFN LNL. The ion beams were<br />
set to obtain fluxes (monitored on line with silicon diodes) of<br />
10⁴-10⁵ ions/cm²·s in order to reach an integrated fluence<br />
of a few ions/μm² in the die. Table 1 shows the ions used,<br />
selected to cover a useful range of LET values.<br />
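The beam time needed follows from the target fluence (quoted per μm²) and the flux (per cm²·s), since 1 ion/μm² = 10⁸ ions/cm². A quick conversion, with illustrative numbers:<br />

```python
def exposure_time_s(target_ions_per_um2, flux_ions_per_cm2_s):
    """Seconds of beam needed to integrate a fluence given in ions/um^2
    (1 ion/um^2 = 1e8 ions/cm^2)."""
    return target_ions_per_um2 * 1e8 / flux_ions_per_cm2_s

# e.g. 3 ions/um^2 at the upper quoted flux of 1e5 ions/(cm^2 s)
print(exposure_time_s(3.0, 1e5))  # 3000.0 s, i.e. 50 minutes
```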
Table 1: LET (MeV·cm²/mg) and energy (MeV) of the ion beams used.<br />
Ion | Energy | LET<br />
⁷⁹Br | 242 | 39.4<br />
¹⁰⁷Ag | 267 | 54.7<br />
¹²⁷I | 277 | 61.8<br />
The irradiated ASIC was a sample of the final production,<br />
housed in a ceramic package without cover in order to expose<br />
directly the silicon die.<br />
Figure 6 shows the SEU cross-section versus threshold<br />
measured in two different conditions: the largest figures<br />
were obtained with the amplifying section of all channels<br />
enabled, while in the second case only the output circuits<br />
were enabled, resulting in much lower values, independent<br />
of threshold. We found little variation of the cross-section<br />
with ion type, the values ranging from 4·10⁻⁴ to<br />
7·10⁻⁴ cm²/ch. Also in this case no latch-up event was<br />
detected.<br />
Hence the results from irradiation with heavy ions show<br />
little ASIC sensitivity to SEU and immunity to latch-up, in<br />
agreement with the previous neutron tests.<br />
[Figure 6 plots heavy-ion cross-section (0÷8·10⁻⁴ cm²/ch) versus<br />
threshold (0÷300 fC) for Br (242 MeV), Ag (267 MeV) and I (277 MeV),<br />
each with masks off and on.]<br />
Figure 6: Heavy ions cross-section versus threshold.<br />
A last check was performed with gamma rays: a few<br />
prototypes were exposed to a cobalt source up to an 80 krad<br />
dose at the Bologna facility; the static and dynamic<br />
characteristics measured before and after irradiation showed<br />
no significant change.<br />
B. HV discharges test<br />
Whenever a spark occurs in a drift tube, a very large<br />
charge, stored in the 470 pF decoupling capacitor, moves into<br />
the sensitive input pins of the chip. To protect the ASIC<br />
from this potentially destructive event a protection circuit was<br />
added on the FEB: a 39 Ω series resistor and a double diode to<br />
ground, all in parallel with a 100 μm spark gap limiting the<br />
voltage to about 500 V. Tests have shown that with this<br />
configuration the ASIC inputs can withstand more than 10⁵<br />
sparks at full wire potential (~3.6 kV) and still work.<br />
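The severity of a single spark can be gauged from the charge and energy stored in the decoupling capacitor at full wire potential:<br />

```python
C = 470e-12  # decoupling capacitor (F)
V = 3600.0   # full wire potential (~3.6 kV)

charge_uC = C * V * 1e6           # Q = C*V, charge dumped into the input network
energy_mJ = 0.5 * C * V**2 * 1e3  # E = C*V^2/2, energy stored in the capacitor

print(round(charge_uC, 2), round(energy_mJ, 2))  # 1.69 uC, 3.05 mJ
```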
C. Ageing test<br />
Another important parameter for system reliability is the<br />
MTBF (mean time between failures). The practical way to<br />
measure it is to let the electronics operate under stress<br />
conditions (high temperature and supply values) for a long<br />
time, producing accelerated ageing, and to record failures<br />
versus time.<br />
A test was performed on 10 prototypes, kept in an oven<br />
at 125 °C for 2000 hours in order to simulate 10 years of<br />
CMS activity. The test ended with no faults; extensive tests<br />
on the whole front-end electronics, FEBs and service boards,<br />
simulating several years of activity, are now in progress,<br />
so far with no faults.<br />
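Accelerated-ageing equivalences of this kind are conventionally estimated with an Arrhenius model; the activation energy below (0.4 eV) is an assumed value, chosen only to show that 2000 h at 125 °C can plausibly stand in for about ten years of use:<br />

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(ea_eV, t_use_K, t_stress_K):
    """Arrhenius acceleration factor between use and stress temperatures."""
    return math.exp(ea_eV / K_B * (1.0 / t_use_K - 1.0 / t_stress_K))

# Assumed activation energy of 0.4 eV; 25 C use versus the 125 C oven of the test
af = acceleration_factor(0.4, 298.0, 398.0)
years = 2000.0 * af / 8760.0  # 2000 oven hours expressed in equivalent use-years
print(round(af), round(years, 1))
```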
IV. CONCLUSIONS<br />
The MAD ASIC, of which about 80,000 tested pieces have<br />
now been produced, shows very good performance at low<br />
power consumption, as summarized in Table 2. The<br />
temperature probe and masking features also work properly.<br />
The chip was extensively and successfully tested with<br />
muon beams at the CERN H2 facility.<br />
COMPASS, a HEP experiment under construction at the<br />
Super Proton Synchrotron (SPS), has used several thousand<br />
MAD chips for its multiwire proportional chambers, and<br />
preliminary results confirm good performance.<br />
The ASIC shows good MTBF characteristics, a low SEU<br />
rate and immunity to latch-up events, in spite of using a<br />
standard and inexpensive technology. Safe and reliable<br />
operation in the CMS environment can be expected, with a<br />
reasonably low rate of background events and failures.<br />
V. REFERENCES<br />
[1] F. Gonella, M. Pegoraro, A prototype ASIC for the<br />
readout of the drift tubes of CMS Barrel Muon Chambers,<br />
Proceedings of the Fourth Workshop on Electronics for LHC<br />
experiments, CERN LHCC 98-36, 1998 p. 257.<br />
[2] S. Agosteo et al., First evaluation of neutron induced<br />
Single Event Effects on the CMS barrel muon electronics,<br />
CMS Note 2000/024.<br />
[3] L. Barcellan, F. Gonella, D. Pantano, M. Pegoraro and<br />
J. Wyss, Single Events Effects induced by heavy ions on the<br />
frontend ASIC developed for the muon chambers of CMS<br />
barrel, LNL Annual Report 2000, pag. 248.<br />
[4] M. Aguilar-Benítez et al., Construction and Test of the<br />
final CMS Barrel Drift Tube Muon Chamber Prototype,<br />
accepted for publication in Nucl. Instr. And Meth. A (2001).<br />
[5] S. Paoletti, A study of the CMS Q4 prototype chamber<br />
for different beam positions along the wires, CMS IN<br />
2000/021.<br />
[6] M. Cerrada et al., Results from the Analysis of the Test<br />
Beam Data taken with the Barrel Muon DT Prototype Q4,<br />
CMS Note 2001/041.<br />
Table 2: Summary of ASIC performances (preserie).<br />
power ≅ 25 mW/channel @ +5 V & +2.5 V<br />
threshold range 0÷500 fC with < 1% nonlinearity<br />
Zin ≅ 100 Ω (5÷200 MHz)<br />
crosstalk < 0.1%<br />
noise ≅ 1300 e⁻ ± 5% @ CD = 0; slope ≅ 45 e⁻/pF<br />
propagation delay ≅ 4 ns<br />
sensitivity ≅ 3.77 mV/fC ± 0.5%<br />
time walk ≅ 3.5 ns @ CD = 0<br />
BLR + discriminator offset < 0.13 fC r.m.s.<br />
output pulse width 20÷200 ns (5% r.m.s. @ 50 ns)<br />
max input signal before saturation ≅ 800 fC<br />
one-shot dead time ≅ 9 ns<br />
input rate without loss of accuracy > 2 MHz @ 800 fC<br />
output tr & tf < 2.5 ns<br />
No latch-up events detected for neutrons up to 60 MeV<br />
Figure 7: Microphoto of the final die bonded to a ceramic case.
Status of the CARIOCA Project<br />
W. Bonivento 1,2 , D. Moraes 1,3 , P. Jarron 1 , W. Riegler 1 , F. dos Santos 1<br />
1 CERN, 1211 Geneva 23, Switzerland<br />
2 Istituto Nazionale di Fisica Nucleare, Sezione di Cagliari, Italy<br />
3 LAPE – IF/UFRJ, CP 68528 Cidade Univ., Ilha do Fundão BR-21945970 Rio de Janeiro, Brazil.<br />
Walter.Bonivento@cern.ch, Danielle.Moraes@cern.ch<br />
Abstract<br />
CARIOCA is an amplifier shaper discriminator chip,<br />
developed in IBM 0.25µm CMOS for the readout of the<br />
LHCb muon wire chambers. Four prototype chips were<br />
designed and fabricated over the last two years, following a<br />
step-by-step approach to test the different functionalities of<br />
the chip. In this paper the design and test results of a<br />
positive polarity and negative polarity amplifier, as well<br />
as a shaper circuit, are discussed.<br />
I. INTRODUCTION<br />
The LHCb muon system will use 80,000 wire<br />
chamber channels of negative (wire readout) and positive<br />
(cathode readout) polarity. Figure 1 shows the block<br />
diagram for one readout channel.<br />
Figure 1: Block diagram of one readout channel<br />
The chamber signal, with a fast rising edge and a long<br />
1/t tail, is amplified and shaped to a unipolar narrow pulse<br />
in order to cope with the high rate expected in the<br />
experiment. A baseline restoration circuit is needed to<br />
compensate for baseline shifts and fluctuations. The<br />
circuit is fully differential from the shaper on.<br />
To date, the positive and negative amplifiers, as<br />
well as shaper and discriminator have been designed and<br />
fabricated. The results of a 4-channel version of the<br />
positive polarity amplifier are presented in [1], together<br />
with the chip specifications. In this report we present<br />
measurements of a 14-channel positive amplifier chip and<br />
results of the negative polarity amplifier and the shaper.<br />
II. THE 14-CHANNEL POSITIVE POLARITY<br />
AMPLIFIER<br />
The main purpose of the 14-channel chip was to test chip<br />
uniformity and crosstalk. The individual channels are linear<br />
up to an injected charge of 200fC (delta input), with a<br />
non-linearity error of about 1%. The measured sensitivity is<br />
8mV/fC up to a detector capacitance of 140pF, and we find<br />
an equivalent noise charge of ENC = 867e- + 36e-/pF. The<br />
sensitivities of all 14 channels agree within 10%. The noise<br />
and threshold variation was measured to be 7% R.M.S. The<br />
crosstalk is smaller than 1%. The power consumption of about<br />
18mW per channel is dominated by the LVDS driver.<br />
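These noise figures set the practical threshold floor. As a rule-of-thumb sketch (the 5σ margin is our assumption, not a chip specification), converting the ENC at full detector capacitance into fC:<br />

```python
E_FC = 1.602e-4  # one electron charge expressed in fC

def min_threshold_fC(c_det_pF, enc0_e=867.0, slope_e_per_pF=36.0, n_sigma=5.0):
    """Lowest comfortable threshold, taken as n_sigma times the ENC at the
    given detector capacitance (a common rule of thumb, not a chip spec)."""
    enc_e = enc0_e + slope_e_per_pF * c_det_pF
    return n_sigma * enc_e * E_FC

print(round(min_threshold_fC(140.0), 2))  # 5 x 5907 e- = 4.73 fC at 140 pF
```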
III. THE NEGATIVE POLARITY AMPLIFIER<br />
A. Design<br />
The design of the negative polarity amplifier follows<br />
closely that of the positive one [1]. Figure 2 shows a<br />
simplified schematic. The input stage is a cascode<br />
structure (N1) with a large input transistor that is<br />
followed by a voltage to current converter (N0) and a<br />
current mirror (N2). The mirror feeds the current to the<br />
output stage (N4) and back to the input stage. The output<br />
current is finally converted into voltage that is driven to<br />
the chip pad by an analog buffer. The size of the<br />
transistors N3 and N4 determines the current gain of<br />
about 6.<br />
Figure 2: Simplified schematic of the negative<br />
polarity amplifier.
From CADENCE simulation the bandwidth was<br />
found to be 16MHz and 23MHz for the negative and<br />
positive amplifier, respectively. The input impedance is<br />
below 50Ω within the bandwidth.<br />
B. Measurements<br />
The circuit is linear within 1% up to 200fC.<br />
Results of peaking time, sensitivity and noise<br />
measurements vs. detector capacitance, together with<br />
CADENCE simulations, are shown in Figures 3, 4 and 5.<br />
The fit to the points of Figure 5 gives a noise of<br />
ENC = 951e- + 31e-/pF.<br />
Figure 3: Measured (black dots) and simulated (white<br />
dots) peaking time vs. detector capacitance for the<br />
negative polarity amplifier.<br />
Figure 4: Measured (black dots) and simulated (white<br />
dots) sensitivity vs. detector capacitance for the negative<br />
polarity amplifier.<br />
Figure 5: Measured equivalent noise charge vs. detector<br />
capacitance for the negative polarity amplifier.<br />
IV. THE SHAPER<br />
C. Design<br />
The shaper is a differential amplifier in a folded cascode<br />
configuration with common mode feedback. A simplified<br />
schematic is shown in Figure 6. The shaper is designed<br />
with a speed such that the amplifier peaking time is not<br />
significantly degraded. Therefore the dominant high<br />
frequency pole is located at 160MHz. The 1/t tail<br />
cancellation is performed by a double pole/zero<br />
compensation network [2][3] displayed in Figure 7.<br />
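The principle of pole/zero tail cancellation can be sketched numerically: a stage with transfer H(s) = (s + 1/τz)/(s + 1/τp) turns an input tail exp(-t/τz) into the much shorter exp(-t/τp). One stage of the (double) network is modelled below with a simple discretisation; time constants are illustrative, not the actual network values:<br />

```python
import math

def pole_zero(x, dt, tau_z, tau_p):
    """One pole/zero stage H(s) = (s + 1/tau_z)/(s + 1/tau_p), discretised from
    the partial-fraction form H(s) = 1 + (1/tau_z - 1/tau_p)/(s + 1/tau_p)."""
    a, b = 1.0 / tau_z, 1.0 / tau_p
    decay = math.exp(-b * dt)
    state, y = 0.0, []
    for xi in x:
        state = state * decay + xi * dt  # running convolution with exp(-b t)
        y.append(xi + (a - b) * state)
    return y

dt, tau_z, tau_p = 0.1, 50.0, 5.0  # ns; illustrative, not the network values
x = [math.exp(-n * dt / tau_z) for n in range(2000)]  # long input tail
y = pole_zero(x, dt, tau_z, tau_p)
print(x[1500], y[1500])  # the output tail has died out long before the input one
```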
Figure 6: Simplified schematic of the shaper circuit.
Figure 7: Schematic of the pole/zero compensation<br />
network.<br />
D. Measurements<br />
The prototype chip uses two positive polarity amplifiers<br />
followed by the shaper. The differential shaper output<br />
was equipped with two analog buffers and only one of<br />
them was read out during the tests.<br />
Figure 8 and 9 show the measured peaking time<br />
and noise versus detector capacitance for two different<br />
bias current configurations. For the high bias current a noise<br />
performance of ENC = 1290e- + 40e-/pF is achieved.<br />
The offset ENC is higher than that of the<br />
positive amplifier alone due to the presence of two<br />
amplifiers at the shaper input. The slope is consistent with<br />
that of the positive amplifier chip but higher than that of<br />
the negative amplifier due to the large bandwidth of the<br />
shaper.<br />
Figure 8: Measured (black symbols) and simulated (white<br />
symbols) peaking time of the shaper chip as a function of<br />
detector capacitance for a delta input. The triangles and<br />
dots correspond to two different bias current<br />
configurations.<br />
Figure 9: Measured equivalent noise charge of the shaper<br />
chip as a function of detector capacitance. The triangles<br />
and dots correspond to two different bias current<br />
configurations.<br />
A quasi-1/t current injector, realised with four<br />
parallel RC circuits, was used to simulate a detector<br />
pulse. Figure 10 shows the output pulse from the negative<br />
amplifier and displays a long tail. Figure 11 shows the<br />
output pulse from the shaper on the same time scale as<br />
Figure 10, indicating that the tail is to a large extent<br />
suppressed.<br />
Figure 12 shows the peaking time versus detector<br />
capacitance with the 1/t injector.<br />
The pulse width (at 20% amplitude) was found<br />
to be less than 30ns after shaping up to 200pF detector<br />
capacitance.<br />
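The idea behind the injector, that a few RC branches with staggered time constants reproduce a 1/t tail over a limited range, can be verified numerically. The time constants and weights below are illustrative, not the component values of the actual circuit:<br />

```python
import math

TAUS = [1.0, 4.0, 16.0, 64.0]  # four geometrically spaced time constants
W = math.log(4.0)  # log-spacing weight (from writing 1/t as an integral over exponentials)

def quasi_1_over_t(t):
    """Sum of four exponentials approximating 1/t for tau_min << t << tau_max."""
    return sum(W / tau * math.exp(-t / tau) for tau in TAUS)

for t in (2.0, 4.0, 8.0):
    print(t, quasi_1_over_t(t) * t)  # stays close to 1 inside the valid range
```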
Figure 10: Output of the negative polarity amplifier for a<br />
1/t current pulse input.
Figure 11: Shaper output for a 1/t current pulse input.<br />
Figure 12: Measured (black dots) and simulated (white<br />
dots) peaking time of the shaper chip as a function of<br />
detector capacitance for a 1/t current pulse input.<br />
V. CONCLUSIONS AND FUTURE PLANS<br />
Four prototype chips of the CARIOCA front-end have<br />
been produced during the last two years. Positive and<br />
negative amplifier and shaper circuits were tested and<br />
their characteristics satisfy the requirements for operation<br />
in LHCb. A final prototype, including a baseline<br />
restoration circuit, will be submitted in November 2001.<br />
The overall goal is to start chip production by the end of<br />
2002.<br />
VI. ACKNOWLEDGEMENT<br />
This work has been partially supported by Conselho<br />
Nacional de Desenvolvimento Científico e Tecnológico<br />
(CNPq-Brazil) and by the European Commission<br />
(contract CT1 * - CT94 - 0118).<br />
VII. REFERENCES<br />
[1] D.Moraes, “CARIOCA – a fast binary front-end<br />
implemented in 0.25um CMOS using a novel currentmode<br />
technique for the LHCb Muon detector”,<br />
presented at LEB2000.<br />
[2] R.A.Boie et al., “Signal shaping and tail cancellation<br />
for gas proportional detectors at high counting rates”,<br />
Nucl. Instr. and Meth. 192 (1982) 365.<br />
[3] M. Newcomer, “Progress in development of the<br />
ASDBLR ASIC for the ATLAS TRT”, presented at<br />
LEB1999.
TTCPR: A PMC Receiver for TTC<br />
John W. Dawson, David J. Francis*, William N. Haberichter,<br />
and James L. Schlereth<br />
Abstract<br />
The TTCPR receiver is a mezzanine card intended for use<br />
in distributing TTC information to Data Acquisition and Trigger<br />
Crates in the ATLAS Prototype Integration activities. An<br />
original prototype run of these cards was built for testbeam and<br />
integration studies, implemented in both the PMC and PCI form<br />
factors, using TTCrx chips from the previous production run.<br />
When the new TTCrx chips became available, the TTCPR was<br />
redesigned to take advantage of the availability and enhanced<br />
features of the new TTCRX(1), and a run of 20 PMC cards was<br />
manufactured, and has since been used in integration studies and<br />
the testbeam. The TTCPR uses the AMCC 5933(2) to manage<br />
the PCI port, an Altera 10K30A(3) to provide all the logic so<br />
that the functionality may be easily altered, and provides a 4K<br />
deep FIFO to retain TTC data for subsequent DMA through the<br />
PCI port. In addition to DMAs mastered by the Add-On<br />
logic, communication through PCI is accomplished via<br />
mailboxes, interrupts, and the pass-through feature of the 5933.<br />
An interface to the I2C bus of the TTCRX is provided so that<br />
internal registers may be accessed, and the card supports<br />
reinitialization of the TTCRX from PCI. Software has been<br />
developed to support operation of the TTCPR under both<br />
LynxOS and Linux.<br />
I. History of the TTCPR<br />
The TTCPR was developed in response to a need for<br />
TTC(4) information in the Data Acquisition from TileCal<br />
Modules in the ATLAS Test Beam. Specifically, it was desired<br />
to have EventID, Bunch Counter, and Trigger Type available<br />
from TTC in the data records. It was useful to have the TTC<br />
information available to processors in the Data Acquisition<br />
crates through PCI ports, and to have the data transferred to the<br />
processor's address space via an externally mastered DMA.<br />
Accordingly, the TTCPR was designed as a mezzanine card in<br />
the PMC form factor. The original cards utilized the older<br />
non-radhard version of the TTCRX, because the new radhard<br />
version was not available at that time.<br />
Argonne National Laboratory, Argonne, IL 60439 USA<br />
jwd@hep.anl.gov, wnh@hep.anl.gov, jls@hep.anl.gov<br />
*CERN, 1211 Geneva 23, Switzerland<br />
David.Francis@cern.ch<br />
When it became clear that the new TTCRX would be<br />
available soon and also that it would not be possible to obtain<br />
any more of the older TTCRX chips, the TTCPR was<br />
redesigned, and enhancements were added to take advantage of<br />
the features of the new TTCRX. This new TTCPR was<br />
produced and has been used successfully in data acquisition at<br />
the ATLAS Test Beam. The card has also been implemented in<br />
the PCI form factor. The TTCPR in the PMC version is shown<br />
in Figures 1 and 2.<br />
Figure 1. View of TTCPR.<br />
II. Architecture of the TTCPR<br />
A block diagram of the TTCPR is shown in Figure 2. The<br />
TTC information is received on a fiber by an optical receiver,<br />
amplified, and passed to the TTCRX. The TTCRX uses an<br />
on-board serial PROM for initialization. All external signals<br />
available to the user from the TTCRX are passed to an Altera<br />
10k30A FPGA, which also configures from an on-board serial<br />
prom. The FPGA has the ability to read/write a bank of FIFO<br />
which is 4 bytes wide and 8k deep, and for versatility the FPGA<br />
writes on a 16-bit bus and reads on a 32-bit bus. The interface<br />
to PCI is managed by an AMCC 5933 PCI controller; the<br />
hardware supports both Add-On and pass-through transfers, as<br />
well as Add-On bus mastering for DMAs. The hardware also<br />
allows interaction with the TTCRX registers via the I2C port,<br />
using pass-through transfers.<br />
Figure 2: Block Diagram TTCPR.<br />
III. Programming for the TTCPR<br />
The TTCPR can access all the TTC<br />
information received by the TTCRX. The operation of the<br />
TTCPR is governed by the configuration of the on-board FPGA<br />
and the user can choose any variation desired by configuring the<br />
FPGA. Configuration code for the FPGA is contained in the<br />
serial PROM on the card, and for our applications has been<br />
generated using the Altera MAX+PLUS II software package.<br />
Interaction between the host processor and the TTCPR<br />
utilizing the PCI port is through the AMCC 5933 PCI Bridge.<br />
The 5933 is initialized from the serial NVRAM, which must be<br />
programmed once as described in the AMCC 5933 Guide and<br />
contains the PCI vendor and device identification, the<br />
configuration space size and type, and other parameters. Data<br />
transfers through PCI may use the mailbox registers, the 5933<br />
FIFO's, or the pass-through data path. Software to support<br />
operation of the TTCPR in either a polled or interrupt driven<br />
mode has been developed in C++ at Argonne. This software<br />
has thus far been ported to LynxOS on PowerPC platforms, and<br />
to Linux on Intel platforms.<br />
IV. Operation of the TTCPR<br />
Our use of the TTCPR has been to bring TTC information<br />
to the data acquisition system for the TileCal setup in the<br />
ATLAS test beam. In this application the TTCPR initiates data<br />
transfer to a PCI target address specified by the user. The 5933<br />
utilizes Add-on initiated bus mastering to accomplish the<br />
transfer. The user supplies an event threshold count and a PCI<br />
target address which the Add-on logic in the FPGA stores until<br />
the requested number of events has been accumulated in the<br />
FIFO, and then initiates the transfer.<br />
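The event-threshold mechanism can be summarised with a toy software model; the class and callback names are ours (the real logic lives in the FPGA and the 5933), but the buffering behaviour is the one described above:<br />

```python
class TtcBuffer:
    """Toy model of the Add-On logic: buffer one record per L1Accept and hand
    the whole batch to a DMA callback once the event threshold is reached."""

    def __init__(self, threshold, start_dma):
        self.threshold = threshold
        self.start_dma = start_dma  # stands in for the Add-On mastered DMA
        self.fifo = []

    def l1_accept(self, event_id, bcid, trigger_type):
        self.fifo.append((event_id, bcid, trigger_type))
        if len(self.fifo) >= self.threshold:
            batch, self.fifo = self.fifo, []
            self.start_dma(batch)

transferred = []
buf = TtcBuffer(threshold=3, start_dma=transferred.append)
for ev in range(7):
    buf.l1_accept(ev, ev % 3564, 1)  # 3564 bunch slots per LHC orbit
print(len(transferred), len(buf.fifo))  # two batches of 3 sent, one event pending
```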
In our application the TTCPR buffers the EventID, BCID,<br />
and trigger type associated with each L1Accept. These results<br />
are made available to the PCI bus when the event threshold<br />
count is reached. Interaction between PCI and the Add-on bus<br />
is mediated by writing and reading the 5933 mailboxes and<br />
registers. Commands such as Reinitialize the TTCRX and Clear<br />
Busy, and data such as the Event Threshold and PCI target<br />
address for the Add-on mastered DMA are passed by mailboxes.<br />
Configuration information and transfer parameters, such as PCI<br />
transfer count and Add-on interrupt source are passed by<br />
registers.<br />
V. Summary<br />
The TTCPR has been developed by the Argonne group and<br />
used to provide TTC information to the Data Acquisition system in<br />
the TileCal setup in the ATLAS Test Beam. Our objective was<br />
to develop a module that could have general application in<br />
making available TTC information to processors in the LHC<br />
environment. Accordingly the module has access to all TTC<br />
information passed to the TTCRX, and may be adapted to<br />
transfer any of this information to PCI by reconfiguring the<br />
FPGA.<br />
VI. References<br />
[1]. TTCRX Reference Manual Version 3.2, J. Christiansen, A.<br />
Marchioro, P. Moreira, and T. Toifl, CERN-EP/MIC,<br />
February 2001<br />
[2]. AMCC PCI Products Data Book, Applied Micro Circuits<br />
Corporation, San Diego, CA.<br />
[3]. Altera Flex 10K Application Guide.<br />
[4]. http://ttc.web.cern.ch/TTC/intro.html<br />
VII. Acknowledgement<br />
The authors wish to acknowledge the advice and assistance<br />
of Paulo Moreira and Bruce Taylor of CERN.
A PROTOTYPE FAST MULTIPLICITY DISCRIMINATOR<br />
FOR ALICE L0 TRIGGER<br />
Leonid Efimov 1 , Vito Lenti 2 and Orlando Villalobos-Baillie 3 .<br />
FOR THE ALICE COLLABORATION<br />
1 JINR-Dubna, Russia<br />
2 Bari, Italy, Dipartimento di Fisica dell'Università and Sezione INFN<br />
3 Birmingham, United Kingdom, School of Physics and Astronomy, The University of Birmingham<br />
Presented by Vito Lenti<br />
Contact-person: Leonid.Efimov@cern.ch<br />
Abstract<br />
The design details and test results of a prototype<br />
Multiplicity Discriminator (MD) for the ALICE L0<br />
Trigger electronics are presented.<br />
The MD design is aimed at the earliest trigger decision<br />
founded on a fast multiplicity signal cut, in both options<br />
for the ALICE centrality detector: Micro Channel Plates<br />
or Cherenkov counters.<br />
The MD accepts detector signals with an amplitude<br />
range of ±2.5 V, a base duration of 1.8 ns and a<br />
rise time of 300-400 ps. The digitally controlled threshold<br />
settings give an accuracy better than 0.4% at the<br />
maximum amplitude of the accepted pulses. The MD<br />
internal latency of 15 ns allows for a decision every LHC<br />
bunch crossing period, even for the 40 MHz of p-p<br />
collisions.<br />
1. INTRODUCTION<br />
A functional scheme for the MD as an element of<br />
ALICE L0 Trigger [1,2] Front-End (F.E.) electronics is<br />
shown in Figure 1, for the proposed MCP based option.<br />
In the scheme shown in the figure [3,4,5], fast passive<br />
summator F.E. electronic units Σ [6], integrated in the<br />
detector, are used for linear summation of isochronous<br />
signals coming from pads belonging to an MCP disk<br />
sector. These signals, whose amplitude is proportional to<br />
the sampled multiplicity, are fed to the MD. The<br />
discriminator produces a multiplicity trigger PTM (Pre-<br />
Trigger on Multiplicity) according to programmable<br />
threshold codes delivered by a Source Interface Unit<br />
(SIU), through the ALICE Detector Data Link (DDL).<br />
Figure 1: General layout of the Multiplicity Discriminator (MD) within ALICE L0 Trigger Front-End<br />
Electronics (TTCR = the LHC Timing, Trigger and Control Receiver; TD = Time Discriminator; TDC = Time to<br />
Digital Converter; QDC = Charge to Digital Converter; PLD = Programmable Logic Device).<br />
Each i-th FEE card has to produce PTMi in conjunction<br />
with another trigger signal, PTTi (Pre-Trigger on<br />
Time of Flight), needed to provide a precise time mark for<br />
the measured collision time T0. Pipeline memories<br />
are needed to store the T0 and charge information, for<br />
each MCP sector, at the 40 MHz rate of the LHC clock.<br />
All the MD PTMi outputs, as well as all the PTTi, are<br />
collected together within fast programmable logic units (not<br />
shown), to compare the signals from different sectors and<br />
produce an L0 centrality trigger.<br />
2. THE MD CONCEPTUAL DESIGN<br />
AND SCHEMATICS<br />
The functional scheme for the prototype MD is<br />
presented in Figure 2. The approach used in the MD<br />
design was to implement a leading edge discriminator by<br />
a proper combination of a voltage comparator and a<br />
digital-to-analog converter. Inputs InA and InB for analog<br />
signals with positive and negative polarities have been<br />
foreseen.<br />
Figure 2: Functional scheme of the prototype MD.<br />
An Ultra-Fast (UF) ECL-compatible voltage comparator,<br />
the AD96685BQ from Analog Devices [7], has been selected<br />
as the basic MD component. This comparator has a<br />
typical propagation delay of 2.5 ns and a high-precision<br />
differential input stage with a common-mode signal range<br />
from -2.5 V to +5 V.<br />
To provide the required accuracy for multiplicity<br />
discrimination, within the dynamic range of the fast<br />
preamplifier 0 ÷ ±2.5V, an 8 bit DAC, with control<br />
settings of 10 mV threshold resolution, is sufficient [1].<br />
This DAC, realized on the AD558KD-DACPORT base<br />
[8], delivers an output voltage from 0 to ±2.56V with an<br />
accuracy of ± 1/4 LSB (1LSB = 0.39% of full scale), that<br />
corresponds to 0.1% of the device dynamic range. The<br />
ECL shaper is implemented using a high-speed Motorola<br />
MECL 10KH D-trigger [9] and a series of logical gates<br />
[10] to achieve the correct form and the width of the<br />
output signal needed. The NIM level converter provides a<br />
standard output signal of 16 mA for 50 Ohm load. The<br />
total latency of the PTM trigger output, referred to the<br />
leading edge of fast input signals, is about 15 ns; the<br />
PTM pulse width can be adjusted from a minimum of 10<br />
ns up to 20 ns, using a potentiometer.<br />
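The threshold arithmetic above can be made concrete with a short sketch. It assumes, as the text states, an 8-bit code with a 10 mV LSB and a 2.56 V full scale; the function name `threshold_mv` is ours, not part of the hardware.<br />

```python
# Hedged sketch of the DAC threshold mapping described above: an 8-bit
# code with a 10 mV LSB, against the AD558's quoted 2.56 V full scale.

LSB_MV = 10.0           # 10 mV threshold resolution per DAC step
FULL_SCALE_MV = 2560.0  # 0 to 2.56 V output range

def threshold_mv(code):
    """Map an 8-bit threshold code (0..255) to the DAC output in mV."""
    if not 0 <= code <= 255:
        raise ValueError("8-bit DAC code expected")
    return code * LSB_MV

# 1 LSB as a fraction of full scale: 10/2560 = 0.39%, as quoted in the text
print(round(LSB_MV / FULL_SCALE_MV * 100, 2))  # 0.39
print(threshold_mv(0x32))                      # hex switch code 0x32 -> 500.0 mV
```

The two front-panel hexadecimal switches mentioned below correspond to the two nibbles of such an 8-bit code.<br />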
Two operational amplifiers from NS [11] provide the<br />
comparator inputs with a differential unbalance of 5÷10<br />
mV, to prevent noise from triggering the circuit at zero<br />
threshold. The prototype MD board is mounted in a<br />
double-width NIM module. Two high frequency 50 Ohm<br />
coaxial BNC-type connectors, isolated from the module<br />
frame with an analog ground, and a miniature LEMO,<br />
with a standard grounded case, are placed on the front<br />
panel for analog inputs and logical output signals<br />
respectively. Two hexadecimal constant register switches,<br />
also mounted on the front panel, allow for the setting of<br />
binary threshold codes.<br />
3. IN-LAB TESTS OF THE MD<br />
PROTOTYPE<br />
Tests were performed on the MD in Bari and Dubna.<br />
The aim of these tests was to study the MD sensitivity to<br />
fast input signals and to obtain a calibration curve for the<br />
MD thresholds. S-curves were also measured to evaluate<br />
the width of the MD transition gap near threshold.<br />
A. STUDY OF THE MD SENSITIVITY TO INPUT<br />
SIGNALS<br />
It is essential, for a correct use of the MD, to study the<br />
correlation between preset and effective thresholds. In<br />
fact, a minimum input signal amplitude Ueff must be<br />
applied in order for the comparator to be triggered at the<br />
preset DAC reference UDAC. It is known that, when<br />
handling very fast and low-amplitude signals, the shorter<br />
and the smaller the input pulses, the bigger the<br />
difference between the applied and the effective threshold<br />
values [16]. This can be explained by a certain minimum<br />
of effective charge Qeff, which must be accumulated at<br />
the MD input capacitance Cin to reach UDAC, plus some<br />
additional charge Qtr needed to trigger the comparator:<br />
Qeff = Cin UDAC + Qtr .    (1)<br />
Pulses of smaller integrated area require more and<br />
more extra-charge compensation and for the MCP, which<br />
produces signals of almost fixed width, this compensation<br />
can be achieved only by increasing the pulse amplitude.<br />
In order to obtain a calibration curve for the<br />
comparator thresholds and to study the sensitivity of the<br />
MD to input signals, a series of measurements has been<br />
performed using a LeCroy 9211 [12] programmable pulse<br />
generator. The pulse generator time parameters were<br />
chosen and fixed such as to simulate MCP output signals.<br />
So we used the minimum available value of 0.9 ns for<br />
leading and trailing edges (Te), and a pulse width of 2.5<br />
ns base (Tb), at a selected repetition rate of 40 MHz.<br />
The fine, highly stabilized tuning of the generated<br />
pulse amplitudes, with 5 mV programmable steps, made<br />
it feasible to investigate effective thresholds precisely<br />
over the full prototype MD linear range. Here we present<br />
some results of these measurements, corresponding to<br />
DAC voltage values in the reduced range 0 to 500 mV,<br />
with 50 mV steps.
In Figure 3 we present, as a function of the DAC<br />
threshold values, the effective voltage thresholds<br />
(squares) obtained and the absolute difference between<br />
effective and DAC voltage thresholds (triangles).<br />
Figure 3: Effective vs. DAC thresholds for 500 mV<br />
sweep of signals with 1.6 ns FWHM and 0.9 ns edges.<br />
The percentage of this difference, with respect to the<br />
DAC values, is given in figure 4, where an increase for<br />
signals of smaller amplitude is clearly observed, going<br />
from 31% at a 500 mV DAC threshold up to<br />
60% at 50 mV.<br />
Figure 4: Relative excess of effective over DAC thresholds<br />
for a 500 mV sweep of signals with 1.6 ns<br />
FWHM and 0.9 ns edges.<br />
An estimation of the sensitivity of the electronic<br />
scheme proposed for the MD prototype, would involve a<br />
measurement or calculation of the effective charge Q eff<br />
according to (1). This is hard to perform directly but<br />
can be roughly achieved using the measured data on U eff<br />
and the known time parameters settings of the LeCroy<br />
pulse generator.<br />
In fact, by fitting the LeCroy 9211 output pulses with<br />
an isosceles trapezium-like shape (Figure 5), it is possible<br />
to calculate the full electric charge Qpef carried by every<br />
pulse with amplitude Ueff, time base Tb and equal Te<br />
edges, as an integral of the pulse current ip(t):<br />
Qeff ~ Qpef = ∫₀^Tb ip(t) dt = (Tb − Te) Ueff(UDAC) / Rs ,<br />
where Rs = 150 Ohm is the total equivalent schematic<br />
resistance limiting the current charging Cin.<br />
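A quick numeric check of the trapezium-pulse charge estimate, using the generator settings given in the text (Tb = 2.5 ns, Te = 0.9 ns) and Rs = 150 Ohm; the helper name `pulse_charge_pc` is ours.<br />

```python
# Numeric check of the trapezium-pulse charge estimate above:
# Q ≈ (Tb - Te) * Ueff / Rs for an isosceles trapezoid of base Tb,
# edges Te and amplitude Ueff.

TB_NS, TE_NS, RS_OHM = 2.5, 0.9, 150.0

def pulse_charge_pc(ueff_mv):
    """Charge in pC of a trapezoid pulse of amplitude ueff_mv (in mV)."""
    # Units: ns * mV / Ohm = 1e-9 s * 1e-3 V / Ohm = 1e-12 C = pC
    return (TB_NS - TE_NS) * ueff_mv / RS_OHM

# e.g. a 500 mV effective amplitude carries about 5.33 pC
print(round(pulse_charge_pc(500.0), 2))  # 5.33
```

Charges of a few pC for effective amplitudes in the hundreds of mV are consistent with the fitted values shown in Figure 6.<br />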
Figure 5: Approximate (isosceles trapezium) shape of the<br />
generated pulse, with FWHM = 1.6 ns, indicating the<br />
quantities Te = 0.9 ns and Tb = 2.5 ns.<br />
The calculated effective charge values versus DAC<br />
voltage thresholds are shown in Figure 6. The linear<br />
Q(U) dependence is evident from the fit presented<br />
in the figure, corresponding to the analytical expression<br />
Q [pC] = 0.0136 U [mV] + 0.2557.<br />
Figure 6: Effective charge vs. DAC thresholds<br />
for 500 mV sweep of signals with fixed 1.6 ns FWHM<br />
and 0.9 ns equal edges.<br />
A comparison of this formula with equation (1)<br />
suggests some considerations:<br />
- the input capacitance Cin plays an especially important<br />
role, because the lower the Cin value, the smaller Qeff is for<br />
a given UDAC and the faster this value can be reached;<br />
- for our prototype MD, Cin = 0.0136 pC/mV = 13.6 pF,<br />
and, keeping in mind Cin = Cc + Cm, where Cc = 2 pF is the<br />
comparator input capacitance, a parasitic capacitance of<br />
the MD mounting Cm of 11.6 pF must be assumed;<br />
- a Qtr value of about 0.26 pC can be inferred from<br />
(1). This value should be regarded as the minimum pulse<br />
charge above the DAC threshold needed to trigger the<br />
scheme, i.e. the MD sensitivity.<br />
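The extraction of Cin and Qtr from the fit can be illustrated with a plain least-squares sketch. The data points below are generated from the paper's own fitted relation, purely for illustration; `linear_fit` is our helper, not part of the analysis code.<br />

```python
# Sketch of how Cin and Qtr follow from the fit Q = 0.0136*U + 0.2557
# (Q in pC, U in mV): the slope is Cin in pC/mV, the intercept is Qtr.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Points generated from the paper's fitted relation, for illustration only
u_dac = [50.0 * i for i in range(11)]            # 0..500 mV in 50 mV steps
q_eff = [0.0136 * u + 0.2557 for u in u_dac]

slope, intercept = linear_fit(u_dac, q_eff)
c_in_pf = slope * 1000.0    # pC/mV -> pF
print(round(c_in_pf, 1), round(intercept, 2))    # 13.6 0.26
```

The slope converts directly to the 13.6 pF input capacitance and the intercept to the ~0.26 pC trigger charge quoted above.<br />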
B. S CURVES<br />
In order to test the MD performance, S-curves were<br />
produced, at 3 different threshold values, to evaluate the<br />
width of the MD transition gap, near threshold.<br />
Extremely precise, fixed width pulses (3.5 ns FWHM)<br />
were sent simultaneously to the MD and to a reference<br />
LED-type discriminator, set at its minimum threshold<br />
over noise (20 mV). The pulse height of the generated<br />
signal was varied in steps of 2.5 mV.
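The S-curve procedure can be modelled with a short sketch: sweeping the amplitude in 2.5 mV steps past a fixed threshold, Gaussian threshold noise turns an ideal step response into the measured S shape. The noise sigma here is an arbitrary assumption for illustration, as are the function names.<br />

```python
# Hedged model of an S-curve scan: expected acceptance fraction versus
# input amplitude for a fixed threshold, assuming Gaussian threshold
# noise of width sigma_mv (value chosen arbitrarily here).

import math

def s_curve_point(amplitude_mv, threshold_mv, sigma_mv=1.0):
    """Probability that a pulse of this amplitude exceeds the noisy threshold."""
    z = (amplitude_mv - threshold_mv) / (sigma_mv * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

threshold = 360.0
for a in [355.0 + 2.5 * i for i in range(7)]:    # 355..370 mV in 2.5 mV steps
    print(a, round(100.0 * s_curve_point(a, threshold), 1))
```

The width of the region where the acceptance climbs from ~0% to ~100% is the "transition gap" evaluated below.<br />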
The results are shown in figure 7. The MD shows a<br />
good threshold accuracy, reaching ≈ 100% efficiency in<br />
short threshold ranges: ≈ 3 mV at 360 mV and less than<br />
10 mV at 1080 mV threshold. These plots give, for all 3<br />
settings, an almost constant ratio of (threshold<br />
gap / threshold setting) not exceeding 1%, and<br />
uncertainties which are, in any case, smaller than the<br />
minimum setting step (10 mV).<br />
Figure 7: S curves for 360, 720 and 1080 mV threshold settings.<br />
4. IN-BEAM TESTS OF THE MD<br />
PROTOTYPE<br />
A first test of the prototype MD was performed in the<br />
CERN experimental area PS/T10, with muon beams of<br />
7.5 GeV/c. This test was planned to study the time<br />
resolution and the efficiency of different micro channel<br />
and micro sphere plate-based vacuum sectors for the<br />
ALICE T0 / Centrality detector [13]. Several modules of<br />
fast electronics, including high-speed amplifiers and<br />
discriminators, were also tested.<br />
The MD module was plugged into a front-end electronics<br />
rack used for time-of-flight measurements, during 10<br />
runs, in place of specialized fast timing discriminators,<br />
such as Constant Fraction, Double Threshold [14] and<br />
Pico-Timing [15] type schemes, at a distance of about<br />
5 m from the tested detectors. The experimental setup<br />
for a single channel of the electronics is shown in Figure<br />
8. Various combinations of new specialized ultra-fast<br />
SMD devices [6] with different gains, in the range 7÷30,<br />
were tested to search for the best signal/noise ratio. A<br />
LeCroy 2228A TDC of 50 ps LSB was used for Time-to-<br />
Digital conversion.<br />
Figure 8: Experimental arrangement of the prototype MD tests<br />
with PS CERN / T10 setup facilities (MCP sector, ~5 m cable run,<br />
MD, scintillator counters, delayed start/stop signals, TDC).<br />
The experimental aim of this test was two-fold:<br />
a) to simulate a study of multiplicity / centrality versus<br />
the prototype MD threshold;<br />
b) to test the timing properties of the prototype MD.<br />
In figure 9 we show an experimental plot giving the<br />
ratio D/Dmax vs. different values of relative MD threshold<br />
settings UDAC/UDAC(max), where D = NTDC-stop/NTDC-start.<br />
Figure 9: Relative TDC start/stop counts ratio vs. the<br />
MD relative threshold setting.<br />
The fitted curve of figure 9 reproduces well the shape<br />
of a single particle distribution (figure 10), showing the<br />
correct operation of the MD with short and fast pulses.<br />
Figure 10: Single and multiparticle distributions<br />
(p-Pb and Pb-Pb).<br />
Another experimental result is presented in figure 11,<br />
where a TDC histogram of 5115 accepted events is<br />
shown. The fit gives a resolution of about 120 ps, a<br />
rather good result for a discriminator not optimized for<br />
timing applications.<br />
Figure 11: Example of the distribution of events over TDC<br />
channels from the prototype MD tests at CERN<br />
PS (1 TDC channel = 50 ps).<br />
5. CONCLUSIONS<br />
A prototype amplitude discriminator for the ALICE L0<br />
multiplicity trigger has been designed, built and<br />
tested. The discriminator was designed to handle short<br />
nanosecond signals coming from the ALICE<br />
T0/Centrality detector, based on Micro Channel Plates.<br />
Commercially available, inexpensive and fast<br />
components have been used to implement the MD<br />
prototype. It features an input signal range from 0 to<br />
±2.5 V, a programmable threshold control with 8-bit<br />
resolution, and an output signal latency of 15 ns.<br />
The minimum input signal charge needed to trigger the<br />
comparator over the DAC threshold has been found to be<br />
about 0.26 pC.<br />
Experimental tests of the prototype multiplicity<br />
discriminator, using an MCP-based detector, have been<br />
carried out at the CERN PS beam facilities. When<br />
applying the discriminator for timing in MIP time-of-flight<br />
measurements, a resolution of ~120 ps has been<br />
obtained. The MD was also tested by studying the<br />
response to real MCP signals as a function of the<br />
discriminator threshold.<br />
Further development of the multiplicity discriminator,<br />
in terms of schematics and PCB design, could still be<br />
made in order to improve parameters and overall<br />
performance, e.g. to reduce the input capacitance, currently<br />
evaluated at 13.6 pF, and to integrate the unit with other<br />
elements of the ALICE L0 Trigger electronics.<br />
REFERENCES<br />
1. N.Ahmad et al. ALICE Technical Proposal CERN /<br />
LHCC / 95-71, LHCC / P3,<br />
15 Dec. 1995, chapters 7, 9, 10.<br />
2. H.Beker et al. The ALICE Trigger System.<br />
Proceedings of the Second Workshop on Electronics for<br />
LHC Experiments, Balatonfüred, Sept. 23-27, 1996,<br />
CERN / LHCC / 96-39, 21 October 1996, p. 170-174.<br />
3. L.G.Efimov and O.Villalobos-Ballie. Design<br />
Considerations for Fast Pipelined Front-End Trigger<br />
Electronics.<br />
ALICE / 95-40, Internal Note/ Trigger, Nov.17, 1995.<br />
4. L.G.Efimov et al. Fast ALICE L0 Trigger.<br />
Proceedings of the Second Workshop on Electronics for<br />
LHC Experiments, Balatonfüred, Sept. 23-27, 1996,<br />
CERN / LHCC / 96-39, 21 October 1996, p. 166-169.<br />
5.L.G.Efimov et al. Fast Front-End L0 Trigger<br />
Electronics for ALICE FMD-MCP.<br />
Tests and Performance. Proceedings of the Third<br />
Workshop on Electronics for LHC Experiments,. London,<br />
Sept. 22-26, 1997,<br />
CERN / LHCC / 97-60, 21 October 1997, p. 359-363.<br />
6. A.E.Antropov et al. FMD-MCP Forward Multiplicity<br />
Detector based on MicroChannel Plates.<br />
Preliminary Technical Design Report, ISTC Project #345-<br />
Biennial Report, St.Petersburg State University,<br />
St.Petersburg, Feb. 1999.<br />
7. http://www.analog.com/pdf/96685_87.pdf<br />
8. http://www.analog.com/pdf/ad558.pdf<br />
9. http://onsemi.com/pub/Collateral/mc10h131rev6.pdf<br />
10. http://onsemi.com/pub/Collateral/mc10h101rev6.pdf<br />
11. http://www.national.com/ds/LF/LF351.pdf<br />
12. http://www.lecroy.com/Archive/TechData/<br />
9200_data/9200.html<br />
13. G.A.Feofilov et al. Results from In-Beam Tests of the<br />
MCP-based Vacuum Sector Prototype for ALICE<br />
T0/Centrality Detector. Proceedings of the Vienna<br />
Conference on Instrumentation VCI-2001, February 2001<br />
(to be published).<br />
14. C.Neyer. A Discriminator Chip for Time of Flight<br />
Measurements in ALICE. Proceedings of the Third<br />
Workshop on Electronics for LHC Experiments, London,<br />
Sept. 22-26, 1997,<br />
CERN / LHCC / 97-60, 21 October 1997, p. 238-241.<br />
15. http://www.ortec-online.com/electronics/disc/<br />
9307.htm<br />
16. E.A.Meleshko Nanosekundnaya elektronika v<br />
eksperimentalnoy fizike (in Russian). -<br />
Moscow, Energoatomizdat, 1987, p.57-61.
TIM ( TTC Interface Module ) for ATLAS SCT & PIXEL Read Out Electronics<br />
Jonathan Butterworth ( email : jmb@hep.ucl.ac.uk )<br />
Dominic Hayes [*] ( email : Dominic.Hayes@ra.gsi.gov.uk)<br />
John Lane ( email : jbl@hep.ucl.ac.uk )<br />
Martin Postranecky ( email : mp@hep.ucl.ac.uk )<br />
Matthew Warren ( email : warren@hep.ucl.ac.uk )<br />
University College London, Department of Physics and Astronomy, London, WC1E 6BT, Great Britain<br />
[*] now at Radiocommunication Agency, London, E14 9SX, Great Britain<br />
Abstract<br />
The design, functionality, hardware and firmware<br />
description, and <strong>preliminary</strong> results of the ROD ( Read Out<br />
Driver ) System Tests of the TIM ( TTC Interface Module )<br />
are described.<br />
The TIM is the standard SCT and PIXEL detector<br />
interface module to the ATLAS Level-1 Trigger, using the<br />
LHC-standard TTC ( Timing, Trigger and Control ) system.<br />
TIM was designed and built during 1999 and 2000 and<br />
two prototypes have been in use since then ( Fig. 1 ). More<br />
modules are being built this year to allow for more tests of the<br />
ROD system at different sites around the world.<br />
Fig. 1 First TIM-0 module<br />
I. INTRODUCTION<br />
The SCT ( or PIXEL ) interface with ATLAS Level 1<br />
receives the signals through the Timing, Trigger, and Control<br />
( TTC ) system [ 1 ] and returns the SCT ( or PIXEL ) Busy<br />
signal to the Central Trigger Processor ( CTP ). It interfaces<br />
with the SCT ( or PIXEL ) off-detector electronics [ 2 ], in<br />
particular with the Read-Out Driver ( ROD ), and is known as<br />
the SCT ( or PIXEL ) TTC system .<br />
The SCT ( or PIXEL ) TTC system consists of the<br />
standard TTC system distributing the signals to a custom TTC<br />
Interface Module ( TIM ) in each crate of RODs.<br />
This paper and the accompanying diagrams describe<br />
some hardware details of the TIM and their functionality. This<br />
paper should be read in conjunction with the original TIM<br />
paper presented in 1999 [ 3 ] and other specification<br />
documents [ 4 ], [ 5 ] , [ 6 ], [ 7 ] and [ 8 ].<br />
II. TIM<br />
A. Functionality<br />
Fig. 2 : TIM Functional Model (TTC input received via the TTCrx and<br />
the TTC interface; an internal Timing, Trigger and Control generator<br />
with external trigger input; an event queue and serialiser for L1ID,<br />
BCID and trigger type; a TTC sequencer with VME download; backplane<br />
mapping of the BC clocks and TTC bus; a masked OR of the 16 ROD Busys<br />
giving the Crate Busy output).<br />
The diagram ( Fig. 2 above ) shows the functional model<br />
of the TIM, and illustrates the principal functions of the
current TIM-0 modules :<br />
• To transmit the fast commands and event ID from the<br />
TTC system to the RODs with minimum latency. The<br />
clock is first transmitted to the Back-Of-Crate optocards<br />
( BOC ) , from where it is passed to the RODs<br />
• To pass the masked Busy from the RODs to the CTP in<br />
order to stop it sending triggers<br />
• To generate and send stand-alone clock, fast commands<br />
and event ID to the RODs under control of the local<br />
processor<br />
In addition to these main functions, the TIM has also the<br />
following capabilities :<br />
• The TIM has programmable timing adjustments and<br />
control functions<br />
• The TIM has a VME slave interface to give the local<br />
processor read and write access to its registers [ 9 ]<br />
• The TIM is configured by the local processor setting up<br />
TIM’s registers. They can be inspected by the local<br />
processor<br />
The TTC information, required by the RODs and by the<br />
SCT or PIXEL FE ( Front End ) electronics, is the following :<br />
Clock : BC Bunch Crossing clock<br />
Fast command : L1A Level-1 Accept<br />
ECR Event Counter Reset<br />
BCR Bunch Counter Reset<br />
CAL Calibrate signal<br />
Event ID : L1ID 24-bit Level-1 trigger number<br />
BCID 12-bit Bunch Crossing number<br />
TTID 8-bit Trigger Type ( + 2 spare bits )<br />
The TIM outputs the above information onto the<br />
backplane of a ROD crate with the appropriate timing. The<br />
event ID is transmitted with a serial protocol and so a FIFO<br />
( First In First Out ) buffer is required in case of rapid triggers.<br />
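Why a FIFO is needed can be sketched in a few lines: triggers may arrive faster than the serial event-ID words can be shifted out, so pending IDs are queued. This is our own model, not the TIM firmware; the packing layout in `pack_event_id` is an illustrative assumption, though the field widths (24-bit L1ID, 12-bit BCID, 8-bit trigger type) follow the text.<br />

```python
# Sketch of the serialised event-ID path: rapid triggers enqueue packed
# ID words in a FIFO, which the serialiser drains at its own pace.

from collections import deque

def pack_event_id(l1id, bcid, ttype):
    """Pack the event ID fields into one integer word (44 bits used)."""
    assert 0 <= l1id < 2**24 and 0 <= bcid < 2**12 and 0 <= ttype < 2**8
    return (l1id << 20) | (bcid << 8) | ttype

fifo = deque()
fifo.append(pack_event_id(l1id=5, bcid=1000, ttype=1))  # rapid triggers
fifo.append(pack_event_id(l1id=6, bcid=1003, ttype=1))  # queue up here

word = fifo.popleft()   # serialiser pops one word per transmission slot
print(word >> 20, (word >> 8) & 0xFFF, word & 0xFF)  # 5 1000 1
```

The queue depth bounds how many closely spaced triggers can be absorbed before the Busy mechanism must throttle the trigger rate.<br />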
An additional FER ( Front End Reset ) signal, which may<br />
be required by the SCT FE electronics, can also be generated,<br />
either by the SCT-TTC or by the TIM. At present, it is<br />
proposed that FER is carried out by the ECR.<br />
The optical TTC signals are received by a receiver<br />
section containing a standard TTCrx receiver chip, which<br />
decodes the TTC information into electrical form.<br />
The TIM can also generate all the above information<br />
stand-alone at the request of the local processor. It can also be<br />
connected to another TIM for stand-alone multi-crate<br />
operation for system tests in the absence of TTC signals.<br />
The TIM produces a masked OR of the ROD Busy<br />
signals in each crate and outputs the overall crate Busy to a<br />
separate BUSY module. A basic ROD BUSYs monitoring is<br />
also available on TIM. It may be possible to implement more<br />
sophisticated monitoring functionality, on an additional FPGA<br />
device on each TIM, if this proves desirable.<br />
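The masked OR of the ROD Busy lines reduces to a single bitwise expression. The sketch below is illustrative only; the register and bit layout are assumptions, not TIM's actual register map.<br />

```python
# Minimal sketch of the masked OR of ROD Busy lines described above.

def crate_busy(rod_busy_bits, mask):
    """OR together the 16 ROD Busy bits that are enabled in `mask`.

    Both arguments are 16-bit integers; a 1 in `mask` enables that
    ROD's Busy line to contribute to the crate Busy output.
    """
    return (rod_busy_bits & mask) != 0

# ROD 3 is busy but masked out; ROD 7 is busy and enabled -> crate busy
busy = (1 << 3) | (1 << 7)
print(crate_busy(busy, mask=0xFFF7))  # True
```

Masking lets a faulty or absent ROD be excluded without stopping triggers for the whole crate.<br />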
B. Hardware Implementation<br />
The TIM has been designed [ 10 ] as a 9U, single width,<br />
VME64x module, with a standard VME slave interface.<br />
A24/D16 or A32/D16 access is selectable, with the base<br />
address A16 – A23 ( or A16 - A31 ) being either preset as<br />
required, or set by the geographical address of the TIM slot in<br />
each ROD crate. Full geographical addressing ( GA ) and<br />
interrupts ( eg. for clock failure ) are available if required.<br />
On the TIM module, a combination of FastTTL, ECL,<br />
PECL and LV BiCMOS devices is used, requiring +5V, +3V3<br />
and -5V2 ( or +/- 12V to produce this –5V2 ) voltage supplies.<br />
The TTC interface is based on the standard TTCrx<br />
receiver chip, together with the associated PIN diode and<br />
preamplifier developed by the RD12 group at CERN, as<br />
described elsewhere [ 11 ]. This provides the BC clock and all<br />
the signals as listed in section A above. On the TIM modules,<br />
the TTCrx mezzanine test board ( CERN Ref: ECP 680-1102-<br />
630A ) [ 12 ] is used to allow an easy replacement if required.<br />
The latest version utilizes the rad-hard TTCrx3 DMILL<br />
BGA144 version of the receiver chip.<br />
The BC clock destined for the BOCs and RODs, with the<br />
timing adjusted on the TTCrx, is passed via differential PECL<br />
drivers [ 13 ] directly onto the point-to-point parallel<br />
impedance-matched backplane tracks. These are designed to<br />
be of identical length for all the slots in each crate to provide a<br />
synchronised timing marker. All the fast commands are also<br />
clocked directly, without any local delay, onto the backplane<br />
to minimise the TIM latency budget [ 14 ].<br />
Most of the logic circuitry required by TIM is contained<br />
on a number of CPLDs ( Complex Programmable Logic<br />
Device ), with only the programmable delays and the<br />
buffering of the various inputs and outputs being done by<br />
separate integrated circuits. This makes TIM very flexible, as<br />
it allows for possible changes to the functionality by<br />
reconfiguring the firmware of CPLDs, while keeping the<br />
inputs and outputs fixed ( Fig. 3 below ).<br />
This family of devices was chosen during the first design<br />
stage of the prototype TIM modules in 1998 because of<br />
familiarity with their use and capabilities, thus minimizing the<br />
time required for completion of the design. It is proposed to<br />
use VHDL to transfer the design to one or more FPGA<br />
devices for the final production versions of TIM to reduce<br />
costs.<br />
In total, ten of the Lattice ( formerly Vantis / AMD )<br />
Mach4 and Mach5 devices are used on the TIM-0. A<br />
proprietary Vantis / AMD compiler has been used to design<br />
the on-chip circuitry; the devices are all in-circuit programmable<br />
and erasable using Lattice software via a fully-buffered<br />
JTAG interface connected to a PC parallel port. A full,<br />
synchronously-clocked simulation of each CPLD circuit can<br />
be performed to verify the design, and a limited timing<br />
verification is also possible, including a report of all<br />
propagation times through the CPLD.
Fig. 3 CPLDs versus Discrete Components (front-panel NIM and ECL<br />
inputs and outputs, LEDs, TTC reset and trigger switches, clock delays,<br />
stand-alone and ECR/FER oscillators, sequencer and sink RAMs, ID and TT<br />
FIFOs, and the VME and TTC interfaces allocated across PLD1-PLD9;<br />
backplane PECL clock outputs, ROD Busy inputs and TTC buses).<br />
Fig. 4 CPLDs Arrangement on TIM-0<br />
The CPLD devices used on TIM-0 ( Fig. 4 above ) and their<br />
allocation to the individual functional blocks [ 15 ] are shown<br />
below, together with the percentages of their I/O pins<br />
( including 4x8-bit spare buses ) and macrocells utilization :<br />
PLD Main Function Device I/O pins (%) Macrocells (%)<br />
PLD1 VME Interface M5-384/184-7HC 88 18<br />
PLD2 Stand-alone A M5-512/256-7AC 59 70<br />
PLD3 Stand-alone B M5-384/184-7HC 64 38<br />
PLD4a L1ID M5-384/184-7HC 61 21<br />
PLD4b BCID & TTID M5-384/184-7HC 67 23<br />
PLD5 Serialiser & FiFos M5-384/184-7HC 79 29<br />
PLD6 Output Mapping M4-256/128-7YC 88 23<br />
PLD7 Sequencer & Sink M5-384/184-7HC 85 46<br />
PLD8 ROD Busy M5-384/184-7HC 57 15<br />
PLD9 TTC Interface M5-384/184-7HC 84 27<br />
It can be seen that, apart from PLD-2, all devices use less<br />
than 50% of their macrocell capacity, and thus are easily<br />
reprogrammable. The PLD-2 circuit has been well tested on<br />
CLOAC modules [ 16 ] and is not likely to require any<br />
significant change.<br />
As mentioned before, the TIM uses the TTC information<br />
in the “Run” mode, or can operate stand-alone in the “SA”<br />
( Stand Alone ) mode. Detailed flowcharts [ 17 ] show the<br />
differences of the sources and of the flow of the fast<br />
commands [ 18 ] and event ID information between the<br />
various CPLDs on TIM, finally producing the same output to<br />
the RODs via the backplane.<br />
Another diagram ( Fig. 5 below ) shows the flow, the<br />
distribution and the programmable delays of the clocks in both<br />
the “Run” and the “SA” modes.<br />
Fig. 5 : TIM Clocks Flow and Delays<br />
It is important to note that in the “Run” mode [ 19 ] the<br />
priority is given to passing the BC clock and commands to the<br />
RODs, in their correct timing relationship, with the absolute<br />
minimum of delay to reduce the latency.<br />
In the “SA” mode, both the clock and the commands can<br />
arrive from a variety of sources [ 20 ]. The clock can be either<br />
generated on-board using an 80.16 MHz crystal oscillator, or<br />
arrive from external sources in either NIM or differential ECL<br />
standards. Similarly, the fast commands can be generated on<br />
the command of the local processor, or automatically by the<br />
TIM under local processor control. The fast commands can<br />
also be input from external sources in either NIM or<br />
differential ECL [ 21 ]. Thus, any of these internally or<br />
externally generated commands must be synchronised to<br />
whichever clock is being used at the time, to provide the<br />
correctly timed outputs.<br />
In addition, a ‘sequencer’, using 8x32k RAM, is<br />
provided to allow long sequences of commands and serial ID<br />
data to be written in by the local processor and used for<br />
testing the FE and off-detector electronics. A ‘sink’ ( receiver<br />
RAM ) of the same size is also provided to facilitate off-line<br />
checking of commands and ID data sent to the RODs [ 22 ].<br />
All the backplane signals are also mirrored as differential<br />
ECL outputs on the front panel to allow TIM interconnection.<br />
Two prototype TIM-0 modules were designed and<br />
manufactured during 1999 - 2000 and have been continuously<br />
tested since then, in the stand-alone mode, first at UCL and<br />
later also at Cambridge. During May and June 2001, a TIM-0<br />
module was used in the first SCT ROD system test at<br />
Cambridge. Meanwhile, the TTC interface has also been<br />
tested at UCL using a TTC optical test system incorporating<br />
TTCvi and TTCvx modules [ 1 ] .<br />
III. SYSTEM TEST<br />
The SCT off-detector electronics is based on 9U-sized<br />
modules in a VME64x crate [ 23 ]. There will be one TIM,<br />
one RCC ( Rod Crate Controller ) and up to 16 RODs and<br />
BOC modules in each ROD crate. Samples of the<br />
purpose-designed ROD crates have been manufactured by Wiener.<br />
They have been equipped with the custom-designed J3<br />
backplane [ 24 ] providing the complex inter-connection<br />
between TIM and the RODs and BOCs [ 25 ].<br />
The first ROD system test using prototype RCC,<br />
ROD, BOC and TIM modules took place in Cambridge in<br />
May and June 2001. This successfully demonstrated the<br />
feasibility of the whole system design and showed that the TIM<br />
correctly sources and drives all the timing, trigger and<br />
command information to the RODs and BOCs. The timing and<br />
signal error rates have also been checked using the stand-alone<br />
capabilities of the TIM.<br />
Based on the results of this first system test, two<br />
more updated TIM-1 modules have been built and are now<br />
starting to be tested at UCL. Together, these four TIM<br />
prototype modules will enable further ROD system tests to<br />
take place later this year in Cambridge and in the USA, to be<br />
followed by the first system and beam tests at CERN in 2002.<br />
Currently, the design of the final, production version of the<br />
TIM is beginning at UCL with the aim of starting the<br />
manufacture in the second half of 2002.<br />
IV. ACKNOWLEDGEMENTS<br />
We would like to thank Professor Tegid W. Jones for<br />
his continuous support of our work in the ATLAS<br />
collaboration. We also wish to thank Janet Fraser who helped<br />
to produce the diagrams used in this paper.<br />
V. REFERENCES<br />
[ 1 ] TTC Home Page :<br />
http://ttc.web.cern.ch/TTC/intro.html<br />
[ 2 ] SCT Off-Detector Electronics Schematics :<br />
http://www.hep.ucl.ac.uk/~jbl/SCT/archive/SCT_system_UCI.pdf<br />
[ 3 ] TIM and CLOAC – LEB1999 Paper :<br />
http://www.hep.ucl.ac.uk/~mp/TIM+CLOAC_paper1.pdf<br />
[ 4 ] TIM Overview :<br />
http://www.hep.ucl.ac.uk/atlas/sct/tim/TIM_overview.html<br />
[ 5 ] TIM Functional Requirements :<br />
http://www.hep.ucl.ac.uk/atlas/sct/tim/TIM_requirements.html<br />
[ 6 ] TIM-BOC Interface Specification :<br />
http://www.hep.ucl.ac.uk/atlas/sct/tim/TIM_interface_BOC.html<br />
[ 7 ] TIM-ROD Interface Specification :<br />
http://www.hep.ucl.ac.uk/atlas/sct/tim/TIM_interface_ROD.html<br />
[ 8 ] TIM-RCC Interface Specification :<br />
http://www.hep.ucl.ac.uk/atlas/sct/tim/TIM_interface_RCC.html<br />
[ 9 ] TIM Registers :<br />
http://www.hep.ucl.ac.uk/atlas/sct/tim/TIM_registers.html<br />
[ 10 ] TIM Implementation Model :<br />
http://www.hep.ucl.ac.uk/atlas/sct/tim/TIM_model.html<br />
[ 11 ] J. Christiansen, A. Marchioro, P. Moreira, T.Toifl<br />
“TTCrx Reference Manual : A Timing , Trigger and<br />
Control Distribution Receiver ASIC for LHC<br />
Detectors”, Version 3.2, February 2001<br />
http://ttc.web.cern.ch/TTC/TTCrx_manual3.2.pdf<br />
[ 12 ] TTCrx Mezzanine Board Schematics :<br />
http://ttc.web.cern.ch/TTC/TTCrxMezzanine2001.04.19.pdf<br />
[ 13 ] TIM Backplane Interfaces :<br />
http://www.hep.ucl.ac.uk/atlas/sct/tim/TIM_backplane.pdf<br />
[ 14 ] SCT Latency Budget :<br />
http://www.hep.ucl.ac.uk/~jbl/SCT/SCT_latency.html<br />
[ 15 ] TIM CPLDs Block Diagrams :<br />
http://www.hep.ucl.ac.uk/atlas/sct/tim/TIM_plds.pdf<br />
[ 16 ] CLOAC Module Description :<br />
http://www.hep.ucl.ac.uk/~jbl/SCT/CLOAC_welcome.html<br />
[ 17 ] TIM Overall Block diagrams :<br />
http://www.hep.ucl.ac.uk/atlas/sct/tim/TIM_flowcharts.pdf<br />
[ 18 ] TIM Fast Commands Flow :<br />
http://www.hep.ucl.ac.uk/atlas/sct/tim/TIM_fast-flow.pdf<br />
[ 19 ] TIM Schematics -2- INTERFACES :<br />
http://www.hep.ucl.ac.uk/atlas/sct/tim/TIM_schem-2.pdf<br />
[ 20 ] TIM Schematics -1- STAND-ALONE :<br />
http://www.hep.ucl.ac.uk/atlas/sct/tim/TIM_schem-1.pdf<br />
[ 21 ] TIM Front Panel Interfaces :<br />
http://www.hep.ucl.ac.uk/atlas/sct/tim/TIM_front-panel.pdf<br />
[ 22 ] TIM Schematics -3- SEQUENCER<br />
& STAND-ALONE ID & ROD BUSY :<br />
http://www.hep.ucl.ac.uk/atlas/sct/tim/TIM_schem-3.pdf<br />
[ 23 ] LHC Crates :<br />
http://atlas.web.cern.ch/Atlas/GROUPS/FRONTEND/documents/<br />
LHC_Crates_TS11.PDF<br />
[ 24 ] ROD Crate J3 Backplane Design :<br />
http://webnt.physics.ox.ac.uk/wastie/backplane.htm<br />
[ 25 ] ROD Crate Backplane Interconnections :<br />
http://www-wisconsin.cern.ch/~atlas/off-detector/ROD/doc/<br />
2000-07-27-RODCrateBPSlot.doc<br />
VI. FIGURES<br />
Fig. 1 First TIM-0 Module :<br />
http://www.hep.ucl.ac.uk/~mp/TIM-0_photo-4.jpg<br />
Fig. 2 TIM Functional Model :<br />
http://www.hep.ucl.ac.uk/~mp/TIM_Functional_model.eps.gz<br />
Fig. 3 CPLDs versus Discrete Components :<br />
http://www.hep.ucl.ac.uk/atlas/sct/tim/TIM_PLDs-vs-ICs.pdf<br />
Fig. 4 CPLDs Arrangement on TIM-0 :<br />
http://www.hep.ucl.ac.uk/atlas/sct/tim/TIM_func_pic.jpg<br />
Fig. 5 Clocks Flow and Delays :<br />
http://www.hep.ucl.ac.uk/atlas/sct/tim/TIM_clocks.pdf
The LHCb Timing and Fast Control system<br />
R. Jacobsson, B. Jost<br />
CERN, 1211 Geneva 23, Switzerland<br />
Richard.Jacobsson@cern.ch, Beat.Jost@cern.ch<br />
A. Chlopik, Z. Guzik<br />
Soltan Institute for Nuclear Studies, Swierk-Otwock, Poland<br />
arek@ipj.gov.pl, zbig@ipj.gov.pl<br />
Abstract<br />
In this paper we describe the LHCb Timing and Fast<br />
Control (TFC) system. It differs from those of the other<br />
LHC experiments in that it has to support two levels of high-rate<br />
triggers. Furthermore, emphasis has been put on<br />
partitioning and on locating the TFC mastership in one type of<br />
module: the Readout Supervisor. The Readout Supervisor<br />
handles all timing, trigger, and control command distribution.<br />
It generates auto-triggers as well as controls the trigger rates.<br />
Partitioning is handled by a programmable patch<br />
panel/switch introduced in the TTC distribution network<br />
between a pool of Readout Supervisors and the Front-End<br />
electronics.<br />
I. INTRODUCTION<br />
LHCb has devised a Timing and Fast Control (TFC)<br />
system[1] to distribute information that must arrive<br />
synchronously at various places in the experiment. Examples<br />
of this kind of information are:<br />
• LHC clock<br />
• Trigger decisions<br />
• Reset and synchronization commands<br />
• Bunch crossing number and event number<br />
Although the backbone of the timing, trigger and control<br />
distribution network is based on the CERN RD12 system<br />
(TTC)[2], several components are specific to the LHCb<br />
experiment because the readout system differs from that of<br />
the other experiments in several respects. Firstly,<br />
the LHCb TFC system has to handle two levels of high-rate<br />
triggers: a Level 0 (L0) trigger with an accept rate of<br />
maximum 1.1 MHz and a Level 1 (L1) trigger with an accept<br />
rate of maximum 40 - 100 kHz. This feature is reflected in<br />
the architecture of the Front-End electronics, which consists<br />
of a L0 part and a L1 part (see Figure 1). The L0 Front-End<br />
(FE) electronics samples the signals from the detector at a rate<br />
of 40 MHz and stores them for the duration of the L0<br />
trigger processing. The event data are subsequently<br />
de-randomized before being handed over to the L1 FE<br />
electronics. The L1 FE electronics buffers the data during the<br />
L1 trigger processing, de-randomizes the events before it<br />
zero-suppresses the data, and finally feeds the data into the<br />
DAQ system for event building.<br />
Secondly, the TFC architecture has been designed with<br />
emphasis on partitioning[3]. A partition is in LHCb a generic<br />
term, defined as a configurable ensemble of parts of a subdetector,<br />
an entire sub-detector or a combination of subdetectors<br />
that can be run in parallel, independently and with a<br />
different timing, trigger and control configuration than any<br />
other partition.<br />
Furthermore, the aim has been to locate the entire TFC<br />
mastership of a partition in a single module. The trigger<br />
decision units are also considered as sub-detectors.<br />
Figure 1: Overview of the LHCb readout system.<br />
II. USE OF THE TTC SYSTEM<br />
The TTC system has been found to suit the LHCb<br />
application well. The LHC clock is transmitted to all destinations<br />
using the TTC system and Channel A is used as it was<br />
intended, i.e. to transmit the LHCb L0 trigger decisions to the<br />
FE electronics in the form of an accept/reject signal at 40 MHz.
Channel B supports several functions:<br />
• Transmission of the Bunch Counter and the<br />
Event Counter Reset (BCR/ECR).<br />
• Transmission of the L1 trigger decision (~1.1<br />
MHz).<br />
• Transmission of Front-End control commands,<br />
e.g. electronics resets, calibration pulse triggering<br />
etc.<br />
Table 1: Encoding of the Channel B broadcasts. “R” stands for<br />
reserve bit.<br />
Bit:          7  6  5  4        3         2      1    0<br />
L1 Trigger:   1  [Trigger type] [EventID]        0    0<br />
Reset:        0  1  R  L1 EvID  L1 FE     L0 FE  ECR  BCR<br />
Calibration:  0  0  0  1        [Pulse type]     0    0<br />
Command:      0  0  R  R        R         R      R    R<br />
The information is transmitted in the form of the short<br />
broadcast format[4], i.e. 16 bits out of which two bits are<br />
dedicated to the BCR/ECR and six bits are user defined.<br />
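To make the packing concrete, here is a small sketch of the Table 1 layout. The helper names are ours and the bit fields are simply our reading of the table, not code from the experiment:

```python
# Sketch of the Table 1 Channel B command byte (bits 7..0).
# Helper names are illustrative; the field layout follows Table 1.

def encode_l1_trigger(trigger_type: int, event_id: int) -> int:
    """L1 trigger broadcast: 1 | trigger type (3) | EventID (2) | 0 0."""
    assert 0 <= trigger_type < 8 and 0 <= event_id < 4
    return 0x80 | (trigger_type << 4) | (event_id << 2)

def encode_reset(l1_evid=0, l1_fe=0, l0_fe=0, ecr=0, bcr=0) -> int:
    """Reset broadcast: 0 1 R | L1 EvID | L1 FE | L0 FE | ECR | BCR."""
    return 0x40 | (l1_evid << 4) | (l1_fe << 3) | (l0_fe << 2) | (ecr << 1) | bcr

def encode_calibration(pulse_type: int) -> int:
    """Calibration broadcast: 0 0 0 1 | pulse type (2) | 0 0."""
    assert 0 <= pulse_type < 4
    return 0x10 | (pulse_type << 2)

# A 16-bit short broadcast on the 40 MHz channel gives at most
# 40e6 / 16 = 2.5e6 broadcasts per second.
MAX_BROADCAST_RATE_HZ = 40e6 / 16
```

The maximum broadcast rate quoted in the text follows directly from the 16-bit short-broadcast length at the 40 MHz channel rate.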
From the TTC bandwidth it follows that a maximum of<br />
~2.5 MHz of broadcasts can be transmitted. The eight bits are<br />
encoded according to Table 1. A priority scheme determines<br />
the order in which the different broadcasts are transmitted in<br />
case they clash.<br />
III. TFC COMPONENTS SPECIFIC TO LHCB<br />
The TFC architecture is shown in Figure 2. It incorporates<br />
a pool of TFC masters, Readout Supervisors[5], one of which<br />
is interfaced to the central trigger decision units and is<br />
used for normal data taking. The other Readout Supervisors<br />
are reserves and can be invoked for tests, calibrations and<br />
debugging. The reserve Readout Supervisors also allow<br />
connecting local trigger units.<br />
The TFC Switch[6] distributes the TTC information to the<br />
Front-End electronics and the Throttle Switches[6] feed back<br />
hardware throttle signals from the L1 trigger system, the L1<br />
de-randomizers and components in the data-driven part of the<br />
DAQ system, to the appropriate Readout Supervisors.<br />
The Throttle ORs[6] form a logical OR of the throttle<br />
signals from sets of Front-End electronics.<br />
A GPS system allows time-stamping the local event<br />
information sampled in the Readout Supervisor.<br />
Figure 2: Overview of the TFC architecture.
IV. THE READOUT SUPERVISOR<br />
The Readout Supervisor has been designed with emphasis<br />
on versatility in order to support many different types of<br />
running mode, and modifiability for functions to be added and<br />
changed easily. Below is a summary of the most important<br />
functions. A complete description can be found in Reference<br />
[5].<br />
The TTC encoder circuit incorporated in each Readout<br />
Supervisor receives directly the LHC clock and the orbit<br />
signal from the TTC machine interface (TTCmi). The clock is<br />
distributed on the board in a star fashion and is transmitted to<br />
all synchronous destinations via the TTC system.<br />
The Readout Supervisor receives the L0 trigger decision<br />
from the central L0 trigger Decision Unit (L0DU), or from an<br />
optional local trigger unit, together with the Bunch Crossing<br />
ID. In order to adjust the global latency of the entire L0<br />
trigger path to a total of 160 cycles, the Readout Supervisor<br />
has a pipeline of programmable length at the input of the L0<br />
trigger. Provided no other changes are made to the system, the<br />
depth of the pipeline is set once and for all during the<br />
commissioning with the first timing alignment. The Bunch<br />
Crossing ID received from the L0DU is compared to the<br />
expected value from an internal counter in order to verify that<br />
the L0DU is synchronized. For each L0 trigger accept, the<br />
source of the trigger (3-bit encoded) together with a 2-bit<br />
Bunch Crossing ID, a 12-bit L0 Event ID (number of L0<br />
triggers accepted), and a “force bit” is stored in a FIFO. The<br />
force bit indicates that the trigger has been forced and that<br />
consequently the L1 trigger decision should be made positive,<br />
irrespective of the decision of the central L1 trigger Decision<br />
Unit (L1DU). The information in the FIFO is read out at the<br />
arrival of the corresponding L1 trigger decisions from the<br />
L1DU.<br />
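A minimal software model of this FIFO-based bookkeeping (a sketch under our own naming, not the firmware; field widths follow the text: 3-bit source, 2-bit Bunch Crossing ID, 12-bit L0 Event ID, force bit):

```python
from collections import deque

class AcceptFifo:
    """Hypothetical model of the RS L0-accept FIFO and L1-side checks."""

    def __init__(self):
        self.fifo = deque()
        self.l0_event_id = 0  # 12-bit count of accepted L0 triggers

    def on_l0_accept(self, source: int, bx_id: int, forced: bool):
        # store source (3 bits), BX ID (2 bits), L0 Event ID (12 bits), force bit
        self.fifo.append((source & 0x7, bx_id & 0x3,
                          self.l0_event_id & 0xFFF, forced))
        self.l0_event_id = (self.l0_event_id + 1) & 0xFFF

    def on_l1_decision(self, accept: bool, bx_id: int, l0_event_id: int):
        """Pop the matching entry; return (final_decision, in_sync)."""
        source, exp_bx, exp_evid, forced = self.fifo.popleft()
        in_sync = (bx_id == exp_bx) and (l0_event_id == exp_evid)
        # A forced L0 trigger makes the L1 decision positive regardless
        # of the L1DU verdict.
        return (accept or forced), in_sync
```

A mismatch of the popped IDs against the incoming L1 IDs would flag a loss of synchronization in the L1DU.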
The RS receives the L1 trigger decision together with a 2-bit<br />
Bunch Crossing ID and a 12-bit L0 Event ID. The two<br />
incoming IDs are compared with the IDs stored in the FIFO in<br />
order to verify that the L1DU is synchronized. If the force bit<br />
is set, the decision is converted to positive. The 3-bit trigger<br />
type and two bits of the L0 Event ID are subsequently<br />
transmitted as a short broadcast according to the format in<br />
Table 1. In order to space the L1 trigger decision broadcasts a<br />
L1 de-randomizer buffer has been introduced.<br />
The Readout Supervisor controls the trigger rates<br />
according to the status of the buffers in the system in order to<br />
prevent overflows. Owing to the distance and the high trigger rate,<br />
the L0 de-randomizer buffer occupancy cannot be controlled<br />
in a direct way. However, as the buffer activity is completely<br />
deterministic, the RS has a finite state machine to emulate the<br />
occupancy. This is also the case for the L1 buffer. In case an<br />
overflow is imminent the RS throttles the trigger, which in<br />
reality is achieved by converting trigger accepts into rejects.<br />
The slower buffers and the event-building components feed<br />
back throttle signals via hardware to the RS. Data congestion<br />
at the level of the L2/L3 farm is signalled via the Experiment<br />
Control System (ECS) to the onboard ECS interface, which<br />
can also throttle the triggers. “Stopping data taking” via the<br />
ECS is carried out in the same way. For monitoring and<br />
debugging, the RS has history buffers that log all changes on<br />
the throttle lines.<br />
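The emulation principle can be sketched as follows; buffer depth and readout time here are illustrative parameters, not LHCb values:

```python
def throttle_l0(decisions, depth=16, readout_cycles=36):
    """Deterministic L0 de-randomizer emulation (illustrative model).

    decisions: per-bunch-crossing L0 accepts (True/False).
    Returns the throttled sequence; one event drains from the buffer
    every `readout_cycles` clock cycles.
    """
    occupancy, drain_timer, out = 0, 0, []
    for accept in decisions:
        # one event leaves the buffer when its readout completes
        if occupancy > 0:
            drain_timer += 1
            if drain_timer == readout_cycles:
                occupancy -= 1
                drain_timer = 0
        if accept and occupancy >= depth:
            accept = False          # convert the accept into a reject
        if accept:
            occupancy += 1
        out.append(accept)
    return out
```

Because the buffer activity is deterministic, such an emulator tracks the real occupancy exactly without any feedback path from the Front-End.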
The RS also provides several means for auto-triggering. It<br />
incorporates two independent uniform pseudo-random<br />
generators to generate L0 and L1 triggers according to a<br />
Poisson distribution. The RS also has a unit running several<br />
finite state machines synchronized to the orbit signal for<br />
periodic triggering, periodic triggering of a given number of<br />
consecutive bunch crossings (timing alignment), triggering at<br />
a programmable time after sending a command to fire a<br />
calibration pulse, triggering at a given time on command via<br />
the ECS interface etc. The source of the trigger is encoded in<br />
the 3-bit L1 trigger qualifier.<br />
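The principle of the random-trigger generation can be modelled as a uniform draw per 25 ns bunch crossing with accept probability rate/40 MHz, which yields Poisson-distributed trigger counts. The firmware uses hardware pseudo-random generators; this Python model is only illustrative:

```python
import random

def random_triggers(n_crossings: int, rate_hz: float, seed=0):
    """Emit one accept/reject per bunch crossing so that the mean
    accept rate is rate_hz out of the 40 MHz crossing rate."""
    rng = random.Random(seed)
    p = rate_hz / 40e6
    return [rng.random() < p for _ in range(n_crossings)]
```

Two such independent generators, one for L0 and one for L1, reproduce the statistics of physics triggers during stand-alone running.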
The RS also has the task of transmitting various reset<br />
commands. For this purpose the RS has a unit running several<br />
finite state machines, also synchronized to the orbit signal, for<br />
transmitting Bunch Counter Resets, Event Counter Resets, L0<br />
FE electronics reset, L1 + L0 electronics reset, L1 Event ID<br />
resets etc. The RS can be programmed to send the commands<br />
regularly or solely on command via the ECS interface. The<br />
Bunch Counter and the Event Counter Reset have highest<br />
priority. Any clashing broadcast is postponed until the first<br />
broadcast is ready (L1 trigger broadcast) or until the next<br />
LHC orbit (reset, calibration pulse, and all miscellaneous<br />
commands).<br />
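The clash-resolution rule can be pictured as a small arbiter; the kind names and return structure here are ours:

```python
def resolve_clash(pending):
    """Resolve broadcasts queued for the same slot (illustrative model).

    BCR/ECR have the highest priority; a clashing L1 trigger broadcast
    goes out as soon as the channel is free, while reset, calibration
    and miscellaneous commands are postponed to the next LHC orbit.
    Returns (sent_now, deferred_next_slot, deferred_next_orbit).
    """
    priority = ["BCR", "ECR", "L1_TRIGGER", "RESET", "CALIBRATION", "COMMAND"]
    ordered = sorted(pending, key=priority.index)
    sent = ordered[0]
    next_slot = [b for b in ordered[1:] if b == "L1_TRIGGER"]
    next_orbit = [b for b in ordered[1:] if b != "L1_TRIGGER"]
    return sent, next_slot, next_orbit
```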
The RS keeps a large set of counters that record its<br />
performance and the performance of the experiment (deadtime<br />
etc.). In order to get a consistent picture of the status of<br />
the system, all counters are sampled simultaneously in<br />
temporary buffers waiting to be read out via the onboard ECS<br />
interface.<br />
Figure 3: Simplified logical diagram of the Readout Supervisor<br />
showing the basic functions.<br />
The RS also incorporates a series of buffers analogous to a<br />
normal Front-End chain to record local event information and<br />
provide the DAQ system with the data on an event-by-event<br />
basis. The “RS data block” contains the “true” bunch crossing<br />
ID and the Event Number, and is merged with the other event<br />
data fragments during the event building.
The ECS interface is a Credit Card PC through which the<br />
entire RS is programmed, configured, controlled, and<br />
monitored. Note that in order to change the trigger and control<br />
mode of the RS for testing, calibrating and debugging it is not<br />
necessary to reprogram any of the FPGAs. All functionality is<br />
set up and activated via parameters that can be written at any<br />
time.<br />
Figure 4: The TFC architecture simplified to show an example of<br />
partitioning.<br />
A. The TFC Switch and Partitioning<br />
A good partitioning scheme is essential in order to carry<br />
out efficient commissioning, testing, debugging, and<br />
calibrations. The LHCb TFC partitioning is shown by an<br />
example in Figure 4, in which the TFC architecture in Figure 2<br />
has been simplified. The TFC Switch allows setting up a<br />
partition by associating a number of partition elements (e.g.<br />
sub-detectors) to a specific Readout Supervisor (Figure 5).<br />
The Readout Supervisor can then be configured to control and<br />
trigger the partition in whatever specific mode that is<br />
required. In the example in Figure 4, the partition elements 2 –<br />
5 are running with the central RS, which is interfaced to the<br />
central triggers. Partition element 1 is simultaneously running<br />
a stand-alone run with a separate RS. The three other Readout<br />
Supervisors are idle and can be reserved at any time for other<br />
partitions. Note that the TFC Switch is located before the TTC<br />
optical transmitters (TTCtx) and that it is handling the<br />
encoded TTC signals electrically.<br />
The configuring of the TFC Switch is done via the<br />
standard LHCb ECS interface incorporated onboard: the<br />
Credit Card PC.<br />
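Functionally, the TFC Switch is a programmable routing table from Readout Supervisor inputs to partition-element (TTCtx) outputs. A hypothetical software model, with invented names:

```python
class TFCSwitch:
    """Illustrative model of the 16x16 TFC Switch routing."""

    def __init__(self, n=16):
        self.n = n
        self.route = {}          # output port -> input port

    def assign_partition(self, rs_input: int, elements):
        """Associate a set of partition elements with one RS input."""
        for out in elements:
            assert 0 <= out < self.n and 0 <= rs_input < self.n
            self.route[out] = rs_input

    def drive(self, rs_signals):
        """rs_signals: dict input port -> encoded TTC stream.
        Returns the stream seen by each configured partition element."""
        return {out: rs_signals[inp] for out, inp in self.route.items()}
```

Every element routed to the same input necessarily receives an identical timing, trigger and control stream, which is exactly what defines a partition.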
Figure 5: The principle of the TFC Switch.<br />
From the architecture of the system it follows that the FE<br />
electronics fed by the same TTCtx receive the<br />
same timing, trigger, and control information. Hence the<br />
TTCtx define the partition elements. The TFC Switch has<br />
been designed as a 16x16 switch and thus allows the LHCb<br />
detector to be divided into 16 partition elements. To increase<br />
the partition granularity an option exists whereby four TFC<br />
Switches are deployed in order to divide the LHCb detector<br />
into 32 partitions (Figure 6).<br />
Figure 6: Four TFC Switches put together to increase the partition<br />
granularity to 32.<br />
A crucial point concerning the TFC Switch is that all<br />
internal paths from input to output must have equal<br />
propagation delays. Otherwise, the partition elements will<br />
suffer from timing alignment problems using different<br />
Readout Supervisors. Measurements performed on the first<br />
prototype of the TFC Switch show that it will be necessary to<br />
add adjustable delays at the outputs due to strongly varying<br />
propagation delays in the 16:1 multiplexers used.<br />
B. The Throttle Switches and the Throttle ORs<br />
The function of the Throttle Switches is to feed back the<br />
throttle information to the appropriate Readout Supervisor,<br />
such that only the Readout Supervisor in control of a partition<br />
is throttled by the components within that partition. Figure 4<br />
shows an example of how they are associated. The logical<br />
operation of the Throttle Switch is to perform a logical OR of<br />
the inputs from the components belonging to the partition<br />
(Figure 7). The system incorporates two Throttle Switches, a<br />
L0 and a L1 Throttle Switch. The sources of L0 throttles are<br />
essentially the components that feed the L1 trigger system.<br />
The sources of L1 throttles are the L1 de-randomizers and the<br />
event building components.<br />
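The partition-scoped OR can be stated in a few lines; the names are illustrative:

```python
def throttle_outputs(lines, partitions):
    """Illustrative model of a Throttle Switch.

    lines: dict partition element -> bool throttle state.
    partitions: dict rs_name -> set of elements controlled by that
    Readout Supervisor.
    Returns dict rs_name -> ORed throttle, so only the RS in control
    of a partition is throttled by the components within it.
    """
    return {rs: any(lines[e] for e in elems)
            for rs, elems in partitions.items()}
```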
Figure 7: The principle of the Throttle Switches.<br />
For monitoring and debugging, the Throttle Switches keep<br />
a log of the history of the throttles. A transition on any of the<br />
throttle lines triggers the state of all throttle lines, together with<br />
a time-stamp, to be stored in a FIFO.<br />
The configuring and the monitoring of the Throttle<br />
Switches are done via the standard LHCb ECS interface.<br />
The Throttle ORs group throttle lines belonging to the<br />
same partition elements. They are identical to the Throttle<br />
Switches in all aspects except that they OR all inputs and have<br />
only one output.<br />
V. CONCLUSIONS<br />
The LHCb Timing and Fast Control (TFC) system and the<br />
use of the TTC system are well established. The Readout<br />
Supervisor incorporates all mastership in a single module and<br />
provides great flexibility and versatility. Partitioning is<br />
well integrated through the TFC Switch and the Throttle<br />
Switches.<br />
The architecture and the components have been put<br />
through two reviews and the system is now in the prototyping<br />
phase.<br />
VI. REFERENCES<br />
[1] R. Jacobsson and B. Jost, “The LHCb Timing and Fast<br />
Control system”, LHCb 2001-016 DAQ.<br />
[2] RD-12 Documentation on WWW<br />
(http://www.cern.ch/TTC/intro.html) and references<br />
therein.<br />
[3] C. Gaspar, R. Jacobsson, B. Jost, “Partitioning in<br />
LHCb”, LHCb 2001-116 DAQ.<br />
[4] B. Jost, “The TTC Broadcast Format (proposal)”,<br />
LHCb 2001-017 DAQ.<br />
[5] R.Jacobsson, B. Jost, Z. Guzik, “Readout Supervisor<br />
Design Specifications”, LHCb 2001-012 DAQ.<br />
[6] Z. Guzik, Richard Jacobsson, and B. Jost, “The TFC<br />
Switch specifications”, LHCb 2001-018 DAQ.
Implementation Issues of the LHCb Readout Supervisor<br />
Z. Guzik, A. Chlopik<br />
Soltan Institute for Nuclear Studies, 05-400 Swierk-Otwock, Poland<br />
Zbig@ipj.gov.pl, Arek@ipj.gov.pl<br />
and<br />
R. Jacobsson<br />
CERN, 1211 Geneva 23, Switzerland<br />
Richard.Jacobsson@cern.ch<br />
Abstract<br />
In this paper we describe the architecture of the most<br />
crucial and sophisticated element of the LHCb Timing and<br />
Fast Control (TFC) System - the Readout Supervisor (RS).<br />
The multi-functionality, the complexity and the speed<br />
demands dictate the use of the most advanced and highest-performance<br />
technological solutions. The logical part of the Readout<br />
Supervisor is therefore based on the fastest PLDs on the<br />
market, i.e. the Altera MAX and FLEX devices operating at<br />
2.5 V. Twelve such units are implemented, each<br />
carrying separate logical functions. The front-end logic of the<br />
module is designed with positive ECLinPS Lite ICs. The<br />
Experiment Control System (ECS) interface to the Readout<br />
Supervisor is based on a commercial Credit Card PC from<br />
Digital Logic AG.<br />
I. INTRODUCTION<br />
The Readout Supervisor is a central component in the<br />
LHCb Timing and Fast Control (TFC) system. The functional<br />
specifications and the detailed description of all the Readout<br />
Supervisor tasks, features and nodes have been covered in<br />
document [1].<br />
The current implementation of the first minimal version<br />
of the Readout Supervisor is covered in a separate document<br />
[2]. There are a few minor differences between the actual<br />
implementation described in [2] and the one proposed in the<br />
Readout Supervisor Design Specification [1]. Therefore, the<br />
final assignments and resource allocations are only valid as<br />
presented in [2].<br />
II. DESIGN CRITERIA<br />
During the design and implementation process the following<br />
criteria were obeyed:<br />
• Modular approach to logical design - the entire RS design is<br />
divided into logically consistent nodes (entities) which<br />
are programmed on separate PLDs. Logical connections<br />
between modules have a fully pipelined structure.<br />
• The current prototype version of the Readout Supervisor is<br />
implemented with a FASTBUS board form factor; only<br />
one external +5V power supply is used, either via the<br />
FASTBUS connector or from an on-board special IBM-<br />
PC type connector. The latter means that a FASTBUS<br />
power-supply/crate is not necessary. On-board DC/DC<br />
converters from DATEL provide the three additional<br />
voltages: +3.3 V (7 A), +2.5 V (5 A) and -5 V (2 A).<br />
• Interfaces concerning external trigger data are based on<br />
LVDS technology. The National DS90C402 chip was selected<br />
as a dual receiver, while the DS90C401 chip was<br />
chosen as a dual LVDS driver.<br />
• All discrete fast logic (clock regeneration and TTC<br />
encoder) will be realized with MOTOROLA ECLinPS or<br />
ECLinPS Lite integrated circuits from the 100E, 100EL<br />
or 100ELT series.<br />
• The TTC encoder (Channel A and Channel B time<br />
division multiplexer for broadcasting L0 and L1 triggers and<br />
commands) is based - with slight modifications - on the<br />
TTCvx module by Per Gällnö. The TTCex design by<br />
Bruce Taylor is also being considered as an alternative<br />
mezzanine. The TTC encoder, together with the clock PLL<br />
regeneration, is located on a separate mezzanine board.<br />
• All Readout Supervisor functional logic will be<br />
performed by Altera MAX 7000AE (for the most time-critical<br />
parts) and FLEX 10KE PLD devices. All of the<br />
PLDs used (except the PLD constituting the IOBUS) have the<br />
same 144-pin count. For easy debugging and to facilitate<br />
connecting a Logic State Analyzer, all PLDs will be<br />
placed on separate mezzanine sub-boards with 100 mil<br />
pin spacing.<br />
• All PLDs work with a 3.3 V power supply for<br />
input-output (VCCIO). The in-core PLD logic supply<br />
(VCCINT) is +3.3 V for the MAX devices and +2.5 V for the FLEX<br />
devices. The speed grades of the selected<br />
PLDs must not be worse than “5” for the MAX devices<br />
and “1” for the FLEX devices. Configuration and<br />
programming of all PLDs is organized by means of a<br />
programmable JTAG chain.<br />
• All the implemented FIFOs are the CY7C4251 from<br />
Cypress. They are 8Kx9 synchronous devices with 10 ns<br />
access time and 3 ns setup time.
• MAX+plus II and AHDL were chosen as the PLD design<br />
environment for all modules except for the random trigger<br />
generator module. In the future the aim is to translate<br />
all designs into VHDL. The PLD designs are simulated in<br />
MAX+plus, as well as in Cadence using test benches in<br />
VHDL.<br />
• The schematics and the PCB layout are done with the<br />
help of the Protel Design Explorer 99 SE software with<br />
Service Pack 6.<br />
• The Readout Supervisor is controlled by a commercial<br />
Credit Card PC via an Ethernet link. Interfacing with its<br />
PCI bus is organized with the help of a PLX PCI 9080<br />
accelerator chip forming the local bus.<br />
• There are no jumpers on the Readout Supervisor<br />
board.<br />
A schematic block diagram of the entire Readout Supervisor<br />
is presented in Figure 1.<br />
Figure 1: Block Diagram of the Readout Supervisor
III. ECS INTERFACE<br />
The Experiment Control System (ECS) interface to the<br />
Readout Supervisor and its associated logic is presented in<br />
Fig.2 below.<br />
The ECS interface is based on a commercial Credit Card<br />
PC (CC-PC) from Digital Logic AG. Its access to the on-board<br />
logic is provided by means of the smart480BUS, consisting<br />
of 480 pins. The smart480BUS resources include a PCI bus.<br />
This is the basic medium to exchange information between<br />
the CC-PC and the Readout Supervisor logic. In between the<br />
CC-PC and the Readout Supervisor, there is an intermediate<br />
interface, the so-called “Glue Board”, which contains a<br />
PCI 9080 chip from PLX Technology. The PCI 9080 is a PCI-to-Local<br />
Bus accelerator chip working in J-mode (multiplexed<br />
a/d mode). The CC-PC is accessed externally via an<br />
ETHERNET LAN and, optionally, via a serial RS-232 link.<br />
The Glue Board also has a JTAG interface incorporated that can<br />
be used for the indispensable configuration and programming of<br />
the PLDs. The JTAG interface is implemented via the parallel<br />
printer port.<br />
All PLDs are accessed internally by means of an IOBUS<br />
formed from the PCI 9080 Local Bus.<br />
Figure 2: Structure of the Experiment Control Interface<br />
IV. PLD IN-SYSTEM PROGRAMMABILITY AND<br />
CONFIGURATION<br />
In the Readout Supervisor, two types of PLDs are used.<br />
The programming mechanism is different for the two. While<br />
the MAX 7K devices retain their configuration when power is<br />
switched off, the FLEX 10KE must be re-configured after<br />
each power-down. In the current design of the RS there are<br />
six (+1 reserve) MAX 7K devices and four FLEX 10KE<br />
devices. We have focused exclusively on JTAG for<br />
programming; the Altera native configuration system is not<br />
used.<br />
One PLD (MAX 7256-208) is used for interfacing between<br />
the CC-PC Glue Board PLX chip and the rest of the Readout<br />
Supervisor logic. This PLD must be programmed by a separate<br />
Byte Blaster driven externally via an on-board header. All<br />
remaining PLDs are programmed or configured by the JTAG<br />
interface located on the Glue Board PLX chip. This MAX 7256<br />
PLD contains the JTAG distribution logic<br />
for all other PLDs. The TCK JTAG clock lines for the<br />
programmed PLDs are driven directly from the “Glue Board”<br />
JTAG interface. The remaining three JTAG lines (TMS, TDI<br />
and TDO) are passed through the MAX 7256 and are distributed<br />
to every other PLD individually (see Figure 4). The proposed<br />
approach allows configuring and programming any set of<br />
selected PLDs, from one to all of them.<br />
The selection of a specific PLD to be configured is made via<br />
a programmable register (CSR) contained in the MAX 7256<br />
device (Q_IOI module). When a given PLD “x” is unselected,<br />
its TMS_x and TDO_x lines are driven permanently HIGH and<br />
its TDO_x/TDI_x pins do not participate in the closed JTAG<br />
chain. When a given PLD “x” is selected for programming or<br />
configuration, its TMS_x is controlled by the “Glue Board”<br />
JTAG interface TMS. Its TDO_x/TDI_x lines then constitute<br />
the closed JTAG chain together with the other selected PLDs.<br />
The order of the PLDs in the global JTAG chain is given in<br />
Table 1. A device is selected when the appropriate bit in the<br />
CSR is set HIGH. In working conditions all FLEX devices are<br />
selected and all MAX devices should be deselected.
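The CSR-driven chain selection described above can be sketched in a few lines of Python; the module names and bit ordering below are illustrative only (the actual chain order is given in Table 1):<br />

```python
# Sketch of the CSR-driven JTAG chain selection: bit i of the CSR selects the
# i-th PLD in a fixed board order; deselected devices are bypassed and do not
# participate in the closed TDO -> TDI chain. Module names are illustrative.
BOARD_ORDER = ["Q_PIPE", "Q_L0", "Q_L1", "Q_T1B",
               "Q_GCS", "Q_CMD", "Q_RNDM", "Q_CNT"]

def jtag_chain(csr: int) -> list[str]:
    """Return the PLDs that form the closed JTAG chain for this CSR value."""
    return [name for i, name in enumerate(BOARD_ORDER) if (csr >> i) & 1]
```

In working conditions the CSR would have the bits of all FLEX devices set and those of all MAX devices cleared.<br />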
Figure 4: Structure of the On-board JTAG Distribution<br />
V. SYNCHRONIZATION<br />
The Readout Supervisor receives the LHC bunch clock<br />
and LHC orbit signal from the TTCmi. The bunch clock is<br />
used without any phase adjustments but it is regenerated by a<br />
Phase Locked Loop Frequency Multiplier (MPC991) to produce<br />
the basic 40.08 MHz clock (BCLK), and the 80 and the<br />
160 MHz clocks that are necessary for TTC encoding.<br />
The external orbit signal is passed through a delay line to<br />
be phase adjusted to the internal BCLK. The PDU54-1500<br />
from Data Delay Devices is used as the delay line. It has 16 steps<br />
of 1.5 ns. It is also possible to work without external<br />
synchronization signals. When selected, the BCLK and the<br />
ORBIT signals are produced internally.<br />
Synchronizing the external L0 and L1 trigger data with the<br />
internal clock is an important task. It is achieved by means of<br />
clock edge selection as described below. The trigger data are<br />
received according to the timing diagram presented in Figure<br />
3. The data are accompanied by a strobe signal (every clock<br />
cycle for the L0 trigger and every decision for the L1 trigger).<br />
The received data are written into an internal buffer at the<br />
rising edge of the strobe (First Pipeline). The external strobe<br />
is subsequently delayed by approximately 5 ns and is or’ed<br />
with the original one. The presence of the strobe is tested by<br />
sampling this or’ed signal with both the positive and the<br />
negative edge of the internal clock. The proper clock edge is<br />
selected by the H_0PHASE parameter (for L0) and the<br />
H_1PHASE parameter (for L1) and is established during the<br />
timing alignment of the experiment. If the negative edge of<br />
the local clock was chosen as the proper one (as in the<br />
figure), then the data are stored in a Second Pipeline at this<br />
edge. After another half a clock period the data is transferred<br />
into the Third Pipeline at the positive edge to form the final<br />
data for this clock cell. Of course, if the positive edge was<br />
chosen, then the Second Pipeline step is skipped. In addition,<br />
for L1 triggers, a validation strobe is produced to be used as a<br />
write enable to the L1 Trigger De-randomizer.<br />
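A software-only model of this edge-selection scheme is sketched below. The 5 ns delay follows the text; the strobe width and edge positions are assumptions, and the hardware fixes the choice via H_0PHASE/H_1PHASE during timing alignment rather than computing it at run time:<br />

```python
# Minimal model of the clock-edge selection: the strobe is OR-ed with a copy
# delayed by ~5 ns, and the widened pulse is sampled on both clock edges; the
# edge that falls inside the widened strobe is the one to select.
CLOCK_PERIOD = 25.0   # ns, LHC bunch clock period
STROBE_WIDTH = 12.5   # ns, assumed strobe high time
DELAY = 5.0           # ns, on-board delay line

def widened_strobe_high(t: float, strobe_start: float) -> bool:
    """True if (strobe OR delayed strobe) is high at time t."""
    original = strobe_start <= t < strobe_start + STROBE_WIDTH
    delayed = strobe_start + DELAY <= t < strobe_start + DELAY + STROBE_WIDTH
    return original or delayed

def choose_edge(strobe_start: float) -> str:
    """Pick the clock edge that samples inside the widened strobe."""
    pos_edge, neg_edge = CLOCK_PERIOD, CLOCK_PERIOD / 2  # next edges after t=0
    if widened_strobe_high(pos_edge, strobe_start):
        return "positive"
    if widened_strobe_high(neg_edge, strobe_start):
        return "negative"
    return "none"
```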
For proper on-board clock distribution, separate PECL<br />
differential pairs of equal length are pulled to each PLD.<br />
Translators from PECL to TTL are placed at the closest<br />
distance to the clock pins of each PLD. A MC100E111 clock<br />
driver distributes the PECL clock lines in star fashion.<br />
Figure 3: Timing Diagram Showing the Synchronization<br />
Principle<br />
VI. TTC MEZZANINE<br />
The TTC Mezzanine function is to multiplex and encode<br />
the A and B channel signals generated by the appropriate PLDs.<br />
Another task of this sub-board is to regenerate the bunch clock<br />
or, in its absence, to generate an internal 40 MHz clock.<br />
Switching between the internal and external clock is realized by<br />
the H_EXT level generated in the Q_IOI module. The A and B<br />
channels are time-division multiplexed and bi-phase mark<br />
encoded (see Figure 5). A phase-locked-loop frequency<br />
synthesizer circuit handles the clock multiplication necessary for<br />
the encoding.<br />
Figure 5: Encoder Output Wave Forms
VII. PLD MODULES<br />
For the first prototype design we have chosen a solution<br />
with a separate PLD for each logical entity (module). For the<br />
most critical parts of the system the Altera MAX 7000AE<br />
PLD is used – currently the fastest PLD on the market.<br />
For the more complex entities, such as long multiple counters<br />
or random generators, the Altera FLEX 10KE is sufficient.<br />
There are six MAX devices and four FLEX devices used for the<br />
entire project. One reserve MAX PLD will also be mounted<br />
on the board.<br />
A. I/O Interface & Resets<br />
The main task of this module is to act as an interface<br />
between the ECS Glue board and the Readout Supervisor<br />
logic. It controls the internal IOBUS by providing dedicated<br />
chip selects and common control signals to all the Readout<br />
Supervisor PLDs. Another task of this module is to provide<br />
system reset and to distribute programmable JTAG chain to<br />
other PLDs. It is the only PLD programmed by external Byte<br />
Blaster.<br />
B. L0 External Trigger Phasing & Pipelining<br />
The primary function of this module is to provide a 16-stage<br />
delay pipeline for the L0 trigger path, where the pipeline<br />
depth is programmable. Besides that, it phases the incoming<br />
L0 trigger data to the internal clock and detects missing input<br />
strobes.<br />
C. L0 Trigger Handling<br />
This module receives all the different types of Level-0<br />
triggers (external and internal) and compiles a final trigger<br />
qualifier according to the trigger priority, presence of the L0<br />
inhibit and possible errors. It also performs the synchronization<br />
check of the incoming external L0 triggers. The<br />
accepted L0 triggers are written into the L0 Accept FIFO<br />
(AFIFO) and the YES decisions are broadcast over the TTC<br />
Channel A. Additionally, this module contains the L0 gap<br />
generator, which, if needed, can force gaps of programmable<br />
length between L0 trigger accepts.<br />
D. L1 Trigger Handling<br />
This module receives the L1 external trigger data and<br />
performs a synchronization check with the corresponding L0<br />
triggers contained in the L0 Accept FIFO. It evaluates the L1<br />
triggers and writes them into the L1 trigger de-randomizer<br />
(TFIFO). It also ensures that internal triggers are maintained<br />
when external L1 trigger path is blocked.<br />
E. L1 Trigger Broadcasts & Inhibits<br />
This module contains a rate controller for the L1 trigger<br />
broadcasting (real broadcasting is performed by another module).<br />
The second task of this module is to centralize the<br />
evaluation of all the different L0 and L1 inhibits in order to<br />
produce a single combined L0 inhibit and a single combined<br />
L1 inhibit. It also runs an L0 front-end de-randomizer occupancy<br />
controller and an L1 front-end buffer occupancy<br />
controller.<br />
F. Generic Command Sender & General Status<br />
Register<br />
The main task of this module is to resolve all of the<br />
incoming requests for trigger and command broadcasting. The<br />
module ensures that the command broadcasts get higher<br />
priority than the pending trigger broadcasts. It also makes sure<br />
that the Bunch Counter Resets and the Event Counter resets<br />
are sent with highest priority at the appropriate times. If a<br />
command broadcast request is refused due to the Generic<br />
Command Sender being busy, the command is postponed until<br />
the same bunch crossing in the next LHC turn. In case a<br />
trigger broadcast request is refused, it is delayed until the<br />
Generic Command Sender is free.<br />
Another important function of this module is to maintain<br />
all the general status bits of the entire Readout Supervisor. Finally,<br />
it also generates the internal orbit signal when internal<br />
synchronization is selected by the user and detects presence of<br />
external orbit signals.<br />
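The broadcast priorities described above can be summarized in a toy arbitration model (the names and the reduction to a single pick per crossing are illustrative; the real logic also implements the retry timing):<br />

```python
# Toy arbitration sketch of the broadcast priorities: bunch/event counter
# resets outrank command broadcasts, which outrank trigger broadcasts. A
# refused command retries one LHC turn later at the same bunch crossing,
# while a refused trigger simply waits (retry timing not modelled here).
PRIORITY = {"counter_reset": 0, "command": 1, "trigger": 2}

def arbitrate(requests):
    """Pick the single request to broadcast this crossing, or None."""
    if not requests:
        return None
    return min(requests, key=lambda r: PRIORITY[r])
```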
G. Command Generator & Internal Triggers<br />
This module runs a set of state machines that at the<br />
appropriate times request the sending of the different types of<br />
commands (resets, calibration pulsing, etc.). It also generates<br />
all the signals that should accompany certain commands. It<br />
also generates all the internal L0 triggers, except the random<br />
triggers. Another important function of this module is to<br />
maintain and generate dedicated resets to all logical nodes that<br />
need to be cleared individually.<br />
H. Random Generator<br />
The Random Generator module produces random L0<br />
triggers that are injected into the L0 trigger handling path. It<br />
can also generate random L1 triggers by randomly forcing a<br />
subset of the random L0 triggers at level one. Optionally, the<br />
module can also be configured to force every or none of the<br />
random L0 triggers. The triggers are generated according to a<br />
Poisson distribution and the rate is fully programmable as will<br />
be explained in the appendix concerning the Random<br />
Generator.<br />
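A behavioural sketch of such a random trigger follows: comparing a pseudo-random number against a programmable probability at every bunch crossing yields a Poisson-distributed trigger count per time window. The hardware pseudo-random implementation differs; this only illustrates the statistics:<br />

```python
# Behavioural model of a Poisson random trigger: each bunch crossing fires
# independently with probability p = rate / bunch_frequency, which gives
# exponentially distributed gaps and a Poisson-distributed trigger count.
import random

def random_triggers(n_crossings: int, rate_hz: float,
                    bunch_freq_hz: float = 40.08e6, seed: int = 1):
    """Return the crossing numbers that fire a random L0 trigger."""
    rng = random.Random(seed)
    p = rate_hz / bunch_freq_hz   # per-crossing trigger probability
    return [i for i in range(n_crossings) if rng.random() < p]
```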
I. Universal Counter Modules<br />
Each of these modules contains sixteen 32-bit counters,<br />
each of which increments when the corresponding “count<br />
enable” input is produced by the appropriate module. Optionally,<br />
each counter may be pre-scaled by a common pre-scale factor. In<br />
this minimal version of the RS, there are two such modules.<br />
All counters are presented in Table 1.
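One counter channel can be modelled as below; the exact pre-scaling behaviour (counting only every N-th enable) is an assumption about the implementation:<br />

```python
# Sketch of one universal counter channel: a 32-bit counter incremented on
# its count-enable input, optionally pre-scaled by a common factor N so that
# only every N-th enable is counted. Wrap-around models the 32-bit width.
class PrescaledCounter:
    def __init__(self, prescale: int = 1):
        self.prescale = prescale
        self._pending = 0
        self.value = 0

    def count_enable(self):
        self._pending += 1
        if self._pending >= self.prescale:
            self._pending = 0
            self.value = (self.value + 1) & 0xFFFFFFFF  # 32-bit wrap
```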
VIII. REFERENCES<br />
[1] R. Jacobsson, B. Jost and Z. Guzik, “Readout<br />
Supervisor Design Specification”, LHCb Technical<br />
Note, LHCb 2001-012 DAQ, CERN, February 12, 2001.<br />
[2] Z. Guzik and R. Jacobsson, “ODIN – LHCb<br />
Readout Supervisor, Technical Reference”,<br />
Revision 1.4, September 2001.<br />
Table 1: Implemented Readout Supervisor Counters<br />
# Description Proposed prescaling<br />
0 External L0 sync errors ungated -<br />
1 External L0 sync errors gated -<br />
2 External L0 accepts converted to NO by sync error -<br />
3 Total L0 accept ungated 4<br />
4 Total L0 accept gated 4<br />
5 Total L0 forces ungated -<br />
6 Total L0 forces gated -<br />
7 External L0 accepts ungated 4<br />
8 External L0 accepts gated 4<br />
9 External L0 force ungated -<br />
10 External L0 force gated -<br />
11 Sequencer periodic L0 triggers ungated -<br />
12 Sequencer periodic L0 triggers gated -<br />
13 Random triggers L0 ungated 4<br />
14 Random triggers L0 gated 4<br />
15 reserved<br />
16 Bunch Clock 2^10<br />
17 Bunch Clock gated by L0 Inhibit 4<br />
18 Bunch Clock gated by L1 Inhibit 4<br />
19 L1 external sync errors -<br />
20 External L1 Accepts (Triggers) -<br />
21 reserved<br />
22 L1 Random Force -<br />
23 L1 Forces (from AFIFO) -<br />
24 Total writes to TFIFO 4<br />
25 Accepted writes to TFIFO -<br />
26 Total broadcasts of L1 Trigger commands 4<br />
27 Total number L1 positive trigger retrieved from TFIFO -<br />
28 Total broadcast of L1 Positive Triggers commands -<br />
29 Number of Turns -<br />
30 Bunch Clock gated by L0 External Throttle -<br />
31 Bunch Clock gated by L1 External Throttle -<br />
...
CMS REGIONAL CALORIMETER TRIGGER JET LOGIC<br />
W. H. Smith, P. Chumney, S. Dasu, F. di Lodovico, M. Jaworski, J. Lackey, P. Robl,<br />
Physics Department, University of Wisconsin, Madison, WI 53706 USA<br />
Abstract<br />
The CMS regional calorimeter trigger system detects<br />
signatures of electrons/photons, taus, jets, and missing and<br />
total transverse energy in a deadtimeless pipelined<br />
architecture. This system contains 20 crates of custom-built<br />
electronics. Recent changes to the Calorimeter<br />
Trigger have been made to improve the efficiency and<br />
purity of jet and τ triggers. The revised algorithms, their<br />
implementation in hardware, and their performance on<br />
physics signals and backgrounds are discussed.<br />
1. CMS CALORIMETER L1 TRIGGER<br />
The CMS level 1 trigger decision is based in part upon<br />
local information from the level 1 calorimeter trigger<br />
about the presence of physics objects such as photons,<br />
electrons, and jets, as well as global sums of ET and<br />
missing ET (to find neutrinos) [1].<br />
For most of the CMS ECAL, a 5 x 5 array of PbWO4<br />
crystals is mapped into trigger towers. In the rest of the<br />
ECAL there is somewhat lower granularity of crystals<br />
within a trigger tower. There is a 1:1 correspondence<br />
between the HCAL and ECAL trigger towers. The trigger<br />
tower size is equivalent to the HCAL physical towers,<br />
0.087 x 0.087 in η x φ. The φ size remains constant in Δφ<br />
and the η size remains constant in Δη out to an η of 2.1,<br />
beyond which the η size increases.<br />
Figure 1. Calorimeter Trigger Jet Algorithm<br />
The jet trigger algorithm shown in Figure 1 uses the<br />
transverse energy sums (ECAL + HCAL) computed in<br />
calorimeter regions (4x4 trigger towers). Jets and τs are<br />
characterized by the transverse energy ET in 3x3<br />
calorimeter regions (12x12 trigger towers). For each<br />
calorimeter region a τ-veto bit is set if there are more than<br />
two active ECAL or HCAL towers in the 4x4 region. A jet<br />
is defined as ’tau-like’ if none of the 9 calorimeter region<br />
τ-veto bits are set.<br />
2. CALORIMETER TRIGGER HARDWARE<br />
The calorimeter level 1 trigger system, shown in Figure<br />
2, receives digital trigger sums from the front-end<br />
electronics system, which transmits energy on an eight bit<br />
compressed scale. The data for two trigger towers are sent<br />
on a single link with eight bits apiece, accompanied by<br />
five bits of error detection code and a “fine- grain” bit for<br />
each trigger tower characterizing the energies summed<br />
into it, i.e. isolated energy for the ECAL or an energy<br />
deposit consistent with a minimum ionizing particle for<br />
the HCAL.<br />
Figure 2. Overview of Level 1 Calorimeter Trigger
The calorimeter regional crate system uses 20 regional<br />
processor crates covering the full detector. Eighteen crates<br />
are dedicated to the barrel and two endcaps. These crates<br />
cover the region |η|
The jets and τs are characterized by the transverse<br />
energy ET in 3x3 calorimeter regions using a sliding<br />
window technique that spans the complete (η,φ) coverage<br />
of the CMS calorimeters seamlessly. The summation<br />
spans 12x12 trigger towers in the barrel and endcap or<br />
3x3 larger HF towers in the HF. The φ size of the jet<br />
window is the same everywhere. The η binning gets<br />
somewhat larger at high η due to the size of calorimeter<br />
and trigger tower segmentation. The jet trigger central<br />
region ET is required to be higher than the eight neighbor<br />
region ET values. The jets are labeled by the (η,φ) indexes of<br />
the central calorimeter region.<br />
For each calorimeter region a τ-veto bit is set ON if<br />
there are more than two active ECAL or HCAL towers in<br />
the 4x4 region. This assignment of a τ-veto bit is<br />
performed by the input memory lookup tables on the<br />
Receiver Card that assign the ET values to the appropriate<br />
scales for downstream processing. Spare bits in these<br />
memories are used as threshold ETs to determine the<br />
number of active towers. These towers are counted by the<br />
downstream logic to determine τ-like energy deposits. A<br />
jet is defined as “τ-like” if none of the 9 calorimeter<br />
region τ-veto bits are ON.<br />
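The sliding-window and τ-veto logic described above can be sketched as follows. Grid edges are ignored here for brevity, whereas the real trigger spans the full (η,φ) coverage seamlessly:<br />

```python
# Sketch of the 3x3 sliding-window jet finder with tau veto: a jet is declared
# where the central region E_T exceeds its eight neighbours; the candidate is
# tau-like when none of the nine regions has more than two active ECAL or
# HCAL towers. Inputs are per-region values (each region is 4x4 towers).
def find_jets(et, active):
    """et[eta][phi]: region E_T sums; active[eta][phi]: max active-tower
    count per region. Returns (eta, phi, E_T, is_tau) tuples."""
    jets = []
    for i in range(1, len(et) - 1):
        for j in range(1, len(et[0]) - 1):
            window = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            if all(et[i][j] > et[i + di][j + dj]
                   for di, dj in window if (di, dj) != (0, 0)):
                e_sum = sum(et[i + di][j + dj] for di, dj in window)
                is_tau = all(active[i + di][j + dj] <= 2 for di, dj in window)
                jets.append((i, j, e_sum, is_tau))
    return jets
```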
The Jet/Summary card receives the 10-bit 4x4 trigger tower<br />
ET sums, 1 overflow bit and the active trigger tower<br />
counts from all Receiver cards in the crate, covering all<br />
of the 14 4x4 regions served by the crate. These data are<br />
multiplexed for transmission at 80 MHz to the Cluster<br />
crate for finding jets and τ candidates.<br />
The Jet/Summary card processes the 2-bit ECAL and<br />
HCAL activity counts for each of the 14 regions covered<br />
by the crate. If the trigger tower activity counts from<br />
ECAL or HCAL are greater than two, the 4x4 region τ<br />
veto bit is set ON. There is enough room on the card to<br />
implement this algorithm in discrete logic components.<br />
The logic is used at least twice per crossing to determine<br />
τ veto bits for all 14 regions handled by this card.<br />
The eighteen regional trigger crates and the single HF<br />
crate send 4x4 trigger tower ET sums to a single Cluster<br />
crate where 12x12 overlapping ET sums are calculated to<br />
form jet and τ candidates. As shown in Figure 4, the<br />
Cluster crate consists of 9 Cluster Processor cards each<br />
receiving data from two regional crates and one HF crate<br />
on six 34-pair cables. The data from two regional crates,<br />
covering |η|
The four highest energy central and forward jets, and<br />
central τs in the calorimeter are selected. This choice of<br />
the four highest energy central and forward jets and of the<br />
four highest energy τs provides enough flexibility for the<br />
definition of combined triggers.<br />
In addition, counters of the number of jets above<br />
programmable thresholds in various η regions are<br />
provided to give the possibility of triggering on events<br />
with a large number of low energy jets. Jets in the forward<br />
and backward HF calorimeters are sorted and counted<br />
separately. This separation is a safety measure to prevent<br />
the more background-susceptible high-η region from<br />
masking central jets. Although the central and forward jets<br />
are sorted and tracked separately through the trigger<br />
system, the global trigger can use them seamlessly as the<br />
same algorithm and resolutions are used for the entire η−φ<br />
plane.<br />
Another possibility of performing the sliding window<br />
jet cluster algorithm is to use the Global Calorimeter<br />
Trigger hardware [4]. In this option, the GCT receives 14<br />
subregion energies from the Jet/Summary Card in each<br />
RCT barrel / endcap crate, along with the τ feature bits.<br />
The subregion energy data from the HF is also input. A<br />
clustering algorithm based on a 3 x 3-subregion sliding<br />
window is employed to find jets over the full range. The<br />
jets found by this procedure are then sorted as in the<br />
baseline design in three streams: central jets, forward jets<br />
and τ-jets.<br />
In either realization, single, double, triple and quad jet<br />
(τ) triggers are possible. The single jet (τ) trigger is<br />
defined by the transverse energy threshold, the (η,φ)<br />
region of validity and possibly by a prescaling factor.<br />
Prescaling will be used for low energy jet (τ) triggers,<br />
which are necessary for efficiency measurements.<br />
The multi jet (τ) triggers are defined by the number of<br />
jets (τs) and their transverse energy thresholds, by a<br />
minimum separation in (η,φ), as well as by a prescaling<br />
factor. The global trigger accepts the definition, in<br />
parallel, of different multi jet (τ) trigger conditions.<br />
4. JET AND τ-TRIGGER PERFORMANCE<br />
The jet trigger efficiency turn-on versus the generator<br />
level jet pT for the location matched jets is shown in<br />
Figure 5 for single, double, triple and quadruple jet<br />
events. The plots are made from a full GEANT-based<br />
detailed simulation of the CMS detector and trigger logic.<br />
The jet trigger integrated rate is plotted versus the<br />
corrected L1 jet E T for single, double, triple and<br />
quadruple jet events in Figure 6. For the multi-jet triggers<br />
all the trigger jets are required to be anywhere in |η|
Figure 6. Jet trigger rates for single, double, triple and<br />
quadruple jet triggers.<br />
Figure 7. Low Luminosity single and double jet and tau<br />
trigger rates.<br />
Figure 8. High Luminosity Single and double jet and<br />
tau trigger rates.<br />
6. REFERENCES<br />
[1] CMS, The TRIDAS Project Technical Design Report,<br />
Volume 1: The Trigger Systems, CERN/LHCC 2000-38,<br />
CMS TDR 6.1.<br />
[2] J. Lackey et al., CMS Calorimeter Level 1 Regional<br />
Trigger Conceptual Design, CMS NOTE-1998/074<br />
(1998).<br />
[3] W.H. Smith et al., CMS Calorimeter Regional<br />
Calorimeter Trigger High Speed ASICs, in Proceedings<br />
of the Sixth Workshop on Electronics for LHC<br />
Experiments, Cracow, Poland, September, 2000<br />
[4] J. J. Brooke et al., An FPGA Implementation of the<br />
CMS Global Calorimeter, in Proceedings of the Sixth<br />
Workshop on Electronics for LHC Experiments, Cracow,<br />
Poland, September, 2000
The Track-Finding Processor for the Level-1 Trigger of the CMS Endcap Muon System<br />
D. Acosta, A. Madorsky (Madorsky@phys.ufl.edu), B. Scurlock, S.M. Wang<br />
University of Florida<br />
A. Atamanchuk, V. Golovtsov, B. Razmyslovich, L. Uvarov<br />
St. Petersburg Nuclear Physics Institute<br />
Abstract<br />
We report on the development and test of a prototype<br />
track-finding processor for the Level-1 trigger of the CMS<br />
endcap muon system. The processor links track segments<br />
identified in the cathode strip chambers of the endcap muon<br />
system into complete three-dimensional tracks, and measures<br />
the transverse momentum of the best track candidates from<br />
the sagitta induced by the magnetic bending. The algorithms<br />
are implemented using SRAM and Xilinx Virtex FPGAs, and<br />
the measured latency is 15 clocks. We also report on the<br />
design of the pre-production prototype, which achieves<br />
further latency and size reduction using state-of-the-art<br />
technology.<br />
I. INTRODUCTION<br />
The endcap muon system of CMS consists of four stations<br />
of cathode strip chambers (CSCs) on each end of the<br />
experiment. The coverage in pseudo-rapidity (η) is from 0.9<br />
to 2.4. A single station of the muon system is composed of<br />
six layers of CSC chambers, where a single layer has cathode<br />
strips aligned radially (from the beam axis) and anode wires<br />
aligned in the orthogonal direction. The CSC chambers are<br />
trapezoidal in shape with a 10° or 20° angular extent in<br />
azimuth (ϕ). The CSC chambers are fast (60 ns drift-time)<br />
and participate in the Level-1 trigger of CMS.<br />
A “Local Charged Track” (LCT) forms the most primitive<br />
trigger object of the Endcap muon system. Both cathode and<br />
anode front-end LCT trigger cards search for valid patterns<br />
from the six wire and strip planes of the CSC chamber. The<br />
anode data provide precise timing information as well as η<br />
information, and the cathode data provide precise ϕ<br />
information. A motherboard on the chamber collects the LCT<br />
information, associates the wire data to the cathode data, tags<br />
the bunch crossing time, and selects the two best candidates<br />
from each chamber. The end result is a three-dimensional<br />
vector, encoded as a bit pattern, which corresponds to a track<br />
segment in that muon station. It is transmitted via optical<br />
links to the counting house of CMS. To reduce the number<br />
of optical connections, only the three best track segments are<br />
sent from nine chambers (18 track segments).<br />
The Track-Finder must reconstruct muons from track<br />
segments received from the endcap muon system, measure<br />
their momenta using the fringe field of the central 4 T<br />
solenoid, and report the results to the first level of the trigger<br />
system (Level-1). This objective is complicated by the non-uniform<br />
magnetic field in the CMS endcap and by the high<br />
background rates; consequently, the design must incorporate<br />
full 3-dimensional information into the track-finding and<br />
measurement procedures.<br />
The experimental goal of the Track-Finder is to efficiently<br />
identify muons with as low a threshold in transverse<br />
momentum (PT) as possible in order to meet the rate<br />
requirement of the Level-1 Trigger of CMS. This translates<br />
into a single muon trigger rate which does not exceed about 1<br />
kHz per unit rapidity at the full luminosity of the LHC. The<br />
resolution on PT, therefore, should be less than about 30% at<br />
least, which requires measurements of the ϕ and η<br />
coordinates of the track from at least three stations.<br />
II. TRACK-FINDER LOGIC<br />
The reconstruction of complete tracks from individual<br />
track segments is partitioned into several steps to minimize<br />
the logic and memory size of the Track-Finder [1]. The steps<br />
are pipelined and the trigger logic is deadtime-less.<br />
First, nearly all possible pairwise combinations of track<br />
segments are tested for consistency with a single track. That<br />
is, each track segment is extrapolated to another station and<br />
compared to other track segments in that station. Successful<br />
extrapolations yield tracks composed of two segments, which<br />
is the minimum necessary to form a trigger. The process is<br />
not complete, however, since the Track-Finder must report<br />
the number of distinct muons to the Level-1 trigger. A muon<br />
that traverses all four muon stations and registers four track<br />
segments would yield six track “doublets.” Thus, the next<br />
step is to assemble complete tracks from the extrapolation<br />
results and cancel redundant shorter tracks. Finally, the best<br />
three muons are selected, and the track parameters are<br />
measured.
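The doublet count quoted above is simple combinatorics, as a quick check shows:<br />

```python
# Worked check of the doublet count: n track segments on one muon admit
# n*(n-1)/2 pairwise extrapolations, so four segments yield six "doublets".
from math import comb

def doublets(n_segments: int) -> int:
    """Number of pairwise segment combinations for one muon."""
    return comb(n_segments, 2)
```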
A. Extrapolation<br />
A single Extrapolation Unit forms the core of the Track-<br />
Finder trigger logic. It takes the three-dimensional spatial<br />
information from two track segments in different stations,<br />
and tests if those two segments are compatible with a muon<br />
originating from the nominal collision vertex with a curvature<br />
consistent with the magnetic bending in that region.<br />
All possible extrapolation pairs are tested in parallel to<br />
minimize the trigger latency. This corresponds to 81<br />
combinations for the 15 track segments of the endcap region.<br />
However, we have excluded direct extrapolations from the<br />
first to fourth muon station in order to reduce the number of<br />
combinations to 63. This prohibits triggers involving hits in<br />
only those stations, but saves logic and reduces some random<br />
coincidences (since those chambers are expected to have the<br />
highest rates). It also facilitates track assembly based on<br />
“key stations,” which is explained in the next section.<br />
B. Track Assembly<br />
The track assembly stage of the Track-Finder logic<br />
examines the results of the extrapolations and determines if<br />
any track segment pairs belong to the same muon. If so,<br />
those segments are combined and a code is assigned to denote<br />
which muon stations are involved. The underlying feature of<br />
the track-assembly is the concept of a “key station.” For this<br />
design, the second and third muon stations are key stations. A<br />
valid trigger in the endcap region must have a hit in one of<br />
those two stations. The second station is actually used twice:<br />
once for the endcap region and once for the region of overlap<br />
with the barrel muon system, so there are a total of three data<br />
streams. The track assembler units output a quality word for<br />
the best track for each hit in the key stations.<br />
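The key-station requirement can be sketched as follows; the station-code word (one bit per station) is an illustrative encoding, not the actual hardware format:<br />

```cpp
#include <cassert>

// Illustrative station-code word: one bit per muon station (bit 0 = station 1).
unsigned stationCode(bool st1, bool st2, bool st3, bool st4) {
    return (st1 ? 1u : 0u) | (st2 ? 2u : 0u) | (st3 ? 4u : 0u) | (st4 ? 8u : 0u);
}

// A candidate is kept by the endcap track assembler only if it includes
// a segment in a key station (station 2 or station 3).
bool passesKeyStation(unsigned code) {
    return (code & (2u | 4u)) != 0u;
}
```

A track with hits only in stations 1 and 4 fails the requirement, consistent with the exclusion of direct first-to-fourth extrapolations in the previous section.<br />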
C. Final Selection<br />
The final selection logic combines the nine best<br />
assembled tracks, cancels redundant tracks, and selects the<br />
three best distinct tracks. For example, a muon which leaves<br />
track segments in all four endcap stations will be identified in<br />
both track assembler streams of the endcap since it has a<br />
track segment in each key station. The Final Selection Unit<br />
must interrogate the track segment labels from each<br />
combination of tracks from the two streams to determine<br />
whether one or more track segments are in common. If the<br />
number of common segments exceeds a preset threshold, the<br />
two tracks are considered identical and one is cancelled.<br />
Thus, the Final Selection Unit is a sorter with cancellation<br />
logic.<br />
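The cancellation logic can be sketched as below; segment labels are modelled here as one integer per station (0 = no segment), which is an illustrative representation of the track segment labels mentioned above:<br />

```cpp
#include <array>
#include <cassert>

// A track carries one segment label per muon station (0 = no segment).
using Track = std::array<int, 4>;

// Count segments shared by two assembled tracks.
int commonSegments(const Track& a, const Track& b) {
    int n = 0;
    for (int s = 0; s < 4; ++s)
        if (a[s] != 0 && a[s] == b[s]) ++n;
    return n;
}

// Two tracks are considered identical (and one is cancelled) when the
// number of shared segments exceeds a preset threshold.
bool isRedundant(const Track& a, const Track& b, int threshold) {
    return commonSegments(a, b) > threshold;
}
```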
D. Measurement<br />
The final stage of processing in the Track-Finder is the<br />
measurement of the track parameters, which includes the ϕ<br />
and η coordinates of the muon, the magnitude of the<br />
transverse momentum PT , the sign of the muon, and an<br />
overall quality which we interpret as the uncertainty of the<br />
momentum measurement. The most important quantity to<br />
calculate accurately is the muon PT , as this quantity has a<br />
direct impact on the trigger rate and on the efficiency.<br />
Simulations have shown that the accuracy of the momentum<br />
measurement in the endcap using the displacement in ϕ<br />
measured between two stations is about 30% at low<br />
momenta, when the first station is included. (It is worse than<br />
70% without the first station.) We would like to improve this<br />
so as to have better control on the overall muon trigger rate,<br />
and the most promising technique is to use the ϕ information<br />
from three stations when it is available. This should improve<br />
the resolution to 20% or better at low momenta, which is<br />
sufficient.<br />
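The PT assignment is performed with lookup tables (see Sec. III); a toy version indexed by the quantized ϕ displacement might look like the following. The bin edges are purely illustrative, not the CMS calibration:<br />

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Toy PT-assignment lookup: quantize the azimuthal displacement dphi
// measured between two stations into a PT code, mimicking an SRAM lookup
// table. A larger bend means a lower PT, so the code rises as |dphi|
// shrinks. Bin edges are illustrative only.
int ptCode(double dphiRad) {
    const std::vector<double> edges = {0.050, 0.020, 0.010, 0.005}; // rad
    int code = 0;
    for (double e : edges)
        if (std::fabs(dphiRad) < e) ++code;  // code 0 (lowest PT) .. 4 (highest)
    return code;
}
```

Using a second displacement from a third station, when available, would index a finer-grained table; this is the idea behind the improvement from 30% to 20% resolution.<br />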
III. FIRST PROTOTYPE SYSTEM ARCHITECTURE<br />
The Track-Finder is implemented as 12 “Sector<br />
Processors” that identify up to the three best muons in 60°<br />
azimuthal sectors. Each Processor is a 9U VME card housed<br />
in a crate in the counting house of CMS. Three receiver<br />
cards [3] collect the optical signals from the CSC chambers<br />
of that sector and transmit data to the Sector Processor via a<br />
custom point-to-point backplane. A maximum of six track<br />
segments are sent from the first muon station in that sector,<br />
and three each from the remaining three stations. In addition,<br />
up to eight track segments from chambers at the ends of the<br />
barrel muon system are propagated to a transition board in the<br />
back of the crate and delivered to each Sector Processor as<br />
well.<br />
A total of nearly 600 bits of information are delivered to<br />
each Sector Processor at the beam crossing frequency of 40<br />
MHz (3 GB/s). To reduce the number of connections, LVDS<br />
Channel Link transmitters/receivers from National<br />
Semiconductor [2] were used to compress the data by about a<br />
factor of three through serialization/de-serialization. A<br />
custom point-to-point backplane operating at 280 MHz is<br />
used for passing data to the Sector Processor.<br />
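The quoted aggregate bandwidth follows directly from the input width and the crossing frequency; a one-line check:<br />

```cpp
#include <cassert>

// Aggregate input bandwidth to one Sector Processor:
// bits per bunch crossing times the crossing frequency.
long long inputBitsPerSecond(int bitsPerCrossing, long long crossingHz) {
    return static_cast<long long>(bitsPerCrossing) * crossingHz;
}
```

600 bits × 40 MHz = 24 Gb/s = 3 GB/s per Sector Processor.<br />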
Each Sector Processor measures the track parameters (PT,<br />
ϕ, η, sign, and quality) of up to the three best muons and<br />
transmits 60 bits through a connector on the front panel. A<br />
sorting processor accepts the 36 muon candidates from the 12<br />
Sector Processors and selects the best 4 for transmission to<br />
the Global Level-1 Trigger.<br />
A prototype Sector Processor was built using 15 large<br />
Xilinx Virtex FPGAs, ranging from XCV50 to XCV400, to<br />
implement the track-finding algorithm, one XCV50 as VME<br />
interface and one XCV50 as an output FIFO (Fig. 1).<br />
The configuration of the FPGAs, including the VME<br />
interface, was done via a fast VME-to-JTAG module,<br />
implemented on the same board. This module takes<br />
advantage of the VME parallel data transmission and reduces
the configuration time to 6 seconds, compared with ~6<br />
minutes using a standard Xilinx Parallel III cable.<br />
The following software modules were written to support<br />
testing and debugging:<br />
• Standalone version of the C++ model for Windows<br />
• Module for the comparison of the C++ model with the<br />
board’s output<br />
• JTAG configuration routine, controlling the fast VME-to-JTAG<br />
module of the board<br />
• Lookup configuration routine, used to write and check<br />
the on-board lookup memory<br />
• Board Configuration Database with a Graphical User<br />
Interface (GUI), that keeps track of many configuration<br />
variants and provides a one-click selection of any one of<br />
them. Each variant contains the complete information for<br />
FPGA and lookup memory configuration.<br />
All software was written in portable C++ or C to simplify<br />
porting to other operating systems. The Board<br />
Configuration Database is written in Java, as this is the<br />
simplest way to write a portable GUI. All software can and<br />
will be used for the second (pre-production) prototype<br />
debugging and testing.<br />
The first prototype was completely debugged and tested.<br />
Simulated input data as well as random numbers were<br />
transmitted over the custom backplane to this prototype, and<br />
the results were read from the output FIFO. These results<br />
were compared with a C++ model, and 100% matching was<br />
demonstrated. The latency from the input of the Sector<br />
Receivers [3] (not including the optical link latency) to the<br />
output of the Sector Processor is 21 clocks, 15 of which are<br />
used by Sector Processor logic.<br />
Figure 2 shows the test stand used for testing and<br />
debugging the first prototype.<br />
IV. SECOND (PRE-PRODUCTION) PROTOTYPE<br />
SYSTEM ARCHITECTURE<br />
Recent dramatic improvements in programmable logic<br />
density [4] allow all the Sector Processor logic to be implemented<br />
in a single FPGA. Additionally, the optical link components have<br />
become smaller and faster. All this allows combining the three<br />
Sector Receivers and one Sector Processor of the first<br />
prototype onto one board. This board will accept 15 optical<br />
links from the Muon Port Cards [3]; each link carries the<br />
information about one muon track segment. Additionally, the<br />
board receives up to 8 muon track segments from the Barrel<br />
Muon System via a custom backplane.<br />
Since the track segment information arrives from 15<br />
different optical links, it has to be synchronized to the<br />
common clock phase. Also, because the optical link’s<br />
deserialization time can vary from link to link, the input data<br />
must be aligned to the proper bunch crossing number.<br />
Next, the track segment information received from the<br />
optical links is processed using lookup tables to convert the<br />
Cathode LCT pattern number, sign, quality and wire-group<br />
number into the angular values describing this track segment.<br />
The angular information about all track segments is fed to a<br />
large FPGA, which contains the entire 3-dimensional Sector<br />
Processor algorithm. On the first prototype this algorithm<br />
occupied 15 FPGAs.<br />
The output of the Sector Processor FPGA is sent to the PT<br />
assignment lookup tables, and the results of the PT<br />
assignment for the three best muons are sent via the custom<br />
backplane to the Muon Sorter.<br />
In the second (pre-production) prototype Track-Finder<br />
system we stopped using Channel Links for the backplane<br />
transmission because of their long latency, and moved to the<br />
GTLP backplane technology. This allows transmitting the<br />
data point-to-point (from Sector Processor to Muon Sorter) at<br />
80 MHz, with no time penalty for serialization since the most<br />
relevant portions of data are sent in the first frame. The data<br />
in the second frame are not needed for immediate calculation,<br />
so they do not delay the Muon Sorter processing.<br />
The entire second (pre-production) prototype Track-<br />
Finder system will fit into one 9U VME crate (Fig. 3).<br />
V. SECTOR PROCESSOR ALGORITHM AND C++<br />
MODEL MODIFICATIONS<br />
The Sector Processor algorithm was significantly<br />
modified to fit into one chip and to reduce latency. A<br />
comparison of the old and new algorithms is shown in Fig. 4.<br />
In particular, the following modifications were made:<br />
• The algorithms of the extrapolation and final selection<br />
units are reworked, and now each of them is completed<br />
in only one clock.<br />
• The Track Assembler Units in the first prototype were<br />
implemented as external lookup tables (static memory).<br />
For the second prototype, they are implemented as FPGA<br />
logic. This saved I/O pins on the FPGA and one clock of<br />
latency.<br />
• The preliminary calculations for the PT assignment are<br />
done in parallel with the final selection for all 9 muons, so<br />
that when the best three of the nine muons are selected, the<br />
precalculated values are immediately sent to the external PT<br />
assignment lookup tables.<br />
These changes reduced the latency of the Sector<br />
Processor algorithm (FPGA plus PT assignment memory)
from 15 clocks in the first prototype down to 5 clocks (125 ns).<br />
The current version of the Sector Processor FPGA is<br />
written entirely in Verilog HDL. The core code is portable; it<br />
does not contain any architecture-specific library elements. It<br />
is completely debugged with the Xilinx simulator in timing<br />
mode, and its functionality exactly matches the C++ model.<br />
During the construction and debugging of the first<br />
prototype, we encountered many problems related to the<br />
correspondence between the hardware and the C++ model. In<br />
particular, exact matching is sometimes very hard to guarantee,<br />
especially if the model uses the built-in C++ library<br />
components, such as lists and list-management routines.<br />
To eliminate these problems in the future, the C++ model<br />
was completely rewritten in strict line-by-line correspondence<br />
to the Verilog HDL code. All future modifications will be<br />
done simultaneously in the model and Verilog HDL code,<br />
keeping the correspondence intact.<br />
VI. SUMMARY<br />
The conceptual design of a Track-Finder for the Level-1<br />
trigger of the CMS endcap muon system is complete. The<br />
design is implemented as 12 identical processors, which<br />
cover the pseudo-rapidity interval 0.9 < η < 2.4. The track-finding<br />
algorithms are three-dimensional, which improves the<br />
background suppression. The PT measurement uses data<br />
from 3 endcap stations, when available, to improve the<br />
resolution to 20%. The latency is expected to be 7 bunch<br />
crossings (not including the optical link latency). The design<br />
is implemented using Xilinx Virtex FPGAs and SRAM lookup<br />
tables and is fully programmable. The first prototype was<br />
successfully built and tested; the pre-production prototype is<br />
under construction now.<br />
VII. ACKNOWLEDGEMENTS<br />
This work was supported by grants from the US<br />
Department of Energy. We also would like to acknowledge<br />
the efforts of R. Cousins, J. Hauser, J. Mumford, V. Sedov,<br />
B. Tannenbaum, who developed the first Sector Receiver<br />
prototype, and the efforts of M. Matveev and P. Padley, who<br />
developed the Muon Port Card and Clock and Control Board,<br />
which were used in the tests.<br />
VIII. REFERENCES<br />
[1] D. Acosta et al., “The Track-Finding Processor for the<br />
Level-1 Trigger of the CMS Endcap Muon System,”<br />
Proceedings of the LEB 1999 Workshop, CERN 99-09,<br />
p. 318.<br />
[2] National Semiconductor, DS90CR285/286 datasheet.<br />
[3] CMS Level-1 Trigger Technical Design Report,<br />
CERN/LHCC 2000-038<br />
[4] Xilinx Inc., www.xilinx.com<br />
Figure 1: The first prototype of the Sector Processor.<br />
Figure 2: Prototype test crate.
Figure 3: Second Prototype Track-Finder Crate.<br />
Figure 4: Comparison of the first and pre-production prototypes (data flow and per-stage latency: 21 clocks total for the first prototype vs. 7 for the pre-production prototype).<br />
The Sector Logic demonstrator<br />
of the Level-1 Muon Barrel Trigger<br />
of the ATLAS Experiment<br />
V. Bocci, A. Di Mattia, E. Petrolo, A. Salamon, R. Vari, S. Veneziano<br />
Abstract<br />
The ATLAS Barrel Level-1 muon trigger processes<br />
hit information from the RPC detector, identifying candidate<br />
muon tracks and assigning them to a programmable pT range<br />
and to a unique bunch crossing number.<br />
The on-detector electronics reduces the information<br />
from about 350k channels to about 400 32-bit data words sent<br />
via optical fiber to the so-called Sector Logic boards.<br />
The design and performance of the Sector Logic<br />
demonstrator, based on commercial and custom modules and<br />
firmware, are presented, together with functionality and<br />
integration tests.<br />
I. THE ATLAS FIRST LEVEL MUON TRIGGER IN THE BARREL<br />
The ATLAS first level muon trigger in the barrel is<br />
based on fast geometric coincidences between three different<br />
planes of RPC trigger stations in the muon spectrometer. The<br />
trigger chambers are arranged in three different shells<br />
concentric with the beam line; each chamber has four planes<br />
of strips (two along η and two along φ) in order to reduce the<br />
fake trigger rate due to the cavern background. For the same<br />
reason the algorithm is performed both in η and in φ.<br />
The trigger algorithm is executed for two separate pT thresholds.<br />
• For the low-pT threshold, if a strip is hit in the pivot<br />
plane (RPC2), the trigger processor searches for a hit in<br />
the first trigger chamber plane (RPC1), looking inside a<br />
window whose axis lies on the line that connects the hit<br />
point in RPC2 with the nominal interaction point, whose<br />
vertex is on the RPC2 plane and whose opening gives the<br />
cut on pT. A valid trigger is generated if RPC1 and RPC2<br />
are hit in coincidence.<br />
• For the high-pT threshold, if a valid trigger has been<br />
generated by the low-pT algorithm, the processor searches<br />
for a hit in the third plane (RPC3), requiring a<br />
coincidence between the trigger pattern given by the low-pT<br />
algorithm and the hit on the RPC3 trigger station.<br />
The schematic of the trigger principle is depicted in Figure 1.<br />
The trigger processor is composed of various modules.<br />
• The low-pT Coincidence Matrix ASICs (CMAs) are<br />
mounted on the RPC2 trigger chambers and perform the<br />
fast coincidence between the signals coming from RPC1<br />
and RPC2. Each CM η board covers a region ∆η x ∆φ =<br />
0.1 x 0.2, while the CM φ board covers a region ∆η x ∆φ<br />
= 0.2 x 0.1.<br />
INFN RM1 and University of Rome “La Sapienza”, INFN RM2<br />
andrea.salamon@roma2.infn.it<br />
Figure 1: Trigger principle for the ATLAS first level muon trigger in<br />
the barrel. The selection is performed using three dedicated trigger<br />
chambers. For the low-p T trigger, if a hit is found in RPC2, a hit is<br />
searched for in the RPC1 trigger station inside a road defined by the<br />
p T cut. The same algorithm is applied for the high-p T threshold using<br />
the low-p T trigger output and the RPC3 trigger station.<br />
• The low-p T Pad Logic Boards are mounted on RPC2<br />
and collect the data from four coincidence matrices from<br />
a region ∆η x ∆φ = 0.2 x 0.2. The low-p T Pad board<br />
generates the low-p T trigger information and sends it to<br />
the high-p T trigger boards.<br />
• The high-p T Coincidence Matrix ASICs are mounted<br />
on the RPC3 trigger chambers and perform the fast<br />
coincidence between the signals coming from the low-p T<br />
trigger and the RPC3 trigger station. The CM η board<br />
covers a region ∆η x ∆φ = 0.1 x 0.2, while the CM φ<br />
board covers a region ∆η x ∆φ = 0.2 x 0.1.<br />
• The high-p T Pad Logic Boards are mounted on RPC3<br />
and collect the data from the low-p T board and from four<br />
high-p T coincidence matrices from a region ∆η x ∆φ = 0.2 x<br />
0.2. The high-p T Pad board merges the bending (η) and<br />
non-bending (φ) views, selects the muon with the highest<br />
threshold, associates the muon with a Region of Interest<br />
∆η x ∆φ = 0.1 x 0.1 and with a unique bunch crossing<br />
number. Trigger information is sent to the Sector Logic<br />
board.<br />
• The Sector Logic boards are located in the<br />
underground counting room. Each Sector Logic board<br />
covers a region ∆η x ∆φ = 1.0 x 0.4, receives data from
up to 7 high-p T pad logic boards and from 32 Tile<br />
Calorimeter trigger towers. The output of the Sector<br />
Logic board is sent to the Muon Central Trigger<br />
Processor Interface (MUCTPI).<br />
• The MUCTPI elaborates the data from the Sector<br />
Logic boards and sends its output to the Central Trigger<br />
Processor.<br />
Figure 2 reports a trigger slice from the RPC Front End<br />
electronics to the Muon Central Trigger Processor Interface<br />
and to the Read Out Buffer.<br />
Figure 2: Trigger slice from the RPC Front End electronics to the<br />
Muon Central Trigger Processor Interface and to the Read Out<br />
Buffers.<br />
II. SECTOR LOGIC FUNCTIONS<br />
Each Sector Logic board collects information from<br />
up to 7 high-pT Pad logic boards and from 32 Tile Calorimeter<br />
trigger towers. 64 Sector Logic boards are foreseen in the first<br />
level muon trigger in the barrel. Each Sector Logic board<br />
performs various functions.<br />
• Checks the correct timing of the input data. If a<br />
problem is found, a flag is sent to the MUCTPI.<br />
• Performs the Tile Calorimeter coincidence. A muon<br />
candidate is accepted only if an energy deposit is found in<br />
a region of the Tile Calorimeter associated to the region<br />
of the muon spectrometer in which the muon candidate<br />
has been detected. This option will be used in case of<br />
high background levels and is fully programmable.<br />
• Performs the low-pT filter. For each low-pT muon<br />
coming from one of the input Pads, it checks whether a hit is<br />
found in the RPC3 trigger station. This check is<br />
performed at the sector level. This option is also to be<br />
used in case of high background levels and is fully<br />
programmable.<br />
• Solves η overlap between different Pads inside the<br />
sector and flags all the muons crossing a region<br />
overlapping with a neighbouring sector (this overlap is<br />
solved by the MUCTPI).<br />
• Selects the two muons with the two highest thresholds<br />
in the sector and associates each muon with a Region of<br />
Interest ∆η x ∆φ = 0.1 x 0.1 and with a unique bunch<br />
crossing number. If more than two muon candidates are<br />
found, the Sector Logic flags this condition to the Muon<br />
Central Trigger Processor Interface.<br />
III. SECTOR LOGIC HARDWARE IMPLEMENTATION<br />
The Sector Logic is implemented with a pipeline<br />
processor working synchronously with the 40 MHz LHC<br />
clock.<br />
Each Sector Logic chip receives the input from up to<br />
8 Pad Logic boards (8 x 12 bit @ 40 MHz) and from 32 Tile<br />
Calorimeter trigger towers (32 bit @ 40 MHz) and sends the<br />
output synchronously to the Muon Central Trigger Processor<br />
Interface (32 bit @ 40 MHz). A spare Pad input has been<br />
added beyond the maximum of 7 Pad inputs foreseen at the<br />
current time. The low-pT filter and the Tile Calorimeter<br />
coincidence are fully programmable depending on the content<br />
of four configuration registers.<br />
Sector Logic processing is performed in five pipeline<br />
steps. Each pipeline block consists of an input D flip-flop<br />
register (containing the data from up to 8 Pads and from 32<br />
Tile Calorimeter trigger towers and the current result of the<br />
trigger algorithm) followed by the combinatorial logic<br />
implementing the desired functions on the current data. A<br />
block diagram of the Sector Logic chip architecture is reported<br />
in Figure 3.<br />
Figure 3: Block diagram of the architecture of the Sector Logic chip.<br />
The Sector Logic algorithm is implemented with a five-step<br />
pipeline, each step of the pipeline implementing one task of the<br />
algorithm.<br />
We implemented the Sector Logic functionalities in<br />
an FPGA. We used the EPF10K130E-2 of the ALTERA<br />
FLEX 10KE family as target device.<br />
A. Tile Calorimeter confirmation<br />
The first step of the Sector Logic pipeline is the Tile<br />
Calorimeter confirmation block. This block maps the input<br />
coming from the pads to the input from the Tile Cal. A muon<br />
candidate is accepted only if a corresponding track is found in<br />
the expected region of the Tile Calorimeter. This option is<br />
fully programmable for each threshold and thus can also be<br />
disabled.<br />
The Tile Calorimeter check can be programmed with<br />
two arrays EnTCCh(0:7,1:6) and SetTCCh(0:7,1:6,0:31)<br />
stored in the Sector Logic configuration registers. The Tile<br />
Calorimeter coincidence is enabled for all the muons crossing<br />
the i-th Pad with the j-th threshold if EnTCCh(i,j) is set to 1.
In the same way, for each muon passing in the i-th Pad with<br />
the j-th threshold, an energy deposit is searched in the k-th<br />
Tile Calorimeter trigger tower if SetTCCh(i,j,k) is set to 1.<br />
Figure 4: Tile Calorimeter coincidence implementation. For a given<br />
Pad and a given threshold a muon candidate is accepted if one hit is<br />
found in one of the corresponding Tile Calorimeter trigger towers.<br />
The schematic of the Tile Calorimeter coincidence is<br />
reported in Figure 4. This coincidence is executed 48 times in<br />
parallel, once for each Pad and for each threshold.<br />
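A behavioural sketch of this coincidence, following the EnTCCh/SetTCCh programming model described above (the C++ structure itself is illustrative, not the register layout):<br />

```cpp
#include <bitset>
#include <cassert>

// Sketch of the Tile Calorimeter confirmation block, programmed through
// the EnTCCh(0:7,1:6) and SetTCCh(0:7,1:6,0:31) configuration arrays.
struct TileCalCoincidence {
    bool en[8][7] = {};               // EnTCCh(pad, threshold), thresholds 1..6
    std::bitset<32> set[8][7];        // SetTCCh(pad, threshold, tower)

    // Accept the muon in Pad i with threshold j, given the 32 tower hits.
    bool accept(int i, int j, const std::bitset<32>& towers) const {
        if (!en[i][j]) return true;            // coincidence disabled: pass
        return (set[i][j] & towers).any();     // need a deposit in a mapped tower
    }
};
```

The OPL low-pT filter of the next section follows the same pattern, with EnOPLCh(0:7,1:3) and SetOPLCh(0:7,1:3,0:7) in place of the Tile Calorimeter arrays.<br />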
B. Outer Plane (OPL) low-pT filter<br />
The second step of the Sector Logic pipeline<br />
performs a filter on low-pT muons. A muon with one of the<br />
three low thresholds is confirmed only if a hit is found in the<br />
outer plane of the spectrometer (RPC3 trigger chambers). This<br />
coincidence is performed at the sector level. This option is<br />
fully programmable for each threshold and can also be<br />
disabled.<br />
The OPL check can be programmed with two arrays<br />
EnOPLCh(0:7,1:3) and SetOPLCh(0:7,1:3,0:7) stored in the<br />
Sector Logic configuration registers. The OPL coincidence is<br />
enabled for all the muons crossing the i-th Pad with the j-th<br />
threshold if EnOPLCh(i,j) is set to 1. In the same way, for<br />
each muon passing in the i-th Pad with the j-th threshold, a<br />
hit is searched for in the k-th Pad of the RPC3 trigger station if<br />
SetOPLCh(i,j,k) is set to 1.<br />
Figure 5: Low-p T filter implementation. For a given Pad and a given<br />
low-p T threshold a muon candidate is accepted if one hit is found in<br />
one of the corresponding RPC3 Pads.<br />
The schematic of the low-pT filter implementation is<br />
reported in Figure 5. This coincidence is executed 24 times in<br />
parallel, once for each Pad and for each low-pT threshold.<br />
C. Solve η overlap<br />
The η overlap solving algorithm aims to avoid<br />
double counting a muon that passes through a region of overlap<br />
between two different chambers. The Sector Logic overlap solving<br />
algorithm looks at the data from the various Pads and, if it<br />
finds two neighbouring Pads in which the same track has<br />
passed, discards one of the two tracks.<br />
The η overlap solving algorithm is performed in<br />
various combinatorial steps:<br />
• a check on all the overlap bits is performed; if a pad<br />
with the overlap bit set to 1 and the threshold set to 0<br />
(anomalous condition) is found, the overlap bit is reset;<br />
• the overlap solving algorithm is performed first on<br />
neighbouring Pads 0 and 1, 2 and 3, 4 and 5, 6 and 7<br />
(even overlap solving);<br />
• the same solving algorithm is performed next on<br />
neighbouring Pads 1 and 2, 3 and 4, 5 and 6 (odd overlap<br />
solving).<br />
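These steps can be sketched as follows. The paper does not specify which of two overlapping candidates is discarded, so the choice here (keep the higher threshold, and the lower Pad number on ties, as in the 1st-track search below) is an assumption:<br />

```cpp
#include <array>
#include <cassert>

// Per-Pad trigger data relevant to the η overlap solving.
struct PadData {
    int  threshold = 0;     // 0 = no triggered track
    bool overlap   = false; // track crossed a chamber overlap region
};

// Sketch of the three combinatorial steps of the η overlap solver.
void solveEtaOverlap(std::array<PadData, 8>& pads) {
    // 1) reset anomalous overlap bits (overlap set but threshold 0)
    for (auto& p : pads)
        if (p.overlap && p.threshold == 0) p.overlap = false;

    auto solvePair = [&pads](int a, int b) {
        if (pads[a].overlap && pads[b].overlap) {   // same track seen twice
            int drop = (pads[a].threshold < pads[b].threshold) ? a : b;
            pads[drop] = PadData{};                 // discard one candidate
        }
    };
    for (int i = 0; i < 8; i += 2) solvePair(i, i + 1); // even: (0,1)(2,3)(4,5)(6,7)
    for (int i = 1; i < 7; i += 2) solvePair(i, i + 1); // odd: (1,2)(3,4)(5,6)
}
```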
D. Find 1 st track<br />
The fourth block of the pipeline selects the track with<br />
the highest threshold and associates trigger information with<br />
the muon candidate. If there is more than one triggered track<br />
with the same threshold, the track with the lower Pad number<br />
(lower η) is selected.<br />
The 1st highest-threshold selection algorithm is<br />
performed in various steps:<br />
• the highest threshold associated to each Pad is<br />
compared with the highest threshold associated to all the<br />
other Pads, and the outputs of these comparisons are ANDed.<br />
Eight bits, indicating whether each Pad’s triggered threshold is<br />
greater than or equal to the thresholds associated to all the<br />
other Pads, are produced at the output of this block.<br />
A schematic of the highest threshold selection<br />
algorithm is reported in Figure 6.<br />
Figure 6: 1st highest threshold search implementation. For each Pad<br />
the muon candidate threshold is compared with the thresholds of all<br />
the other Pads, and the comparison outputs are ANDed for each Pad.<br />
The output from the comparison block is sent in<br />
parallel to two different blocks:<br />
• the first block is composed of an encoder and a data<br />
filter which transmit to the next step of the pipeline all<br />
the Pads data except the data from the Pad corresponding<br />
to the highest threshold (to avoid double counting of the<br />
same highest threshold);<br />
• the second block is composed of a priority encoder<br />
and a selector which sends to the Sector Logic output<br />
pipeline the data corresponding to the Pad with the<br />
highest threshold.<br />
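A behavioural sketch of the comparison tree and priority encoder (one threshold value per Pad; the function name and representation are illustrative):<br />

```cpp
#include <array>
#include <cassert>

// Sketch of the 1st-track search: each Pad's threshold is compared with
// every other Pad's, the "greater or equal" results are ANDed per Pad,
// and a priority encoder picks the lowest Pad number (lowest η) on ties.
int findHighestPad(const std::array<int, 8>& thr) {
    std::array<bool, 8> geAll{};
    for (int i = 0; i < 8; ++i) {
        geAll[i] = (thr[i] > 0);              // need a triggered track at all
        for (int j = 0; j < 8; ++j)
            if (j != i && thr[i] < thr[j]) geAll[i] = false;
    }
    for (int i = 0; i < 8; ++i)               // priority encoder
        if (geAll[i]) return i;
    return -1;                                // no track in the sector
}
```

The 2nd-track search of the next section reruns the same logic after the winning Pad's data have been reset to 0.<br />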
E. Find 2 nd track<br />
This step of the Sector Logic pipeline is identical to<br />
the previous step, and is performed on the data from all the<br />
Pads except the Pad with the highest threshold (which has<br />
been reset to 0). The result of the search is synchronized with<br />
the output from the previous step and is written in the part of<br />
the SL output frame dedicated to the 2nd threshold. A flag is<br />
written in the output data pattern if more than two tracks were<br />
found in the sector.<br />
IV. THE SECTOR LOGIC DEMONSTRATOR<br />
F. The Multifunction Computing Core<br />
In order to demonstrate the functionalities of our<br />
implementation, we used the Multi Function Computing Core<br />
MFCC 8441, a commercial board from CES. This commercial<br />
hardware has proven to be a reliable and efficient solution<br />
for the purposes of our tests.<br />
The MFCC 8441 is a PCI Mezzanine Card which can<br />
be plugged onto a RIO2 VME board, also from CES.<br />
The MFCC board is composed of:<br />
• a POWER PC CPU running the Sector Logic C test<br />
control program;<br />
• a PPC-PCI bridge implemented with an Altera FPGA<br />
of the FLEX 10KE family;<br />
• on board SDRAM which can be used for demanding<br />
DAQ applications;<br />
• a Front-End FPGA which is fully user-programmable;<br />
the Sector Logic VHDL code is loaded in this FPGA; the<br />
VHDL code needed to interface the Front-End FPGA<br />
with the PPC bus, with the FE adaptor, with the SDRAM<br />
and with the EPROM is produced by CES;<br />
• a Front-End adaptor which must be designed for the<br />
specific application;<br />
• a flash EPROM storing the firmware for the PPC-PCI<br />
bridge and for the FE FPGA; the EPROM can be<br />
programmed with a dedicated download cable and from<br />
the on-board PPC.<br />
G. The test setup<br />
In order to use the Sector Logic VHDL code with the<br />
MFCC test board an interface with the CES VHDL code has<br />
been created. The architecture of this interface is depicted in<br />
Figure 7.<br />
The Sector Logic Core is interfaced with an Input<br />
Block consisting of 32-bit wide, 32-bit deep RAMs; with a<br />
Parameter Block storing the Sector Logic<br />
initialization registers and the control registers; and with an<br />
Output Block consisting of a 32-bit FIFO.<br />
Figure 7: High-level Sector Logic schematic with the interface to<br />
the test board. The Sector Logic core receives its input from the<br />
Input Block and its configuration parameters from the Parameters<br />
Block, and sends its output to the Output Block. All the registers in<br />
the Input, Parameters and Output Blocks are accessible via the MFCC<br />
Power PC bus.<br />
A test session is performed in the following steps:<br />
• the Sector Logic parameters are loaded in the<br />
initialization registers;<br />
• the inputs are loaded by the PPC in the 32-bit internal<br />
FIFOs;<br />
• a series of 40 MHz clock cycles is applied; it is also<br />
possible to perform an infinite loop on the data loaded in<br />
the input RAM (this feature has been used in the Sector<br />
Logic MUCTPI integration tests);<br />
• the data are elaborated by the FE FPGA which has<br />
been programmed with the Sector Logic demonstrator<br />
code;<br />
• the output data are sent both to the FE Adaptor and to<br />
the output FIFO; the data stored in the output FIFOs can<br />
be read by the PPC and analyzed and checked off-line; the<br />
data at the output of the FE Adaptor can be analyzed with<br />
a scope or sent to the MUCTPI.<br />
H. MFCC Front End adaptor card<br />
The Front-End Adaptor card has been designed to<br />
translate the 32 Sector Logic output bits from the FE FPGA to<br />
suitable LVDS logic levels to be sent along a 10-meter, 40<br />
MHz parallel cable to the MUCTPI.<br />
Figure 8 reports a photo of the MFCC with the<br />
MUCTPI interface card.<br />
Figure 8: Layout of the MFCC with the FE adaptor card translating<br />
the MFCC Sector Logic output to LVDS logic levels used to<br />
interface the Sector Logic demonstrator with the MUCTPI prototype.<br />
V. SECTOR LOGIC TESTS<br />
Two kinds of tests were performed to validate and<br />
test our design: off-line logic tests and integration tests with<br />
the MUCTPI prototype.<br />
I. Off-line logic tests<br />
The input data from Pads and from Tile Calorimeter<br />
trigger towers were loaded into the input RAM, processed, and<br />
read from the output FIFO. The output data were checked<br />
with a C++ Sector Logic simulation program. This C++ code<br />
will be inserted in the official first level muon trigger<br />
simulation program.<br />
J. Integration tests with the MUCTPI<br />
The Sector Logic demonstrator and the MUCTPI<br />
prototype were connected by a 10-metre cable carrying 32 LVDS<br />
signals. The Sector Logic and the MUCTPI<br />
run with the same 40 MHz LHC clock, but the two clock<br />
signals are not required to be in phase. All the Sector Logic<br />
output data are assumed to be in phase. The data are sampled<br />
on the falling edge of the MUCTPI clock signal.<br />
Three kinds of tests and measurements were<br />
performed.<br />
• Phase measurement. To allow correct<br />
sampling, the phase spread of the changing edges of the<br />
Sector Logic output data must be small. The phase spread<br />
of a given bit at the output of the MUCTPI was<br />
measured with a TDC, and an RMS of less than 1 ns was<br />
found.<br />
• Sampling-window measurement. After the phase-spread<br />
measurement, the width of the sampling window was<br />
measured. A fixed sequence of 27 input patterns was<br />
loaded into the input RAMs and an infinite test loop on the<br />
input data was started. The length of the pattern was<br />
chosen to be a submultiple of the number of BCs<br />
in an orbit. The data at the input of the MUCTPI are<br />
sampled at a given BC in the orbit, so the MUCTPI<br />
always receives the same known input data from the<br />
Sector Logic. The phase of the sampling edge was moved<br />
in 1 ns steps inside the 25 ns clock period corresponding<br />
to the chosen BC and the sampled data were checked. The<br />
width of the allowed sampling window was found to be<br />
19 ns out of 25 ns.<br />
• Data-integrity check. Correct data transmission was<br />
verified over an overnight run and no errors were found.<br />
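The sampling-window scan above can be expressed as a simple procedure. This is an illustrative sketch of the measurement logic, not the actual test software; the `data_valid_at_phase` predicate is a hypothetical stand-in for comparing sampled data against the known pattern.

```python
# Sketch of the sampling-window scan: the sampling phase is stepped in
# 1 ns increments across one 25 ns clock period and the data are checked
# at each phase; the width of the region where the known pattern is
# received correctly is the usable sampling window.

def sampling_window(data_valid_at_phase, period_ns=25, step_ns=1):
    """data_valid_at_phase(t) -> True if the sampled data match the known
    pattern when sampling t ns into the clock period."""
    good = [t for t in range(0, period_ns, step_ns) if data_valid_at_phase(t)]
    return len(good) * step_ns
```

For example, if the data are stable except for a few nanoseconds around each transition, a scan like this reports a window of 19 ns out of 25 ns, as was measured.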
VI. CONCLUSIONS<br />
The Sector Logic demonstrator has been developed<br />
using a commercial board, the PCI Mezzanine Card MFCC<br />
8441 from CES with dedicated firmware running on FPGA.<br />
A custom FE Adaptor Card has been designed to<br />
connect the Sector Logic demonstrator with the Muon<br />
Central Trigger Processor Interface.<br />
The design has been checked with the official Sector<br />
Logic C++ simulation program.<br />
Various kinds of tests were performed to validate the<br />
design. The performance of the Sector Logic demonstrator is<br />
adequate for 40 MHz operation and a maximum latency of<br />
125 ns.<br />
This work has proven that the use of commercial<br />
hardware is a valid solution during the first part of the<br />
development of custom boards, because it reduces the<br />
demonstrator development time and gives the designer good<br />
support during the test phase.<br />
The first Sector Logic VME board prototype is<br />
currently being designed on the basis of the present<br />
demonstrator.<br />
VII. ACKNOWLEDGMENTS<br />
We would like to thank F. Cidronelli and R. Lunadei<br />
(INFN RM1) for designing the layout of the FE Adaptor card<br />
and K. Nagano and R. Spiwoks (CERN) for the Sector Logic<br />
MUCTPI integration tests.<br />
VIII. REFERENCES<br />
V. Bocci, E. Petrolo, A. Salamon, R. Vari, S.<br />
Veneziano, Prototype Slice of the Level-1 Muon Trigger in<br />
the Barrel Region of the ATLAS Experiment, These<br />
Proceedings<br />
V. Bocci, G. Chiodi, E. Gennari, E. Petrolo, A.<br />
Salamon, R. Vari, S. Veneziano, Radiation test and<br />
application of FPGAs in the Atlas Level 1 Trigger, These<br />
Proceedings<br />
P. Farthouat, Interfaces and overlaps in the LVL1<br />
muon trigger system, Revision 4<br />
http://www.ces.ch
One Size Fits All: Multiple Uses of Common Modules in the ATLAS<br />
Level-1 Calorimeter Trigger<br />
G. Anagnostou, P. Bright-Thomas, J. Garvey, S. Hillier, G. Mahout, R. Staley,<br />
W. Stokes, S. Talbot, P. Watkins, A. Watson<br />
School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT, UK<br />
R. Achenbach, P. Hanke, W. Hinderer, D. Kaiser, E.-E. Kluge, K. Meier,<br />
U. Pfeiffer, K. Schmitt, C. Schumacher, B. Stelzer<br />
Kirchhoff-Institut für Physik, University of Heidelberg, D-69120 Heidelberg, Germany<br />
B. Bauss, K. Jakobs, C. Nöding, U. Schäfer, J. Thomas<br />
Institut für Physik, University of Mainz, D-55099 Mainz, Germany<br />
E. Eisenhandler, M.P.J. Landon, D. Mills, E. Moyse<br />
Physics Department, Queen Mary, University of London, London E1 4NS, UK<br />
P. Apostologlou, B.M. Barnett, I.P. Brawn, J. Edwards, C.N.P. Gee, A.R. Gillman,<br />
R. Hatley, K. Jayananda, V.J.O. Perera, A.A. Shah, T.P. Shah<br />
Rutherford Appleton Laboratory, Chilton, Didcot OX11 0QX, UK<br />
C. Bohm, S. Hellman, S.B. Silverstein<br />
Fysikum, University of Stockholm, SE-106 91 Stockholm, Sweden<br />
Abstract<br />
The architecture of the ATLAS Level-1 Calorimeter Trigger<br />
has been improved and simplified by using a common module<br />
to perform different functions that originally required three<br />
separate modules. The key is the use of FPGAs with multiple<br />
configurations, and the adoption by different subsystems of a<br />
common high-density custom crate backplane that equalises<br />
data-path widths and includes a minimal<br />
VMEbus. One module design can now be configured to count<br />
electron/photon and tau/hadron clusters, or count jets, or form<br />
missing and total transverse-energy sums and compare them to<br />
thresholds. In addition, operations are carried out at both crate<br />
and system levels by the same module design.<br />
I. INTRODUCTION<br />
The ATLAS Level-1 Calorimeter Trigger (figure 1) [1] uses<br />
reduced-granularity data from ~7200 ‘trigger towers’, 0.1×0.1<br />
in η–φ, covering all of the ATLAS electromagnetic and<br />
hadronic calorimeters. After digitisation and assignment of<br />
each pulse to the correct 25-ns bunch crossing in the<br />
Preprocessor subsystem, the trigger algorithms (figure 2) are<br />
executed in two parallel subsystems. (Corresponding author:<br />
Ian Brawn, i.p.brawn@rl.ac.uk.) The Cluster Processor<br />
(CP) finds and counts isolated electron/photon and tau/hadron<br />
clusters, while the Jet/Energy-sum Processor (JEP) finds and<br />
counts jets, as well as adding the total and missing transverse<br />
energy (ET). The JEP also has logic to trigger on jets in the<br />
forward calorimetry, and on approximate total ET in jets.<br />
Cluster Processor Modules (CPMs), each covering an area of<br />
∆φ=90˚ × ∆η~0.4, send the number of e/γ and tau/hadron<br />
clusters they have found, up to a maximum of seven (three<br />
bits), to two merger modules that sum cluster multiplicities.<br />
One merger module handles 8 electron/photon threshold sets<br />
(each set being a combination of cluster, e.m. isolation, and<br />
hadronic isolation ET), and the other handles 8 threshold sets<br />
that can each be programmed to be e/γ or tau/hadron. The<br />
maximum multiplicity for each threshold set is also seven. The<br />
multiplicity summing is in two stages: first for the 14 CPMs in<br />
each CP crate, and then for the four-crate CP subsystem. In<br />
the original design [2] these were Cluster Merger Modules,<br />
fed by cables from the CPMs to a separate crate. The final<br />
‘hit’ multiplicity results are sent to the Central Trigger<br />
Processor (CTP).
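The two-stage, saturating multiplicity summing described above can be sketched as follows. This is an illustrative model, not the CMM firmware; the flat lists of counts stand in for the 3-bit fields carried on the backplane and inter-crate cables.

```python
# Sketch of the two-stage multiplicity summing: CPM counts for one
# threshold set are summed per crate, then over the four crates, with
# every multiplicity saturating at seven (the maximum a 3-bit field holds).

def saturating_sum(counts, max_value=7):
    """Sum multiplicities, clipping to the 3-bit maximum of 7."""
    return min(sum(counts), max_value)

def system_multiplicity(crates):
    """crates: per-crate lists of CPM counts for one threshold set."""
    crate_sums = [saturating_sum(cpm_counts) for cpm_counts in crates]
    return saturating_sum(crate_sums)
```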
[Figure 1 diagram: calorimeters (LAr e.m., Tile/LAr hadronic) on the detector → analogue sums over ~7000 twisted-pair links → Receivers → Preprocessor (PPMs, in the trigger cavern: 10-bit FADC; FIFO, BCID; look-up table; 2×2 sum, BC-mux) → 9-bit jet elements to the JEP and 10-bit serial links at 400 Mbit/s (~10 m) to the processors.]<br />
Figure 1: Overall architecture of the ATLAS Calorimeter Trigger.<br />
Figure 2: Calorimeter trigger algorithms.<br />
performance, either on the individual JEM sums in each crate<br />
or on the inter-crate sums. The optimal scaling was to use the<br />
two scale bits to multiply by 1, 4, 16 or 64. At the same time,<br />
it was shown that FPGA code for summing the hit<br />
multiplicities, or for computing total and missing transverse<br />
energy, could be run in the same type of FPGAs. The<br />
multiplication needed for the energy scaling can be done by<br />
bit-shifting to keep the latency low. The JEP then has two<br />
CMMs in each crate: one for counting jet multiplicities, and<br />
one for summing transverse energy.<br />
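The bit-shift multiplication mentioned above is simple enough to show directly. This sketch illustrates the idea only, not the FPGA code itself: because the selectable multipliers are all powers of four, applying the scale is a left shift by twice the scale value.

```python
# The two scale bits select a multiplier of 1, 4, 16 or 64. Since these
# are powers of four, the multiplication reduces to a left shift by
# 2 * scale_bits, which keeps the latency low.

def apply_energy_scale(value, scale_bits):
    """scale_bits in 0..3 selects a multiplier of 4**scale_bits."""
    if not 0 <= scale_bits <= 3:
        raise ValueError("scale_bits must be a 2-bit value")
    return value << (2 * scale_bits)     # multiply by 1, 4, 16 or 64
```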
We thus end up with just one type of merger module, for both<br />
CP and JEP subsystems, and for both hit-counting and<br />
transverse-energy summing. Furthermore, this one module<br />
design contains both the crate-level and system-level logic.<br />
Which operations they carry out will be determined<br />
automatically by the crate and slot that they occupy.<br />
III. COMMON MERGER MODULE DESIGN<br />
A block diagram of the CMM is shown in figure 3. At the core<br />
of the design are the two blocks labeled Crate Merging Logic<br />
and System Merging Logic. These blocks contain all of the<br />
logic that is specific to one or more versions of the CMM. All<br />
of the other logic shown is common to all versions. The data<br />
widths shown are the maximum needed to implement all of the<br />
required versions of CMM.<br />
Each CMM receives data from the local crate via a maximum<br />
of 400 backplane links. These data are re-timed to the system<br />
clock and sent to the Crate Merging Logic. The data output<br />
from the Crate Merging Logic are sent to System Merging<br />
Logic, either on the same CMM (in the case of system-level<br />
CMMs) or on a remote CMM (in the case of crate-level<br />
CMMs). The transmission of these data between CMMs is<br />
performed using parallel LVDS cable links.<br />
On system-level CMMs, the System Merging Logic receives a<br />
maximum of 50 bits of data from the local Crate Merging<br />
Logic, and up to 75 bits of data from up to three remote<br />
crate-level CMMs. Data received from remote CMMs are re-timed<br />
to the board clock and data from the local crate merging logic<br />
are fed through a pipeline delay to compensate for any<br />
difference in the latency of the local and remote data paths.<br />
The results from the System Merging Logic are fed to the CTP<br />
via LVDS cable links. The System Merging logic on crate-level<br />
CMMs is redundant.<br />
The core of the CMM logic is implemented in two large<br />
FPGAs, labeled Crate FPGA and System FPGA. These<br />
implement the following logic:<br />
• Crate FPGA: Crate Merging Logic, Backplane Receiving<br />
Logic, Event Data Readout, Readout Control.<br />
• System FPGA: System Merging Logic, Cable Receiving<br />
Logic, Event Data Readout, RoI Data Readout.<br />
The main motivation for using FPGAs on the CMM is the<br />
flexibility they introduce into the design. By choosing two<br />
large devices rather than several smaller ones this flexibility is<br />
increased, as the number of hard-wired interconnections at<br />
board level is reduced. Both the Crate and System FPGAs are<br />
implemented with Xilinx XCV1000E devices. This device<br />
was chosen to meet the I/O requirement of the Crate FPGA<br />
and the RAM requirement of the System FPGA. It contains<br />
approximately 1.5 million gates including 96 blocks of 4kbit<br />
RAM. It is a fine-pitch ball-grid array package, with 660 pins<br />
of user-I/O.<br />
[Figure 3 diagram: backplane data from the CPMs or JEMs (400-bit CMOS, receive and re-time, parity check) feed the Crate Merging Logic; 75-bit LVDS cable inputs from up to three crate-level CMMs (receive and re-time, parity check) and 50 bits from the local Crate Merging Logic (via a pipeline delay) feed the System Merging Logic; Event Data and RoI Readout use dual-port RAMs, a playback FIFO, shift registers and a Readout Controller, with G-link serial links to the R-ROD and D-ROD; VME and TCM interfaces; outputs via a 50-bit cable transmitter to the system CMM and a 46-bit interface to the CTP.]<br />
Figure 3: A block diagram of the Common Merger Module.<br />
[Figure 4 diagram: 8×3-bit multiplicities from the cable receivers (CMM 1–3, parity checked) and from the local Crate Merging are combined in trees of 3-bit saturating adders with overflow; the results, with generated parity, go to the CTP interface, and parity-error counters and maps record transmission errors.]<br />
Figure 4: The system-merging logic of the e/γ System-level CMM.<br />
[Figure 5 diagram: ET, Ex and Ey sums from the cable receivers and the local Crate Merging are combined in 4-input saturating adders; a vector add-and-threshold look-up table forms the missing-ET result, ET/Ex/Ey pass to the RoI readout, and the thresholded results, with generated parity, go to the CTP interface.]<br />
Figure 5: The system-merging logic of the Energy System-level CMM.<br />
The ATLAS level-1 calorimeter trigger requires the CMM to<br />
perform a number of different functions (see table 1). For a<br />
CMM to implement a function the specific configuration files<br />
for that function must be loaded into the Crate and System<br />
FPGAs. On board every CMM are flash memories that house<br />
all configuration files, so that every CMM has the potential to<br />
perform any of the functions listed in table 1. On power up,<br />
the CMM automatically configures itself to perform one of<br />
these functions, determined by the geographical address of the<br />
module.<br />
CMM Module Types<br />
e/γ Crate-level CMM<br />
e/γ System-level CMM<br />
τ/hadron Crate-level CMM<br />
τ/hadron System-level CMM<br />
Jet Crate-level CMM<br />
Jet System-level CMM<br />
Energy Crate-level CMM<br />
Energy System-level CMM<br />
Table 1: CMM module types.<br />
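The self-configuration idea above can be sketched as a lookup from the module's position to one of the eight functions in table 1. The geographical-address encoding below is purely illustrative; the actual encoding is not specified here.

```python
# Sketch of power-up self-configuration: the crate and slot a CMM occupies
# select which of the flash-resident configuration files is loaded into
# the Crate and System FPGAs. The (crate_id, slot_role) keys are a
# hypothetical encoding of the geographical address.

CONFIGURATIONS = {
    ("CP-egamma", "crate"): "e/gamma Crate-level CMM",
    ("CP-egamma", "system"): "e/gamma System-level CMM",
    ("CP-tau", "crate"): "tau/hadron Crate-level CMM",
    ("CP-tau", "system"): "tau/hadron System-level CMM",
    ("JEP-jet", "crate"): "Jet Crate-level CMM",
    ("JEP-jet", "system"): "Jet System-level CMM",
    ("JEP-energy", "crate"): "Energy Crate-level CMM",
    ("JEP-energy", "system"): "Energy System-level CMM",
}

def select_configuration(crate_id, slot_role):
    """Pick the configuration this CMM loads, given its position."""
    return CONFIGURATIONS[(crate_id, slot_role)]
```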
Figures 4 and 5 show two examples of different logic designs<br />
that can be implemented in the System FPGA. Figure 4 shows<br />
the System Merging Logic required by the e/γ subsystem. This<br />
consists mainly of 7-bit adder trees which sum the e/γ<br />
multiplicities over all crates for each of 8 thresholds. Figure 5,<br />
on the other hand, shows the system-merging logic required to<br />
perform energy summation. Here the total ET, Ex and Ey<br />
values are formed by summation. A bank of look-up tables<br />
(LUTs) is then used to apply thresholds to these values to<br />
produce the number of total-ET and missing-ET hits. In all<br />
cases, the output from the System Merging logic is sent to the<br />
CTP.<br />
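The energy system-merging step can be sketched arithmetically. This is an illustrative model only: in the CMM the thresholding is done with look-up tables, and the threshold values here are invented for the example. Comparing squared quantities mirrors why a LUT is convenient in hardware, since it avoids a square root.

```python
# Sketch of energy system merging: total ET, Ex and Ey are formed by
# summation over the crates, then thresholds produce the total-ET and
# missing-ET hit bits (done via LUTs in the real CMM).

def merge_energy(crate_sums, et_thresholds, met_thresholds):
    """crate_sums: (ET, Ex, Ey) per crate; returns (ET hits, missing-ET hits)."""
    et = sum(s[0] for s in crate_sums)
    ex = sum(s[1] for s in crate_sums)
    ey = sum(s[2] for s in crate_sums)
    met_sq = ex * ex + ey * ey                  # squared missing ET
    et_hits = [et > t for t in et_thresholds]
    met_hits = [met_sq > t * t for t in met_thresholds]  # compare squares
    return et_hits, met_hits
```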
IV. COMMON BACKPLANE<br />
The use of the CMM in both the CP and JEP subsystems<br />
means that these subsystems require very similar backplanes.<br />
It can be seen from table 2 that, with the exception of speed,<br />
the requirements of the CP subsystem are a subset of those of<br />
the JEP subsystem. A backplane capable of hosting the JEP<br />
subsystem can therefore also be used to host the CP<br />
subsystem, provided the fan-in/out links between modules are<br />
capable of operating at 160 MHz. To take advantage of this,<br />
and rationalise the design of the Level-1 Calorimeter Trigger<br />
further, a common backplane has been designed for these two<br />
subsystems.<br />
The common backplane is 9U high (400.05 mm) and 84 HP<br />
wide (426.72 mm). It can accommodate up to 21 modules,<br />
comprising 16 JEMs or 14 CPMs, 2 CMMs,<br />
1 Timing Control Module (TCM) and 2 VME controllers.<br />
Most of the tracks on the backplane carry data fanned between<br />
neighbouring CPMs/JEMs, or data transferred from these<br />
modules to CMMs. There are also timing signals and a<br />
CANbus that is used to monitor temperatures and voltages<br />
within the crate.<br />
Due to the large number of signal tracks on the backplane it is<br />
not possible to accommodate a full VMEbus. Instead a custom<br />
VME bus, called VME--, is used. This allows only A24 D16<br />
VME cycles using a minimal set of VME lines: SYSRESET,<br />
A[23:1], D[15:0], DS0*, WRITE* and DTACK*. A custom<br />
adapter card is needed to provide the interface between the<br />
crate and a standard VME64 controller.<br />
CP subsystem crate:<br />
• 14 CPMs<br />
• CPM input from pre-processor = 80 serial links via 20 cable assemblies<br />
• CPM–CPM fan-in/out = 320 single-ended point-to-point links @ 160 MHz<br />
• Data input to each CMM from CPMs = 350 single-ended point-to-point links @ 40 MHz<br />
• TTC, CPU, DCS (CANbus) required<br />
JEP subsystem crate:<br />
• 16 JEMs<br />
• JEM input from pre-processor = 88 serial links via 24 cable assemblies<br />
• JEM–JEM fan-in/out = 330 single-ended point-to-point links @ 80 MHz<br />
• Data input to each CMM from JEMs = 400 single-ended point-to-point links @ 40 MHz<br />
• TTC, CPU, DCS (CANbus) required<br />
Table 2: Comparison of the JEP and CP subsystem crate backplane<br />
requirements.<br />
In addition to the signal tracks across the backplane, the<br />
backplane must also accommodate the serial links that bring<br />
data from the Preprocessor system to the CPMs/JEMs. These<br />
are brought to the back side of the backplane via untwisted<br />
shielded pair cable assemblies. These assemblies are mated to<br />
long through-pins on the rear of the backplane, and passed<br />
directly through the backplane to the processor modules on<br />
the other side. The same system of through pins is used on the<br />
CMM connectors to receive 84 twisted-pair cables carrying<br />
data from CMMs in remote crates.<br />
The connections between the backplane and the modules are<br />
implemented using AMP Z-pack (Compact PCI) connectors.<br />
These feature 5 rows of pins at a 2 mm pitch, allowing a total<br />
of 820 pins to be connected to each module. A signal to<br />
ground ratio of 4:3 is used on these pins to minimise<br />
interference between the signals.<br />
V. EXAMPLE OF FLEXIBILITY: NEW ALGORITHMS<br />
The backplane just described, combined with the use of<br />
FPGAs in both the CMM and JEM designs, allows us to add<br />
some new trigger algorithms that have been requested by<br />
ATLAS but were not foreseen in the original design. No doubt<br />
other variations will appear in the future.<br />
• The forward calorimetry, covering rapidities from 3.2 to<br />
4.9, was originally included in the trigger only because it<br />
was needed to improve the missing-ET resolution.<br />
However, in addition to allowing extension of the normal<br />
jet trigger into this range, it has recently been proposed<br />
that certain Higgs decays via Ws (i.e., the ‘invisible Higgs’<br />
channel) might be picked up by a trigger on jets in the<br />
FCAL in conjunction with missing-ET. The flexibility of<br />
the FPGA logic in the JEMs allows forward jets to be<br />
found on their own, and the logic in the CMM can be<br />
altered to count them separately.<br />
• A trigger on approximate total ET in jets was going to be<br />
done in the CTP. This multiplies the number of jets<br />
exceeding each jet threshold by the value of the threshold,<br />
and compares the estimated total jet ET obtained with some
total jet-ET thresholds. This can now be done in the final<br />
subsystem-level jet-counting FPGA, which is more logical<br />
and appropriate.<br />
• Triggers on total ET can be spoiled by noise, particularly if<br />
it is coherent. Simulation indicates that matters might be<br />
improved by requiring local regions to exceed a low<br />
threshold value if they are to be added to the total. This<br />
could, of course, be done in the JEMs and simply replace<br />
the normal total-ET trigger. However, if it is desired to use<br />
this trigger in parallel with the normal total-ET trigger, the<br />
use of FPGA logic in both the JEMs and CMMs allows<br />
this to be done by using some of the jet logic.<br />
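The approximate total jet-ET trigger from the second bullet can be written down directly. This sketch reproduces the estimate as the text describes it (multiplicity at each jet threshold weighted by the threshold value); the threshold values in the test are invented for illustration.

```python
# Sketch of the approximate total jet-ET trigger: the number of jets
# exceeding each jet threshold is multiplied by the threshold value, and
# the resulting estimate is compared with the total jet-ET thresholds.

def jet_et_estimate(multiplicities, jet_thresholds):
    """Approximate total jet ET as the sum of N_i * threshold_i."""
    return sum(n * t for n, t in zip(multiplicities, jet_thresholds))

def jet_et_hits(multiplicities, jet_thresholds, sum_thresholds):
    """Hit bits for each total jet-ET threshold."""
    estimate = jet_et_estimate(multiplicities, jet_thresholds)
    return [estimate > t for t in sum_thresholds]
```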
VI. OTHER COMMON MODULES: TCM AND ROD<br />
In addition to the hardware described above, two other<br />
modules in the ATLAS Level-1 Calorimeter Trigger perform<br />
multiple roles. A common Readout Driver handles both<br />
readout data and level-1 trigger regions-of-interest in both CP<br />
and JEP. This module is described in an accompanying<br />
paper.<br />
A common Timing Control Module has also been designed for<br />
use in the CP, JEP, and Preprocessor subsystems. It provides<br />
the interface between the crates in these subsystems and the<br />
ATLAS TTC and DCS networks. One difficulty in the design<br />
of the TCM is that the Preprocessor and CP/JEP crates use<br />
different formats and connectors to implement their VME<br />
buses. To overcome this problem the TCM uses an<br />
Adapter Link Card (ALC) to house the VME interface. The<br />
ALC is essentially a daughter card for the TCM. It differs<br />
from normal daughter cards, however, in that it fits into a cutout<br />
section at the rear of the TCM and lies flush with that<br />
card. Two ALCs have been designed, to implement the VME<br />
interfaces for the Preprocessor and CP/JEP systems.<br />
VII. STATUS AND TESTING<br />
Prototype versions of the CMM and the common backplane<br />
have been designed, and will shortly be sent out for<br />
manufacture. A prototype TCM has been manufactured and is<br />
currently undergoing stand-alone tests, and a prototype<br />
common ROD module exists and its interfaces with other<br />
ATLAS subsystems have been tested at CERN.<br />
In March 2002, a complete vertical ‘slice’ of the ATLAS<br />
Level-1 Calorimeter Trigger will be built and tested. This will<br />
include prototype versions of all of the hardware elements in<br />
the system, including all of the common hardware described<br />
above.<br />
VIII. CONCLUSIONS<br />
We have shown how the use of programmable FPGA logic<br />
has allowed us to implement what were originally three<br />
separate kinds of merger modules as a single design. Although<br />
two of the three were fairly similar, the energy summation is<br />
quite a different task from hit counting, but by making the<br />
number of input and output signals the same and by using<br />
versatile and powerful FPGAs it could still be accommodated.<br />
This reduction in the number of different module types saves<br />
on design effort and on non-recurrent engineering costs, and<br />
reduces the number of spare modules required.<br />
Although the use of a common custom backplane in the CP<br />
and JEP subsystems was mandated by the use of the Common<br />
Merger Module, the gains just mentioned make it, too, a useful<br />
simplification of the trigger system.<br />
The use of a common Readout Driver Module for the two<br />
subsystems, again made possible by the use of programmable<br />
logic, and the use of a common Timing Control Module<br />
throughout all three calorimeter trigger subsystems, also<br />
carries the same advantages.<br />
REFERENCES<br />
[1] ATLAS Level-1 Calorimeter Trigger home page:<br />
http://hepwww.pp.rl.ac.uk/Atlas-L1<br />
[2] ATLAS Level-1 Trigger Technical Design Report:<br />
http://atlasinfo.cern.ch/Atlas/GROUPS/DAQTRIG/TDR/tdr.ht<br />
ml<br />
[3] M. K. Jayananda, ATLAS Level-1 Calorimeter<br />
Trigger — Simulation of the backplane for Common Merger<br />
Module, ATL-DAQ-2000-004.<br />
[4] R. Dubitzky and K. Jakobs, Study of the performance<br />
of the Level-1 Pt miss Trigger, ATL-DAQ-99-010
The Final Multi-Chip Module<br />
of the ATLAS Level-1 Calorimeter Trigger Pre-Processor<br />
G. Anagnostou, P. Bright-Thomas, J. Garvey, S. Hillier, G. Mahout, R. Staley, W. Stokes, S. Talbot,<br />
P. Watkins, A. Watson<br />
School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT, UK<br />
R. Achenbach, P. Hanke, W. Hinderer, D. Kaiser, E.-E. Kluge, K. Meier, U. Pfeiffer, K. Schmitt,<br />
C. Schumacher, B. Stelzer<br />
Kirchhoff-Institut für Physik, University of Heidelberg, D-69120 Heidelberg, Germany<br />
B. Bauss, K. Jakobs, C. Nöding, U. Schäfer, J. Thomas<br />
Institut für Physik, University of Mainz, D-55099 Mainz, Germany<br />
E. Eisenhandler, M.P.J. Landon, D. Mills, E. Moyse<br />
Physics Department, Queen Mary, University of London, London E1 4NS, UK<br />
P. Apostologlou, B.M. Barnett, I.P. Brawn, J. Edwards, C.N.P. Gee, A.R. Gillman, R. Hatley, K. Jayananda,<br />
V.J.O. Perera, A.A. Shah, T.P. Shah<br />
Rutherford Appleton Laboratory, Chilton, Didcot OX11 0QX, UK<br />
C. Bohm, S. Hellman, S.B. Silverstein<br />
Fysikum, University of Stockholm, SE-106 91 Stockholm, Sweden<br />
Corresponding author: Werner Hinderer (hinderer@kip.uni-heidelberg.de)<br />
Abstract<br />
The final Pre-Processor Multi-Chip Module (PPrMCM)<br />
of the ATLAS Level-1 Calorimeter Trigger is presented. It<br />
consists of a four-layer substrate with plasma-etched vias<br />
carrying nine dies from different manufacturers. The task of<br />
the system is to receive and digitize analog input signals from<br />
individual trigger towers, to perform complex digital signal<br />
processing in terms of time and amplitude and to produce<br />
two independent output data streams. A real-time stream<br />
feeds the subsequent trigger processors for recognizing trigger<br />
objects, and the other provides deadtime-free readout of the<br />
Pre-Processor information for the events accepted by the<br />
entire ATLAS trigger system. The PPrMCM development has<br />
recently been finalized after including substantial experience<br />
gained with a demonstrator MCM.<br />
I. INTRODUCTION<br />
The event selection at the ATLAS experiment requires a fast<br />
three-level trigger system for the selection of physics processes<br />
of interest. The first trigger level (Level-1 Trigger) is designed<br />
to reduce the event rate from the 40 MHz LHC bunch-<br />
crossing rate down to the first-level accept rate of 75 kHz to<br />
100 kHz [1]. The Level-1 Trigger is composed of a number<br />
of building blocks: the Calorimeter Trigger, the Muon Trigger<br />
and the Central Trigger Processor. The input to the Level-1<br />
Trigger for the calorimeter part is based on reduced granularity.<br />
Analog calorimeter signals are summed to 'trigger towers' on a<br />
two-dimensional grid with steps of 0.1 in the η and φ<br />
directions. This is done separately for the electromagnetic and<br />
the hadronic calorimeters. This amounts to about 7200 signals,<br />
which are then transmitted electrically via twisted-pair cables<br />
to the Level-1 Trigger.<br />
Figure 1 shows a block diagram of the Calorimeter<br />
Trigger. The Pre-Processor at the front-end of the Calorimeter<br />
Trigger links the ATLAS calorimeters with the subsequent<br />
object finding processors - the Cluster Processor and the<br />
Jet/Energy-Sum Processor.<br />
The maximum latency to find a Level-1 trigger decision<br />
is 2.0 µs including cable delays. This and the large number<br />
of trigger tower signals requires a compact system with fast<br />
hard-wired algorithms implemented in application-specific
[Figure 1 diagram: ~7200 analogue summed trigger towers (0.1 × 0.1, e.m. and hadronic) arrive as electrical signals at the Pre-Processor (PPr: 10-bit digitization, BCID, energy calibration via LUT, BC-mux, pre-sums of jet elements, φ duplication, readout, data transmission); 0.2 × 0.2 e.m. and hadronic jet elements and cluster data feed the Cluster Processor (CP) and the Jet/Energy-Sum Processor.]<br />
• one four-channel PPrASIC, providing readout and<br />
preprocessing;<br />
• one timer chip (Phos4) for the phase adjustment of the<br />
FADC strobes with respect to the analog input signals;<br />
• three Bus LVDS Serializers, 10 bits at 40 MHz<br />
(400 Mbit/s user data rate, 480 MBd including start and<br />
stop bits);<br />
[Figure 3 diagram: real-time data path (40 MHz) — analog in → ADC (10-bit digitization, fine synchronization 0–25 ns via Phos4) → PPrASIC digital signal processing (FIFO and #BC delay; BCID: FIR filter, peak-finding, saturation, external; LUT; BC-mux; parallel-to-serial) → LVDS Serializers at 480 Mbit/s to the CP and JEP; a separate readout data path (100 kHz) with playback and monitoring.]<br />
Figure 3: Block diagram of the final MCM<br />
Figure 3 shows the preprocessing of one trigger tower<br />
signal. Four such channels are combined on one MCM. The<br />
real-time signal processing flows from the left to the right.<br />
First, the FADCs digitize the analog trigger tower signals at<br />
40 MHz with 12-bit resolution (only 10 bits are used; the two<br />
least-significant bits are not connected) in a range of 1 V<br />
peak-to-peak around the internally generated 2.4 V reference<br />
voltage. Each FADC die generates its own reference voltage.<br />
The offset adjustment and scaling from the 2.5 V input signal<br />
range to this 1 V range is done by the Analog Input board<br />
shown in figure 2. Next, the four 10-bit data buses emanating<br />
from the four FADCs are each digitally preprocessed inside<br />
the PPrASIC. The PPrASIC output is interfaced to three<br />
LVDS Serializer chips. In the case of the data transmission<br />
to the Cluster Processor (CP), a bunch-crossing multiplexing<br />
scheme (BC-mux) is applied, so that one LVDS Serializer<br />
transmits the data from two FADCs. Due to the coarser<br />
jet-elements the LVDS Serializer used for data transmission to<br />
the Jet/Energy-Sum Processor (JEP) transmits the data from<br />
four channels. Figure 3 also shows the second independent<br />
output data stream produced by the MCM. This second stream<br />
allows pipelined readout of raw trigger input data as well as<br />
� Ì values after the lookup table (LUT) in order to tell what<br />
has caused a trigger and to provide diagnostic information.<br />
It allows the monitoring of the performance of the trigger<br />
system and the injection of test data for trigger system tests.<br />
The function of the readout pipelines in the Pre-Processor is<br />
equivalent to, but independent of, that of the detector readout.<br />
The Level-1 Trigger captures its own event data as soon as it<br />
has triggered. Two sets of pipeline memories capture the event<br />
data in the Pre-Processor. One records the raw FADC data<br />
at the Pre-Processor input and one records after the lookup<br />
table. Without introducing deadtime to the readout, the readout<br />
data path can record data from the pipeline memories up to a<br />
Level-1 accept rate of 100 kHz for five time-slices including<br />
the BCID result.<br />
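The BC-mux idea mentioned above can be illustrated with a short sketch.<br />
This is a hypothetical simplification, not the PPrASIC's actual bit-level<br />
protocol: because the peak-finder output of a tower is non-zero for at<br />
most one bunch crossing per pulse, the idle crossings of one channel can<br />
carry the data of its partner.<br />

```python
def bc_mux(ch_a, ch_b):
    """Multiplex two trigger-tower channels onto one serial stream.

    ch_a, ch_b: per-bunch-crossing LUT outputs (0 when no peak).
    Returns one word per crossing, tagged with its source channel.
    Hypothetical simplification of the real BC-mux protocol.
    """
    out = []
    pending_b = 0  # value of channel B deferred to the next crossing
    for a, b in zip(ch_a, ch_b):
        if pending_b:
            out.append(('B', pending_b))
            pending_b = 0
        elif a:
            out.append(('A', a))
            pending_b = b  # B's value of the same crossing goes next
        elif b:
            out.append(('B', b))
        else:
            out.append(('-', 0))
    return out
```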
A. Design Experience<br />
Considerable design experience has been gained from<br />
a demonstrator MCM which has been built, simulated and<br />
successfully operated (see [3] for details). This demonstrator<br />
MCM was designed with the same feature size (100 µm) as<br />
the final MCM and it was fabricated in the same laminated<br />
MCM-L process described in the following subsection.<br />
B. MCM production technique<br />
The MCM technology can be classified by its substrate<br />
type. It is referred to as an MCM-L (laminated) technology.<br />
This technique was chosen to combine small feature sizes<br />
with low prices. The design process of the laminated<br />
multi-layer structure is based on an industrially-available<br />
production technique for high-density printed circuit boards.<br />
The process, which is offered by Würth Elektronik [5], is<br />
called TWINflex®. It is characterized by its use of plasma-<br />
etched micro-vias, where plasma is used for ‘dry’ etching<br />
of the insulating material (polyimide). Plasma etching enables<br />
precise via contacts between layers with diameters from 100 µm<br />
down to 50 µm.<br />
The body of the MCM is a combination of three flexible<br />
Polyimide foils laminated on a rigid copper substrate to form<br />
four routing layers. The layer cross-section consists of a core<br />
foil of 50 µm thickness, which carries 18 µm copper cladding on<br />
either side. Plasma etching is used for ‘buried’ via connections<br />
to adjacent layers, and routing structures are formed in copper<br />
using conventional etching techniques. The core foil is<br />
surrounded by outer foils of 25 µm polyimide, which are<br />
copper-plated on one side only. The actual contact through<br />
the core foil is accomplished with electroplated copper, and<br />
after that the routing structures are formed. The electroplating<br />
process increases the track thickness from 18 µm to 25 µm.<br />
Lamination is accomplished by the application of adhesive.<br />
Figure 4: Cross-section of the flexible MCM part after laminating,<br />
showing the routing structures. Staggered vias are used for the<br />
connection through all layers.<br />
Figure 4 shows the final laminated and flexible part of the<br />
MCM. A combination of three vias (staggered vias) is needed<br />
to accomplish a contact from the top to the bottom layer.<br />
Finally, the flexible part is glued onto a copper substrate of<br />
800 µm thickness.<br />
Due to the high power dissipation of the FADCs (0.6 W<br />
each), staggered vias were grouped as closely as possible under<br />
all four FADCs to form thermal vias, which provide good thermal<br />
conductance to the substrate.<br />
Figure 5 shows the final MCM cross-section. Components<br />
such as capacitors and resistors are connected to the multi-layer<br />
structure using surface-mount technology (SMD). On each<br />
end a 60 pin SMD connector from SAMTEC (BTH030)<br />
connects the MCM to the Pre-Processor Module. The chips are<br />
encapsulated with a lid in between the two SMD connectors.<br />
Figure 5: Side view of the hermetically sealed MCM (Phos4, FADCs<br />
and the four-layer Cu-polyimide compound on the Cu substrate;<br />
SMD connectors, silicone gel, SMD components, metal lid).<br />
High-density SMD connectors were used to allow quick replacement<br />
upon component failure.<br />
Figure 6: Partly assembled MCM substrate.<br />
The lid will be glued with electrically conducting epoxy to<br />
the layer compound. It will act as an EMI-shielding device<br />
and it will be filled with a silicone gel to remove atmosphere<br />
and to protect the dies from moisture. An 8 mm Al heatsink<br />
is glued to the backside of the substrate.<br />
C. MCM layout<br />
This section describes the physical layout of the MCM.<br />
The following points were considered in the design of the final<br />
MCM:<br />
• Analog and digital parts were separated: this applies to<br />
power and ground, the signal routing and the placement<br />
of the dies.<br />
• Broad power traces (… µm) were used to limit the<br />
voltage drop; the width of the other traces is usually<br />
only … µm.<br />
• For each die at least two decoupling capacitors were<br />
used.<br />
• The clock distribution was done for each die individually<br />
using short traces; this ensures a uniform propagation<br />
delay for all clock signals.<br />
• A bond pad size of 150 µm × 300 µm was used. This<br />
size is large enough for wire-bonding, even if one needs<br />
to probe at bonding pads during the MCM test or to place<br />
a second bond.<br />
• Copper shapes beneath each die are required to connect<br />
the die substrate with its voltage potential.<br />
• A solder mask is used to prevent short circuits during<br />
soldering of SMD components.<br />
• On the top layer, a cross-hatched ground shape<br />
surrounds bonding and SMD pads. This reduces the<br />
electromagnetic influence of signals on each other and it<br />
stabilizes the ground potential. A cross-hatched shape<br />
is needed because drying moisture coming out of the<br />
cross-section can otherwise destroy the MCM.<br />
A partly assembled MCM is shown in Figure 6, prior to<br />
final hermetic encapsulation; the PPrASIC is not yet mounted<br />
in the picture. The layout has a form factor of 2.0 cm × 7.0 cm.<br />
The total power consumption is 5.2 W. In total 932 vias were<br />
used, and the total line length is about 2.5 m.<br />
IV. FUNCTIONAL TEST OF THE MCM<br />
As more than 3200 MCMs have to be tested, an automated<br />
test that can classify an MCM as ‘working’ or ‘defective’ within<br />
minutes is required. Figure 7 shows the necessary hardware for<br />
such a test.<br />
Figure 7: MCM test-setup. A PC with a dual-head video card acts as<br />
signal generator; the MCM test board carries the Analog Input board<br />
and the PPrMCM; a VME motherboard with Common Mezzanine Cards (CMCs)<br />
receives the real-time and readout data paths; HDMC (Hardware<br />
Diagnostics, Monitoring and Control software) configures the set-up<br />
and compares the readout with test vectors.<br />
First of all, a dual-head video card is used as a signal<br />
generator. The advantages of using a video card as a signal<br />
generator are that it is cheap, fast, and can be programmed to<br />
produce arbitrary analog output signals. A dual-head video card<br />
has six analog outputs (two RGB sets). Out of these six<br />
outputs, four are chosen to provide the analog stimulus signals<br />
for the test. The signals are conditioned by the Analog Input<br />
board, the same board which is used on the Pre-Processor<br />
Module. The conditioned signals are received by the MCM<br />
to be tested. Both the MCM and the Analog Input board are<br />
plugged onto a test board. The output of the real-time data path<br />
is received by an LVDS receiver CMC card. The data of the<br />
readout path are received by a Xilinx FPGA, located on a<br />
second CMC card. This CMC card also hosts a large SRAM<br />
which buffers the data of the readout path as well as the data of<br />
the real-time path which is transmitted from the LVDS receiver<br />
CMC card to this CMC card. The memory can be read out by<br />
VME and thus data can be transmitted to a PC.<br />
The test will be set up and analysed by the Hardware<br />
Diagnostics, Monitoring and Control software (HDMC) which<br />
was developed by the ATLAS group of Heidelberg. The<br />
output of the MCM can now be compared with expected<br />
results gained from a mixed signal simulation of the full MCM<br />
including the chip logic. Because of the analog part of the<br />
MCM, the check uses tolerance bands within which the result<br />
is considered correct.<br />
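A bounded comparison of this kind might look as follows (the function<br />
name and tolerance value are hypothetical; the real HDMC analysis is<br />
more elaborate):<br />

```python
def within_tolerance(measured, simulated, max_diff=2):
    """Pass the MCM if every readout sample lies within max_diff
    ADC counts of the mixed-signal simulation (hypothetical bound
    accounting for the analog part of the MCM)."""
    return all(abs(m - s) <= max_diff
               for m, s in zip(measured, simulated))
```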
V. MASS PRODUCTION AND QUALITY<br />
ASSURANCE<br />
The development and design of the MCM was done by<br />
the University of Heidelberg, whereas the final production of<br />
3200 MCMs needs to be done in cooperation with external<br />
companies. The four-layer MCM substrate, including the<br />
800 µm copper carrier, is produced by Würth Elektronik [5]. The<br />
mounting of dies and SMD components, wire-bonding and<br />
encapsulation will be done by Hasec [4].<br />
The following list provides the sequence for mass<br />
production and quality assurance:<br />
1 Production: Substrate layer compound<br />
2 Test: Electrical test<br />
3 Test: Test bonding<br />
4 Assembly: Silk-screen printing of solder paste<br />
5 Assembly: Placement of SMD components<br />
6 Assembly: SMD reflow soldering<br />
7 Assembly: Chip mounting<br />
8 Assembly: Ultrasonic wire-bonding<br />
9 Test: MCM test with the test system shown in section IV<br />
10 Assembly: Repair of defective MCMs<br />
11 Assembly: Encapsulation, lid and silicone gel<br />
12 Test: Performance test on the Pre-Processor Module<br />
VI. CONCLUSIONS<br />
The design of the final MCM benefited from the design<br />
experience gained by a demonstrator MCM. Compared to<br />
this demonstrator MCM the final MCM presented here is less<br />
demanding in terms of power, temperature and link speed and<br />
hence it can achieve an improved reliability. An extensive<br />
test will be done in the near future when the so-called ‘slice<br />
test’ starts. It is planned to assemble the whole preprocessing<br />
chain with the subsequent object-finding processors for several<br />
hundred analog trigger tower signals. This test will show if the<br />
final MCM meets all the requirements.<br />
VII. REFERENCES<br />
[1] ATLAS Level-1 Trigger Group<br />
ATLAS First-Level Trigger Technical Design Report<br />
ATLAS TDR-12, CERN/LHCC/98-14, CERN, Geneva 24<br />
June 1998<br />
http://atlasinfo.cern.ch/Atlas/GROUPS/DAQTRIG/TDR/tdr.html<br />
[2] Pre-Processor Module<br />
Specification of the Pre-Processor Module (PPM) for the ATLAS<br />
Level-1 Calorimeter Trigger<br />
The ATLAS Heidelberg Group<br />
http://wwwasic.kip.uni-heidelberg.de/atlas/docs/modules.html<br />
[3] Pfeiffer, U.<br />
A Compact Pre-Processor System for the ATLAS Level-1<br />
Calorimeter Trigger<br />
PhD Thesis, Institut für Hochenergiephysik der Universität<br />
Heidelberg, Germany 19 October 1999<br />
http://wwwasic.kip.uni-heidelberg.de/atlas/docs.html<br />
[4] HASEC-Elektronik.<br />
http://www.hasec.de<br />
[5] Würth Elektronik<br />
http://www.wuerth-elektronik.de
Prototype Readout Module for the ATLAS Level-1 Calorimeter Trigger Processors<br />
G. Anagnostou, P. Bright-Thomas, J. Garvey, S. Hillier, G. Mahout,<br />
R. Staley, W. Stokes, S. Talbot, P. Watkins, A. Watson<br />
School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT, UK<br />
R. Achenbach, P. Hanke, W. Hinderer, D. Kaiser, E.-E. Kluge, K. Meier,<br />
U. Pfeiffer, K. Schmitt, C. Schumacher, B. Stelzer<br />
Kirchhoff Institut für Physik, University of Heidelberg, D-69120 Heidelberg, Germany<br />
B. Bauss, K. Jakobs, C. Nöding, U. Schäfer, J. Thomas<br />
Institut für Physik, Universität Mainz, D-55099 Mainz, Germany<br />
E. Eisenhandler, M.P.J. Landon, D. Mills, E. Moyse<br />
Physics Department, Queen Mary, University of London, London E1 4NS, UK<br />
P. Apostologlou, B.M. Barnett, I.P. Brawn, J. Edwards, C.N.P. Gee,<br />
A.R. Gillman, R. Hatley, V. Perera, A.A. Shah, T.P. Shah<br />
Rutherford Appleton Laboratory, Chilton, Didcot OX11 0QX, UK<br />
C. Bohm, S. Hellman, S.B. Silverstein<br />
Fysikum, University of Stockholm, SE-106 91 Stockholm, Sweden<br />
Corresponding author: Gilles Mahout (gm@hep.ph.bham.ac.uk)<br />
Abstract<br />
The level-1 calorimeter trigger consists of three<br />
subsystems, namely the Preprocessor, electron/photon and<br />
tau/hadron Cluster Processor (CP), and Jet/Energy-sum<br />
Processor (JEP). The CP and JEP will receive digitised<br />
calorimeter trigger-tower data from the Preprocessor and will<br />
provide trigger multiplicity information to the Central Trigger<br />
Processor and region-of-interest (RoI) information for the<br />
level-2 trigger. It will also provide intermediate results to the<br />
data acquisition (DAQ) system for monitoring and diagnostic<br />
purposes. This paper will outline a readout system based on<br />
FPGA technology, providing a common solution for both<br />
DAQ readout and RoI readout for the CP and the JEP. Results<br />
of building a prototype readout driver (ROD) module will be<br />
presented, together with results of tests on its integration with<br />
level-2 and DAQ modules.<br />
I. INTRODUCTION<br />
The ATLAS Level-1 Calorimeter Trigger is described in<br />
[1]. It consists of three subsystems: the Preprocessor (see<br />
accompanying paper on MCM), the electron/photon and<br />
tau/hadron Cluster Processor (CP), and Jet/Energy-sum<br />
Processor (JEP). The CP and JEP will receive digitised<br />
calorimeter trigger-tower data from the Preprocessor, and will<br />
provide multiplicity information on trigger objects to the<br />
Central Trigger Processor via Common Merger Modules<br />
(CMMs). For a more detailed overview of the trigger, see the<br />
accompanying talk on “One Size Fits All”. Using Readout<br />
Driver (ROD) modules (fig. 1), the CP and JEP must also<br />
provide region-of-interest (RoI) information for the level-2<br />
trigger, and readout data to the data acquisition (DAQ) system<br />
for monitoring and diagnostic purposes.<br />
The ROD modules used for the Cluster Processor and the<br />
Jet/Energy-sum Processor are based on FPGA technology. We<br />
will use one common design for both subsystems, using<br />
appropriate firmware to handle the several different types of<br />
RoI and trigger readout data.<br />
In order to see if such a design is feasible, a prototype<br />
ROD module has been built. We will first summarise the<br />
requirements and functionality of this prototype. The readout<br />
process and hardware implementation are then described,<br />
followed by first test results from both standalone tests and<br />
integration with ATLAS level-2 trigger and DAQ modules.
Figure 1: Readout path of the Cluster and Jet/Energy Processors<br />
(CPMs, JEMs and CMMs send data over G-links to the RODs, which<br />
forward DAQ data over S-link to the DAQ and RoI data to level-2;<br />
timing is distributed by TTC modules).<br />
II. TRIGGER REQUIREMENTS<br />
The CP and JEP find trigger objects of various types for<br />
each bunch crossing, and send the multiplicity for each type to<br />
the Central Trigger Processor. The CP identifies isolated<br />
electron/photon showers (between 8 and 16 sets of thresholds<br />
in ET) and isolated single-hadron/tau candidates (up to 8<br />
threshold sets in ET for a grand total of 16 threshold sets). The<br />
JEP identifies jets above 8 ET thresholds, as well as triggering<br />
on missing-ET, total-ET, forward jets, and total jet ET. Coordinates of all the objects found, as well as the total<br />
value and components of ET, must be sent to the level-2<br />
trigger as RoI information for all bunch crossings which<br />
generate a level-1 trigger in the CTP.<br />
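The multiplicity information sent to the Central Trigger Processor is<br />
conceptually a simple threshold count; a sketch with hypothetical<br />
threshold values (in GeV):<br />

```python
def multiplicities(object_ets, thresholds):
    """Count trigger objects above each ET threshold, one count per
    threshold set, as reported to the CTP (illustrative sketch)."""
    return [sum(1 for et in object_ets if et > t) for t in thresholds]

# Hypothetical event: three e/gamma candidates, three threshold sets.
counts = multiplicities([10, 25, 40], [8, 20, 30])
```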
III. READOUT DRIVER REQUIREMENTS<br />
The Readout Driver module collects data from CP<br />
Modules (CPMs and CMMs) and JEP Modules (JEMs and<br />
CMMs). It formats the data and sends it on to the DAQ and to<br />
level-2 (RoIs). The ROD requirements are as follows:<br />
• Collect data from several trigger processing modules<br />
• Read out data from more than one consecutive<br />
bunch-crossing (‘slice’) if required<br />
• Perform error detection on received data<br />
• Process data, including zero suppression when needed<br />
• Format DAQ and RoI data<br />
• Interface to the readout S-link [2]<br />
• Monitor the data sent on the S-link via a spy<br />
• Receive TTC signals and commands [3]<br />
• Interface to CTP (ROD busy signal)<br />
• Operate at level-1 event rate of up to 100 kHz<br />
IV. READOUT PROCESS<br />
Readout data for monitoring trigger operation and<br />
performance comprise the input data and results of the CPMs,<br />
JEMs and CMMs. RoI data for discrete trigger objects comes<br />
from the CPMs and JEMs, and energy-sum results from<br />
CMMs in the JEP. Because level-1 triggers are asynchronous,<br />
we have chosen to use 20-bit G-links [4] to transmit the data<br />
to the RODs, with two links per trigger module — one for<br />
DAQ data and one for RoIs. The RODs receive and buffer the<br />
serialised 20-bit data using a G-link receiver mezzanine board.<br />
On receipt of the level-1 accept signal (L1A), the ROD’s<br />
control logic places the event number and the bunch-crossing<br />
number (BCID) generated by the TTC [3] into the event and<br />
the BCID buffers. It also checks how many ‘slices’ (1–5) of<br />
trigger-tower data to read out. When the ROD receives the<br />
G-link Data AVailable (DAV) signal, the controller takes the<br />
event number and the BCID number from these buffers and<br />
places them in the header buffer FIFO and then initiates the<br />
zero-suppression logic. The control logic also checks the<br />
received BCID number against the TTC-generated number<br />
and flags an error if they are not in agreement.<br />
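The control flow just described can be sketched as follows. This is a<br />
hypothetical Python model of the firmware logic, not the actual<br />
implementation:<br />

```python
from collections import deque

class RodController:
    """Sketch: on L1A, latch the TTC event number and BCID; on DAV,
    move them to the header FIFO and flag a mismatch between the
    TTC-generated BCID and the BCID received with the data."""

    def __init__(self):
        self.event_buf = deque()   # event numbers latched on L1A
        self.bcid_buf = deque()    # TTC BCIDs latched on L1A
        self.header_fifo = []      # (event number, BCID) per event
        self.errors = 0            # count of BCID disagreements

    def on_l1a(self, event_number, ttc_bcid):
        self.event_buf.append(event_number)
        self.bcid_buf.append(ttc_bcid)

    def on_dav(self, received_bcid):
        evt = self.event_buf.popleft()
        bcid = self.bcid_buf.popleft()
        if received_bcid != bcid:
            self.errors += 1       # flag the error in the event data
        self.header_fifo.append((evt, bcid))
```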
The controller also monitors the Xoff signal from the<br />
DAQ Readout Subsystem (ROS) to stop any data transfer in<br />
case it is getting full. Since the Xoff prevents the data transfer<br />
out of the ROD module, the incoming data may fill the ROD<br />
buffers. In this situation, before the ROD buffers are<br />
completely full, the ROD must signal the Central Trigger<br />
Processor via the BUSY line to stop issuing triggers.<br />
The principle of data transfer for the RoIs is the same<br />
except for the bit-field definitions, and the additional<br />
requirement for terminating RoI transmission to level-2 if<br />
there are too many RoIs in the event.<br />
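The flow-control behaviour described above (Xoff from the ROS halting<br />
output, BUSY to the CTP when buffers approach capacity) can be<br />
summarised in a small sketch; the function and its signature are<br />
hypothetical:<br />

```python
def rod_flow_control(fifo_depth, fifo_capacity, xoff, busy_threshold):
    """Return (transfer_enabled, busy_to_ctp) for one ROD buffer.

    busy_threshold is set below fifo_capacity so that BUSY is raised
    before the buffer is completely full (illustrative logic only).
    """
    assert fifo_depth <= fifo_capacity
    transfer = not xoff                  # Xoff halts S-link output
    busy = fifo_depth >= busy_threshold  # ask the CTP to stop triggers
    return transfer, busy
```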
V. IMPLEMENTATION OF THE PROTOTYPE<br />
A ROD prototype has been built in order to demonstrate<br />
the data transfer from multiple data sources to a data sink. The<br />
prototype has four input channels, whereas the final ROD will<br />
probably have 18 channels. Fig. 2 shows its block diagram<br />
and fig. 3 a photograph. Its functionality is as follows:<br />
• Input on 800 Mbit/s G-link Rx daughter cards<br />
• Compare transmitted bunch-crossing numbers with<br />
on-board TTCrx-generated numbers, and set error flag in<br />
readout data if they do not match<br />
• Perform parity check on the incoming data<br />
• Perform zero suppression on trigger-tower data for each<br />
channel, if needed<br />
• Write formatted data to FIFO buffers<br />
• Transmit data to the RoIB and the DAQ Readout<br />
Subsystem using S-link<br />
• Provide an event buffer, accessible via VME, to spy on<br />
the S-link data<br />
• Provide a PCI interface for processing the spy-buffer data<br />
using an FPGA or PCI DCP mezzanine card
The implementation has the following features:<br />
• Triple-width 6U VME module<br />
• Four Common Mezzanine Card positions:<br />
o One G-link (HDMP-1024) 4-channel daughter card<br />
o Two S-link daughter cards<br />
o One position for a commercial PMC co-processor card<br />
• All processing and data-handling carried out by FPGAs<br />
• The same module with different firmware will handle<br />
CPMs, JEMs or CMMs, and DAQ and/or RoI data<br />
• Off-the-shelf S-link card to transfer data out<br />
• Spy on events for monitoring using 32 kbyte buffer<br />
• TTCdec decoder card [5] to interface to TTC system<br />
Note that for testing purposes the ROD can send duplicate<br />
copies of the same fragments in parallel over both S-links.<br />
Figure 2: Block diagram of prototype ROD module. Four G-link<br />
receiver channels (CH 1–4) feed parity-check and zero-suppression<br />
logic in DATA FPGAs; a controller FPGA, driven by TTCrx signals<br />
(LHC clock, L1A, bunch-counter and broadcast lines), fills the<br />
event-number, BCID and header FIFOs; formatted data (header,<br />
sub-header, data, status and trailer words) are sent over two<br />
S-link CMCs, with a spy event buffer for local monitoring<br />
accessible via VME and a PCI interface to the Single Board<br />
Computer (SBC). A ROD Busy output goes to the CTP and Xoff is<br />
received from the ROS.<br />
VI. TEST RESULTS<br />
A. Standalone test<br />
A Data Source/Sink module (DSS) [5] has been designed<br />
and built, to allow a variety of tests on different modules. The<br />
use of daughter cards allows several different types of data<br />
transmitters and receivers to be used. FPGAs on the DSS can<br />
be used to generate pseudo-random data for transmission, and<br />
the data received can be compared automatically with what<br />
was transmitted in order to search for errors at very high<br />
speeds, thus permitting detection of bit errors with very high<br />
sensitivity. Data memories on the DSS are also accessible via<br />
VME for various other types of monitoring.<br />
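Pseudo-random test data of this kind is typically produced with a<br />
linear-feedback shift register. The sketch below uses a common 16-bit<br />
maximal-length polynomial; this choice is an assumption, as the actual<br />
DSS polynomial is not specified here:<br />

```python
def lfsr16(seed, n):
    """Generate n pseudo-random 16-bit words with a Fibonacci LFSR
    (taps 16, 14, 13, 11; seed must be non-zero)."""
    state, out = seed, []
    for _ in range(n):
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        out.append(state)
    return out

def count_bit_errors(sent, received):
    """Compare received data word-by-word against what was sent."""
    return sum(bin(s ^ r).count('1') for s, r in zip(sent, received))
```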
Figure 3: The 6U prototype ROD module.<br />
Figure 4: The standalone test setup. A DSS module with a G-link Tx<br />
card (emulating a CPM) feeds data to the ROD; an S-link receiver on<br />
the DSS (emulating a ROB) receives the ROD output and returns the<br />
XOFF control signal; both modules are driven by a TTC module.<br />
For the ROD prototype test, a 4-channel G-link transmitter<br />
card emulated the readout logic for DAQ and RoI data on the<br />
CPMs to feed input data to the ROD, and an S-link receiver<br />
card was used as the destination for output data transmitted by<br />
the ROD. This arrangement is shown in fig. 4.<br />
The DSS firmware was configured to generate a packet of<br />
serial data on the G-links following each level-1 accept<br />
received from the TTC. The packet content was obtained from
an internal DSS memory which was pre-loaded with data<br />
corresponding to a range of RoI content at different stages<br />
during the tests. Data arriving over the S-link was captured in<br />
a 32 kbyte memory and could later be read from VME.<br />
For much of the testing a sequence of closely spaced L1A<br />
signals was generated by a burst-mode pulse generator and<br />
distributed by the TTC system. On each L1A, RoI data were<br />
generated and transmitted by the DSS, received and<br />
reformatted by the ROD, transmitted over S-link, received by<br />
the second DSS daughter card, and captured in DSS memory.<br />
The number of L1As per burst was chosen so that all S-link<br />
RoI packets generated in one burst by the ROD could be<br />
recorded in the DSS memory without overwriting.<br />
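The burst-length limit follows directly from the capture-memory size.<br />
For illustration, assuming (hypothetically) S-link packets of about<br />
128 bytes each:<br />

```python
MEMORY_BYTES = 32 * 1024   # DSS capture memory (32 kbyte)
PACKET_BYTES = 128         # assumed packet size, for illustration only
max_events_per_burst = MEMORY_BYTES // PACKET_BYTES
print(max_events_per_burst)  # → 256
```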
For each burst, the recorded S-link data were read out by<br />
the online computer and compared with what was expected.<br />
All events in the burst were different, with a cyclical pattern<br />
used to check correct ROD processing of a variety of RoI<br />
data. The test program could not check the bunch-crossing<br />
number copied into each event from the TTC by the ROD, but<br />
it did check that the event number increased monotonically<br />
through the run.<br />
It was found that this test could routinely be run overnight<br />
without any errors at L1A burst frequencies beyond 800 kHz,<br />
exceeding the required 100 kHz by a large factor. It should be<br />
noted that the DSS could sustain incoming S-link data at the<br />
full speed of the S-link, so did not normally use the S-link’s<br />
flow control features.<br />
B. Integration tests<br />
The primary purpose of the tests was to check that data<br />
could be transferred completely and correctly from the ROD<br />
to both the level-2 RoI Builder (RoIB) [6] module and the<br />
DAQ Readout Subsystem (ROS).<br />
1) Test setup<br />
A diagram of the integration test setup is shown in fig. 5.<br />
At different times, both the RoIB and the ROS received the<br />
data from the ROD prototype. The RoIB consisted of an input<br />
card and the main RoIB card. The input card received S-link<br />
data from the ROD, and then prepared and sent two identical<br />
copies to the RoIB card, which required at least two input<br />
fragments to assemble composite RoI S-link output packets.<br />
The ROS consisted of the interface layer to the complete ROS<br />
subsystem, running on a conventional Pentium-based PC.<br />
Three different physical implementations of the S-link<br />
were available for use in the tests. All tests with the RoIB<br />
used an electrical cable link developed at Argonne National<br />
Laboratory (ANL). The links from ROD to DSS were the<br />
CERN electrical S-link, and the link to the ROS used the<br />
ODIN optical S-link [2].<br />
2) Results of low-rate tests with RoI Builder<br />
For tests with the RoIB, events were transferred in bursts,<br />
and the L1A frequency gradually raised until errors occurred,<br />
which on investigation proved to be related to incomplete<br />
implementation of the S-link flow control protocol.<br />
DSS ROD<br />
Glink Tx<br />
CMC<br />
Slink Rx<br />
CMC<br />
TTC<br />
GlinkRx<br />
CMC<br />
SlinkTx<br />
CMC<br />
SlinkTx<br />
CMC<br />
PC PC ROS<br />
ROS<br />
Slink/PC<br />
Slink I Rx<br />
CMC<br />
RoIB:<br />
Input RoI Boar Board<br />
B d B d<br />
Slink Rx<br />
CMC<br />
Figure 5: Schematic view of the integrated test installation.<br />
Testing then continued at lower frequencies, where the<br />
protocol errors did not appear. An overnight run with 124-byte<br />
events containing 16 RoIs was successfully completed<br />
without error. The events were generated in bursts of 1024,<br />
2 ms apart in a 3 s cycle, which averaged 418 Hz. In total,<br />
2.1×10^7 events, corresponding to 2.1×10^10 bits, were sent and<br />
received without error. The 1024 events in each burst were all<br />
different in their RoI content. No events were lost and none<br />
was duplicated.<br />
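These figures are mutually consistent, as a quick check shows:<br />

```python
events = 2.1e7                   # events transferred without error
bits = 2.1e10                    # bits transferred without error
bits_per_event = bits / events   # ≈ 1000, close to 124 bytes = 992 bits
hours = events / 418 / 3600      # run length at the 418 Hz average rate
print(bits_per_event, round(hours, 1))  # → 1000.0 14.0
```

so the run lasted roughly 14 hours, a plausible overnight duration.<br />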
3) ROD BUSY performance<br />
Data entering the ROD from the four serial G-links were<br />
processed and placed into four FIFOs to await readout. A<br />
BUSY threshold inside the ROD was constantly compared<br />
with the occupied depth of each FIFO, leading to assertion of<br />
the front-panel BUSY signal whenever one or more of the<br />
FIFOs was filled up to or beyond the threshold level. The<br />
normal operation of the BUSY signal provided a useful tool to<br />
monitor the ROD performance.<br />
Figure 6: Timing relationship between L1A (upper trace) and BUSY<br />
signal (lower trace), for 16 RoIs. The busy threshold is set at 3.<br />
Fig. 6 shows the behaviour of the BUSY signal when the<br />
threshold was set to the artificially low value of 3. BUSY<br />
was asserted as soon as 3 RoIs had entered the FIFOs, and<br />
remained asserted until the depth of the last FIFO to be<br />
emptied fell below 3 again. For events with 16 RoIs, the<br />
theoretical minimum busy time (to transfer 13 RoIs over the<br />
S-link) is 325 ns. This is in good agreement with the measured<br />
time of approximately 400 ns.<br />
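The 325 ns figure follows from the link word rate, assuming one RoI<br />
per 32-bit S-link word and one word per 25 ns (40 MHz) clock tick:<br />

```python
WORD_TIME_NS = 25   # one S-link word per 40 MHz clock cycle (assumed)
rois_to_send = 13   # 16 RoIs minus the 3 already below the threshold
print(rois_to_send * WORD_TIME_NS)  # → 325
```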
4) Latency measurements<br />
The latency of the system was measured using the above<br />
configuration and monitoring various test points using a<br />
digital oscilloscope. The test points included the L1A, DAV,<br />
ROD BUSY and S-link control bit (LFF). The readout<br />
sequence is illustrated in fig. 7, for events with 8 RoIs.<br />
Transmission on the S-link started 2100 ns after the original<br />
L1A. The complete sequence of timing is shown in fig. 8. The<br />
total time to the input of the RoIB is less than 3 µs. The ROD<br />
itself has a latency of less than 1 µs.<br />
Figure 7: Readout latency in the DSS/ROD system for 8 RoIs<br />
(oscilloscope traces of LVL1A, S-link, LFF and Busy).<br />
Figure 8: Overall timing from L1A to end of RoIB input card<br />
readout, with measured and expected times in ns at each stage<br />
(TTC, DSS, G-Link, ROD, S-Link, RoIB); the measured total<br />
is 2625 ns.<br />
5) Tests with the Readout Subsystem<br />
For testing with the ROS, the S-link interfaces used the<br />
ODIN optical S-link. It was found that event frequencies of up<br />
to 20 kHz could be sustained into the ROS with full event<br />
checking, and that instantaneous L1A frequencies of up to<br />
660 kHz could be sustained, in bursts of 127 events.<br />
Very careful control of the ROD was necessary. The DSS<br />
module needed to be reset after each burst of events, and the<br />
checking software required an exact repeating sequence of<br />
127 events. It was therefore not possible to use the BUSY<br />
signal to suppress L1As if the ROD memories became full, so<br />
it was essential to wait for the ROD FIFOs to empty<br />
completely before triggering the next event burst. This<br />
constraint had not been fully appreciated before the test, and<br />
has implications for future development of the DSS firmware<br />
and other supporting test hardware.<br />
6) Combined tests with the RoIB and ROS<br />
A series of short tests were made with data transmitted<br />
over S-link both to the RoIB and to the ROS. This represented<br />
exactly the connectivity to be used in the production trigger,<br />
where the RoI fragments will be sent both to the RoIB and to<br />
the main DAQ readout system.<br />
Running with bursts of 127 different events spaced 1 ms<br />
apart, several runs of about 2M events were performed, after<br />
which the first data error typically appeared. The same errors<br />
were detected by software in the RoIB and ROS. Investigation<br />
revealed a firmware problem related to the emptying of the<br />
ROD FIFOs. This was understood, but it was not possible to<br />
obtain a new firmware version before the end of the tests. This<br />
test nevertheless established that the ROD could transfer S-link<br />
data concurrently to two downstream modules.<br />
VII. CONCLUSIONS<br />
The prototype ROD is one of the first ATLAS trigger<br />
modules to show the feasibility of flexible design using FPGA<br />
technology. Firmware can be dedicated to either DAQ or RoI<br />
readout. Test results have shown that data could be passed at<br />
rates higher than the required level-1 rate with no errors<br />
detected over long runs. No hardware design fault was found<br />
and the problems occurred in the firmware, which can be<br />
easily corrected.<br />
The integration test with downstream modules was<br />
essential to the understanding of the interfaces between<br />
level-1 and the level-2 and dataflow (ROS) systems. Tests were not<br />
complete and further work still needs to be done, but good<br />
experience has been gained and first results are very<br />
encouraging.<br />
REFERENCES<br />
[1] ATLAS First-Level Trigger Technical Design Report,<br />
CERN/LHCC/98-14 and ATLAS TDR-12, 30 June 1998:<br />
http://atlasinfo.cern.ch/Atlas/GROUPS/DAQTRIG/TDR/t<br />
dr.html<br />
[2] CERN S-link Specification:<br />
http://www.cern.ch/HIS/s-link<br />
[3] Timing, trigger and control system (TTC):<br />
http://www.cern.ch/TTC/inro.html<br />
[4] Agilent G-link information:<br />
http://www.semiconductor.agilent.com:80/cgibin/morpheus/home/home.jsp<br />
[5] Calorimeter trigger modules:<br />
http://hepwww.rl.ac.uk/atlas-l1/Modules/Modules.html<br />
[6] R.E. Blair et al., A Prototype RoI Builder for the Second<br />
Level Trigger of ATLAS Implemented in FPGAs,<br />
ATLAS note ATL-DAQ-99-016, December 1999.
Prototype Slice of the Level-1 Muon Trigger in the Barrel Region of the<br />
ATLAS Experiment<br />
V.Bocci, G.Chiodi, S.Di Marco, E.Gennari, E.Petrolo, A.Salamon, R.Vari, S.Veneziano<br />
Abstract<br />
The ATLAS barrel level-1 muon trigger system makes use<br />
of the Resistive Plate Chamber detectors mounted in three<br />
concentric stations. Signals coming from the first two RPC<br />
stations are sent to dedicated on-detector ASICs in the low-pT<br />
PAD boards, which select muon candidates compatible with a<br />
programmable pT cut of around 6 GeV, and produce an output<br />
pattern containing the low-pT trigger results. This information<br />
is transferred to the corresponding high-pT PAD boards, which<br />
collect the overall result for low-pT and perform the high-pT<br />
algorithm using the outer RPC station, selecting candidates<br />
above a threshold around 20 GeV. The combined information<br />
is sent via optical fibre to the off-detector optical receiver<br />
boards and then to the Sector Logic boards, which count the<br />
muon candidates in a region of ∆η×∆φ = 1.0×0.1, and encode<br />
the trigger results. The resulting trigger data are sent to the<br />
Central Trigger Processor Muon Interface on a dedicated<br />
copper link. The read-out data for events accepted by the<br />
level-1 trigger are stored on-detector and then sent to the<br />
off-detector Read-Out Drivers via the same receiver boards used<br />
for trigger data, sharing the bandwidth.<br />
A trigger slice is made of the following components: a<br />
splitter, a low-pT PAD board, containing four Coincidence<br />
Matrix boards; a high-pT PAD board, containing four CM boards<br />
and the optical link transmitter; an optical link receiver; a<br />
Sector Logic board; a Read-Out Driver board.<br />
I. THE ATLAS BARREL LEVEL-1 MUON TRIGGER<br />
The ATLAS muon spectrometer in the barrel, which<br />
covers the pseudorapidity region |η| < 1.05, makes use<br />
of the Monitored Drift Tube detectors for precise particle track<br />
measurement, and the Resistive Plate Chamber detectors for<br />
triggering.<br />
The barrel first-level muon trigger has to process the full<br />
granularity data (about 350,000 channels) of the trigger<br />
chambers [1]. The latency is fixed and less than 2.5 µs.<br />
The maximum data output frequency to the higher-level<br />
triggers is 100 kHz.<br />
A. Trigger Algorithm<br />
The main functions of the level-1 trigger are:<br />
− identification of the bunch crossing corresponding to<br />
the event of interest;<br />
− discrimination of the muon transverse momentum pT;<br />
INFN Sezione di Roma, P.le Aldo Moro 2, 00185 Rome, Italy<br />
Riccardo.Vari@roma1.infn.it<br />
− fast and coarse muon tracking, used for higher-level<br />
trigger processors;<br />
− second coordinate measurement in the non-bending<br />
projection with a resolution of ~1 cm.<br />
The level-1 trigger is able to operate with two different<br />
transverse momentum selections, providing a low-pT trigger<br />
(pT ~ 5.5 GeV) and a high-pT trigger (pT ~ 20 GeV). To<br />
reduce the rate of accidental triggers, due to the low energy<br />
background particles in the ATLAS cavern, the algorithm is<br />
performed in both η and φ, for both low-pT and high-pT<br />
triggers. Barrel precision MDT chambers can only measure<br />
the bending coordinate, thus the φ projection is used to give to<br />
the experiment the non-bending muon coordinate with a<br />
resolution of ~1 cm. The measured non-bending coordinate is<br />
used together with the data coming from the MDT detectors for<br />
precise particle track reconstruction.<br />
A section view of the trigger system is represented in<br />
Figure 1, showing where the three RPC stations are located<br />
inside the ATLAS Muon Spectrometer. The ATLAS muon<br />
trigger system is composed of three RPC stations. The RPC<br />
detectors are mounted on the MDT chambers.<br />
Figure 1: The ATLAS Muon Spectrometer Layout<br />
Each RPC chamber is read out by two planes of orthogonal<br />
strips. The η strips give the bending projection, while the φ<br />
strips give the non-bending one.<br />
Muon pT selection is performed by a fast coincidence<br />
between strips of different RPC planes. The number of planes<br />
in the whole trigger system has been chosen in order to<br />
minimise accidental coincidences and to optimise efficiency.<br />
To reduce accidental counts, the trigger operates in both the<br />
bending and the non-bending projections.
Figure 2 shows the trigger scheme. The low-pT algorithm<br />
makes use of information generated from the two Barrel<br />
Middle stations RPC1 and RPC2. The first stage of the<br />
algorithm is performed separately and independently in the<br />
two η and φ projections. If a track hit is generated in the<br />
RPC2 doublet (pivot plane), a search for the same track is<br />
made in the RPC1 doublet, within a window whose centre is<br />
defined by an infinite momentum track coming from the<br />
interaction point. The width of the window is programmable<br />
and selects the desired cut on pT (the smaller the window, the<br />
higher the cut on pT). Three programmable pT thresholds in<br />
each projection can be applied simultaneously. To cope with<br />
the background from low energy particles in the cavern, a<br />
majority coincidence of the four hits of the two doublets in<br />
each projection is required.<br />
The high-pT algorithm makes use of the result of the<br />
low-pT trigger system and the hits available in the RPC3 station. A<br />
coincidence between the 1/2 majority of the RPC3 doublet<br />
and the low-pT trigger pattern is required.<br />
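The window search described above can be sketched as follows; strip indices, the 1:1 pivot-to-RPC1 mapping and the window half-widths are illustrative simplifications, not the real CMA geometry:

```python
# Sketch of the pivot-plane window coincidence (illustrative geometry).
def low_pt_candidate(pivot_strip, rpc1_hits, half_width):
    """A hit in the pivot plane (RPC2) opens a window in RPC1 centred on
    the strip an infinite-momentum track from the interaction point would
    cross; here that projection is taken as the same strip index."""
    centre = pivot_strip  # straight-track projection (simplified)
    return any(abs(h - centre) <= half_width for h in rpc1_hits)

# The smaller the window, the higher the effective pT cut:
rpc1 = [14, 30]
assert low_pt_candidate(12, rpc1, half_width=3)       # wide window: accept
assert not low_pt_candidate(12, rpc1, half_width=1)   # narrow window: reject
```

In the real chip three such windows run in parallel, giving the three simultaneous pT thresholds per projection.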
Figure 2: The ATLAS Barrel Level-1 Muon Trigger<br />
B. Trigger Segmentation<br />
The ATLAS barrel trigger system is composed of two<br />
independent trigger subsystems, the first in the region of<br />
positive η values, called barrel system 0, the second, the<br />
barrel system 1, in the negative η region [2]. Each barrel<br />
subsystem (6.7 m
Four low-pT CMAs, covering a total region of ∆η×∆φ ~<br />
0.2×0.2, are mounted on a low-pT PAD board, and the four<br />
high-pT CMAs are mounted on a high-pT PAD board. The low-pT<br />
PAD board collects data coming from the four low-pT CMAs,<br />
and sends trigger data to the high-pT PAD board, which<br />
collects low-pT and high-pT data and serially sends data<br />
off-detector via an optical fibre. The optical receiver, located in<br />
the counting room, receives serial data from the optical fibre,<br />
and sends them to the Sector Logic board and to the Read-Out<br />
Driver.<br />
The on-detector electronics will be mounted on top of the<br />
RPC detectors as shown in Figure 6. A Splitter Box contains<br />
two or three splitter boards, depending on the required<br />
fan-out. In order to reduce the number of interconnections,<br />
each pair of φ strips in one RPC detector belonging to two<br />
adjacent RPC chambers is wire-ORed on the detector.<br />
Figure 6: Schematic view of a Barrel Middle RPC station<br />
A. Coincidence Matrix ASIC<br />
The CMA is the core of the level-1 trigger logic; its main<br />
functions are:<br />
− incoming signal timing alignment;<br />
− input and output signal masking;<br />
− de-clustering algorithm execution;<br />
− majority logic;<br />
− trigger algorithm execution;<br />
− level-1 latency memory data storing;<br />
− readout data formatting, storing and serial<br />
transmitting.<br />
A schematic view of the chip internal architecture and its<br />
main block division is represented in Figure 7.<br />
The CMA can be programmed to perform either the<br />
low-pT or the high-pT trigger algorithm. The chip can be used as<br />
an η CMA, covering a region ∆η×∆φ ~ 0.2×0.1, or as a φ CMA,<br />
covering a region of ∆η×∆φ ~0.1×0.2.<br />
The chip has 2×32 + 2×64 inputs for the front-end signals<br />
[4]. In the low-pT CMAs the 2×32 inputs are connected to the<br />
front-end signals, either η strips or φ strips, coming from a<br />
doublet of the RPC2 pivot plane, while the 2×64 inputs are<br />
connected to the signals coming from the RPC1 doublet. For<br />
the high-pT CMAs the first 32 inputs are connected to the<br />
output trigger signals coming from the low-pT PAD board, the<br />
second 32 inputs are not used, while the 2×64 inputs are<br />
connected to the signals coming from the RPC3 doublet.<br />
The CMA aligns the input signals in time in steps of<br />
one-eighth of a bunch crossing period. For this reason the chip<br />
internal working frequency is 320 MHz, eight times the 40<br />
MHz bunch crossing frequency. Input signals can be masked<br />
to the zero logic in order to be able to suppress noisy channels<br />
and to handle unconnected input signals.<br />
RPC average cluster size is ~1.4, hence input signals are<br />
pre-processed and de-clustered in order to sharpen the pT cut.<br />
The maximum cluster size to be processed in the de-clustering<br />
logic is programmable.<br />
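One plausible form of the de-clustering step is sketched below; the policy of dropping clusters wider than the programmable maximum is an assumption for illustration, as the paper does not specify how oversize clusters are handled:

```python
def decluster(fired_strips, max_cluster_size):
    """Replace each run of adjacent fired strips by its central strip,
    discarding clusters wider than the programmable maximum (assumed
    policy). This sharpens the effective pT cut for the ~1.4 average
    RPC cluster size quoted in the text."""
    out, cluster = [], []
    for s in sorted(fired_strips):
        if cluster and s == cluster[-1] + 1:
            cluster.append(s)           # extend the current cluster
        else:
            if cluster and len(cluster) <= max_cluster_size:
                out.append(cluster[len(cluster) // 2])
            cluster = [s]               # start a new cluster
    if cluster and len(cluster) <= max_cluster_size:
        out.append(cluster[len(cluster) // 2])
    return out

print(decluster([3, 4, 10], 2))   # two clusters -> their central strips
```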
Processed input signals are sent to the coincidence logic,<br />
which performs the coincidence algorithm. The logic is<br />
repeated three times, so that three different coincidence<br />
windows can be simultaneously applied inside the chip. The<br />
three coincidence windows can be independently<br />
programmed, thus providing three different muon pT cuts.<br />
Coincidence logic inputs can be masked to “one” logic, to<br />
simulate unconnected inputs. A programmable majority logic<br />
can be applied to the coincidence algorithm, choosing a 1/4,<br />
2/4, 3/4 or 4/4 plane confirmation. The 32-bit trigger output<br />
pattern is then sent to the chip outputs.<br />
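The masking and majority stages described above amount to the following small sketch (plane ordering and the mask representation are illustrative):

```python
def mask_inputs(plane_hits, mask_to_one):
    """Coincidence-logic inputs can be forced to logic '1' to emulate
    unconnected channels, as described in the text."""
    return [h or m for h, m in zip(plane_hits, mask_to_one)]

def majority(plane_hits, required):
    """Fire if at least `required` of the four RPC planes confirm
    (programmable 1/4, 2/4, 3/4 or 4/4 confirmation)."""
    return sum(plane_hits) >= required

# Two real hits plus one plane masked to '1' satisfies a 3/4 majority:
planes = mask_inputs([True, True, False, False], [False, False, True, False])
assert majority(planes, 3)
assert not majority(planes, 4)
```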
Figure 7: CMA block scheme<br />
The CMA readout logic collects chip input data and<br />
trigger output data in a latency memory, used to store events<br />
during level-1 latency period [5]. Data corresponding to one<br />
event are accepted according to the Level-1 Accept signal<br />
arrival time and to the programmed acceptance window time.<br />
All other hits are discarded, while level-1 validated data are<br />
formatted and sent to de-randomising buffers for the serial<br />
readout.<br />
The CMA chip has ~300 internal registers, programmable<br />
via the I2C bus. Single Event Upset detection logic has been<br />
implemented for all registers, and redundancy logic has been<br />
implemented for critical control registers.<br />
The CMA design has been realised in a 0.18 µm CMOS<br />
technology. The external clock frequency is 40 MHz; the<br />
internal working frequency is 40 MHz for register<br />
initialisation, 320 MHz for the pipeline and trigger logic, and<br />
160 MHz for the readout logic. An internal PLL has been used<br />
for clock frequency multiplication. JTAG boundary-scan and<br />
scan-chain logic has been implemented for test purposes. The<br />
CMA has ~500,000 basic cells, including ~80 kbit of<br />
internal memory. Chip area is ~20 mm², power dissipation is<br />
B. PAD Logic boxes<br />
The first step of the trigger algorithm is performed<br />
separately in the η and in the φ projections. Four CM boards,<br />
each one mounting one CMA, are plugged on one PAD board,<br />
which is responsible for collecting data from two η CMAs<br />
and two φ CMAs, processing the incoming data, and associating<br />
muon candidates in the Region-Of-Interest ∆η×∆φ ~ 0.1×0.1.<br />
Globally the PAD logic board covers a region ∆η×∆φ ~<br />
0.2×0.2. A dedicated FPGA chip performs the PAD logic. It<br />
combines η and φ information, selects the highest-pT triggered<br />
track in each RoI and resolves overlaps inside the PAD.<br />
Timing, trigger and control signals are distributed in the<br />
PAD boards from the TTCrx ASIC, which will be mounted on<br />
a dedicated plug-in board.<br />
In the low-pT PAD board, which is mounted on the RPC2<br />
detector, the low-pT trigger result and the associated RoI<br />
information are transferred, synchronously at 40 MHz, to the<br />
corresponding high-pT PAD board via copper cables, using<br />
LVDS signals. LVDS driver and receiver chips are mounted<br />
on the PAD board. The high-pT PAD board, which is mounted<br />
on the RPC3 detector, is similar to the low-pT one. It performs<br />
the high-pT algorithm, collects the overall result for both the<br />
low-pT and the high-pT trigger and sends the readout data to<br />
the Read Out Driver via optical link. The custom built optical<br />
link is mounted on a dedicated board to be plugged on the<br />
high-pT PAD board.<br />
The Embedded Local Monitor Board, a general-purpose<br />
CANBUS plug-in board (CERN & Nikhef project) is used in<br />
both the PAD boards for JTAG and I2C chip initialisation and<br />
for local control and monitoring.<br />
Each PAD board is enclosed in a liquid-cooled PAD Box<br />
that will be mounted on the RPC detector.<br />
C. Off-detector electronics<br />
The optical link board in the high-pT PAD box transmits<br />
the trigger and readout data to the receiver board located in<br />
the USA 15 counting room. One Receiver board receives the<br />
inputs from four optical links and sends the output to the<br />
Sector Logic board and to the Read Out Driver board.<br />
The proposed crate for the Receiver boards, ROD<br />
boards and Sector Logic boards is in 6U VME64x format. A<br />
single crate contains 8 Receiver boards, 4 Sector Logic<br />
boards, 2 ROD boards and a ROD controller, as shown in<br />
Figure 8.<br />
The Sector Logic covers a region ∆η×∆φ = 1.0×0.2; each<br />
SL board receives data from 7 high-pT PAD logic boards in<br />
the small sectors and 6 high-pT PAD logic boards in the large<br />
sectors [6]. It maps the signals coming from the Tile<br />
Calorimeter trigger towers to the triggered muons, performs<br />
outer-plane confirmation for all three low-pT thresholds,<br />
resolves the η overlap inside the sector, and selects the two<br />
highest-threshold candidates in the sector, associating with each<br />
muon a region of interest ∆η×∆φ ~ 1.0×0.1.<br />
The outputs from the Sector Logic boards are sent to the<br />
Muon Central Trigger Processor Interface via parallel<br />
differential LVDS links.<br />
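The candidate selection performed by the Sector Logic can be sketched as below; the tuple layout and names are illustrative, not the board's actual data format:

```python
# Hedged sketch of the Sector Logic's two-candidate selection.
def select_two_highest(candidates):
    """candidates: list of (threshold_index, roi) pairs, where a higher
    threshold index means a higher pT threshold was passed. Returns the
    best two candidates, highest first, for the MUCTPI."""
    return sorted(candidates, key=lambda c: c[0], reverse=True)[:2]

cands = [(1, "RoI_3"), (3, "RoI_7"), (2, "RoI_5")]
print(select_two_highest(cands))   # the two highest-threshold muons
```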
Figure 8: Off-detector crate with the contained VME cards<br />
III. RADIATION ASSURANCE<br />
Radiation effects for the level-1 muon trigger electronics<br />
have to be taken into account, since on-detector electronics<br />
will be mounted on the RPC detectors. Radiation levels for<br />
the RPCs have been simulated, Figure 9 and Figure 10 show<br />
the simulated total dose and the > 20 MeV hadron<br />
distribution inside the experiment.<br />
Pre-selection and qualification of electronic components<br />
have to be made following the ATLAS Standard Test<br />
Methods [7]. Components have to be qualified for Single<br />
Event Effects and for Total Ionising Dose.<br />
The ATLAS Radiation Tolerance Criterion for TID is<br />
RTCtid = SRLtid · SFsim · SFldr · SFlot · 10 y, where the Safety<br />
Factors depend on simulation accuracy, low dose rate effects<br />
and component lot differences. The Simulated Radiation<br />
Levels for the RPC are shown in Table 1. The resulting RTCtid<br />
is of the order of 1 krad, depending on the electronics<br />
component type.<br />
For Single Event Upsets the foreseen soft SEU rate is<br />
SEUf = (soft SEUm / ARL) · (SRLsee / 10 y) · SFsim, where<br />
SEUm is the number of soft SEUs measured during the test, and<br />
the Applied Radiation Level (ARL) is the integrated hadron flux<br />
received by the tested component.<br />
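A numerical illustration of the two criteria above; the safety-factor values are placeholders for illustration only, not the official ATLAS numbers:

```python
def rtc_tid(srl_tid, sf_sim, sf_ldr, sf_lot):
    """Radiation Tolerance Criterion for TID (Gy, integrated over 10 y)."""
    return srl_tid * sf_sim * sf_ldr * sf_lot

def soft_seu_rate(seu_measured, arl, srl_see, sf_sim):
    """Foreseen soft-SEU count over 10 years: the per-hadron upset
    probability measured in the test, scaled to the simulated fluence."""
    return (seu_measured / arl) * srl_see * sf_sim

# BML station SRLtid from Table 1, with assumed safety factors:
rtc = rtc_tid(srl_tid=3.04, sf_sim=1.5, sf_ldr=1.5, sf_lot=2.0)
print(f"RTC_tid ~ {rtc:.1f} Gy (~{rtc * 100:.0f} rad)")
```

With these illustrative factors the criterion comes out at the order-of-1-krad level quoted in the text.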
SEE and TID pre-selection test have been completed for<br />
almost all the electronics components that will be mounted on<br />
the RPCs. All tested components successfully passed the<br />
radiation tolerance criteria. No Single Event Latchup was<br />
observed, and the foreseen soft SEU frequency for the<br />
components that showed SEUs is low enough not to<br />
compromise trigger functionality. The functionality of all tested<br />
components was unaltered after the maximum foreseen total<br />
dose.<br />
A few components have still to be tested. Final component<br />
pre-selection is expected to be completed during 2002.
Figure 9: ATLAS Simulated Total Ionisation Dose<br />
Figure 10: ATLAS Simulated Hadrons Distribution<br />
Table 1: ATLAS Barrel RPC Radiation Levels<br />
Station   SRLtid [Gy·10y^-1]   SRLniel [1 MeV n·cm^-2·10y^-1]   SRLsee [>20 MeV h·cm^-2·10y^-1]<br />
BMF       3.02                 2.49·10^10                       4.69·10^9<br />
BML       3.04                 2.82·10^10                       5.65·10^9<br />
BMS       3.03                 2.50·10^10                       4.73·10^9<br />
BOF       1.19                 2.14·10^10                       4.08·10^9<br />
BOL       1.33                 2.20·10^10                       4.21·10^9<br />
BOS       1.26                 2.10·10^10                       4.10·10^9<br />
IV. CONCLUSIONS<br />
All prototype components produced up to now have<br />
successfully passed lab tests. The Coincidence Matrix ASIC is<br />
the only component still missing to complete the Trigger Slice,<br />
and is planned to be available by the end of 2001. Further tests<br />
of the RPC chambers with cosmic rays are planned before the<br />
CMA becomes available.<br />
A C++ high-level simulation code for the level-1<br />
electronics is being developed to confirm the trigger<br />
functionality. The CMA VHDL simulation has been compared<br />
with the C++ simulation, giving good crosscheck results.<br />
Irradiation tests for component pre-selection and<br />
qualification are planned to be finished before the end of<br />
2002. The first slice test will be performed during next year.<br />
V. REFERENCES<br />
[1] ATLAS Level-1 Trigger TDR<br />
[2] E. Gennari, S. Di Marco, A. Nisati, E. Petrolo, A. Salamon,<br />
R. Vari, S. Veneziano, Barrel Large RPC Chamber Sectors<br />
Readout and Trigger Architecture.<br />
[3] E. Petrolo, A. Salamon, R. Vari, S. Veneziano, Barrel LVL1<br />
Muon Trigger Coincidence Matrix ASIC User Requirement<br />
Document, ATL-COM-DAQ-2000-050.<br />
[4] E. Petrolo, A. Salamon, R. Vari, S. Veneziano, CMA ASIC<br />
Hardware Requirement Document, ATL-COM-DAQ-2001-005.<br />
[5] E. Petrolo, A. Salamon, R. Vari, S. Veneziano, Readout<br />
Requirements in the Level-1 Muon Trigger Coincidence<br />
Matrix ASIC, ATL-COM-DAQ-2000-052.<br />
[6] V. Bocci, A. Di Mattia, E. Petrolo, A. Salamon, R. Vari,<br />
S. Veneziano, The ATLAS LVL1 Muon Barrel Sector Logic<br />
Demonstrator Simulation and Implementation,<br />
ATL-COM-DAQ-2000-051.<br />
[7] ATLAS Policy on Radiation Tolerant Electronics:<br />
http://atlas.web.cern.ch/Atlas/GROUPS/FRONTEND/ra<br />
dhard.htm
Fast Pre-Trigger Electronics of T0/Centrality MCP-Based Start Detector for<br />
ALICE<br />
L.Efimov 1 ,G.Feofilov 2 , V.Kondratiev 2 ,V.Lenti 3 ,V.Lyapin 4 , O.Stolyarov 2 , W.H.Trzaska 4 ,<br />
F.Tsimbal 2 , T.Tulina 2 , F.Valiev 2 , O.Villalobos Baillie 5 , L.Vinogradov 2<br />
1 JINR, Dubna, Russia, Joint Institute for Nuclear Research<br />
2 St.Petersburg, Russia, Institute for Physics of St.Petersburg State University<br />
Ulyanovskaya,1, 198904, Petrodvorets, St.Petersburg, Russia, e-mail: feofilov@hiex.niif.spb.su<br />
3 Bari, Italy, Dipartamento di Fisica dell’Universi ta and Sezione INFN<br />
4 Jyvaskyla University,Finland<br />
5 Birmingham, United Kingdom, University of Birmingham, School of Physics and Astronomy<br />
Abstract<br />
This work describes an alternative to the current ALICE<br />
baseline solution for a T0 detector, still under development.<br />
The proposed system consists of two MCP-based<br />
T0/Centrality Start Detectors (backward-forward<br />
isochronous disks) equipped with programmable, TTC<br />
synchronized front-end electronic cards (FEECs) which<br />
would be positioned along the LHC colliding beam line<br />
on both sides of the ALICE interaction region. The purpose<br />
of this arrangement, providing both precise timing<br />
and fast multiplicity selection, is to give a pre-trigger signal<br />
at the earliest possible time after a central event. This<br />
pre-trigger can be produced within 25 ns. It can be delivered<br />
within 100 ns directly to the Transition Radiation<br />
Detector and would be the earliest L0 input coming to the<br />
ALICE Central Trigger Processor. A noise-free passive<br />
multichannel summator of 2 ns signals is used to provide a<br />
determination of the collision time with a potential accuracy<br />
better than 10 ps in the case of Pb-Pb collisions, the<br />
limit coming from the electronics. Results from in-beam<br />
tests confirm the functionality of the main elements. Further<br />
development plans are presented.<br />
I. INTRODUCTION<br />
A fast pre-trigger decision (which can be made within<br />
one 25 ns bunch crossing) for the ALICE experiment at<br />
the LHC should handle the following functions [1]:<br />
(i) precise T0 determination (better than 50-100 ps<br />
resolution);<br />
(ii) centrality of the collision determination;<br />
(iii) min-bias pre-trigger production within 100 ns after<br />
the collision for the Transition Radiation Detector;<br />
(iv) coordinate of the primary vertex (indication of a collision<br />
within the interaction diamond);<br />
(v) beam-gas interaction signal;<br />
(vi) indication of the pile-up signal.<br />
These functions could be combined into one logic signal<br />
or a set of signals forwarded to the Central Trigger<br />
(For the ALICE collaboration)<br />
Processor (CTP).<br />
This work is a continuation of [2] and [3], combining<br />
both the upgrade of the functional pre-trigger scheme, new<br />
developments and the in-beam test results of the fast detector<br />
and electronics.<br />
II. FUNCTIONAL SCHEME<br />
The fastest pre-trigger decision in the ALICE detector<br />
could be made using the information from two<br />
T0/Centrality disks covering the pseudorapidity region 2.5-3.5<br />
on both sides of the interaction region. The system is<br />
based on the application of Microchannel Plates (MCPs).<br />
The signals from these MCP detectors are very short (less<br />
than 2 ns at the base), they allow very precise timing and<br />
their pulse height is proportional to the multiplicity. It is<br />
possible to use these features for the fastest preliminary<br />
decision making done by the ALICE pre-trigger. The extremely<br />
good timing resolution and counting rate properties<br />
of the MCP-based detector imply the possibility to<br />
obtain the first fast indication of a central event within two<br />
neighbouring 25 ns bunch crossings in the case of pp collisions.<br />
A. Detector<br />
Detectors are placed symmetrically on both sides of<br />
the interaction region [4]. They are hermetically sealed inside<br />
thin-wall disk vacuum chambers. A multianode MCP<br />
isochronous readout system provides the needed high accuracy<br />
in timing measurements. Passive summation of<br />
signals from the anode of the segmented MCP disk gives<br />
a multiplicity signal with very precise timing properties.<br />
The pre-trigger decision is based on information obtained<br />
for each bunch crossing: the total multiplicity for the<br />
event in a given rapidity range (2.6-3.6) and the primary vertex<br />
Z-location within the interaction region, while permitting<br />
the rejection of beam-gas events. A general functional<br />
scheme of a fast trigger is shown in Figure 1.
Figure 1: Functional scheme of the ALICE pre-trigger T0/Centrality detectors and electronics. Two MCP-based disks are situated on<br />
both sides ("backward-forward") of the interaction region. The ALICE Inner Tracking System (ITS), Time Projection Chamber<br />
(TPC) and Transition Radiation Detector (TRD) are shown schematically (not to scale). SUM - passive multichannel summator<br />
of short 2 ns pulses; FA - fast amplifiers; FFO - fast fan-out; DTD - Double Threshold Timing Discriminator; TDC, QDC - time<br />
and charge digital converters; MD - fast multiplicity discriminator; L0*T0 - Fast Programmable Logic Unit.<br />
B. General Scheme<br />
The new scheme of the fast front-end electronics (the<br />
ALICE L0 trigger electronics, or ALICE pre-trigger), integrated<br />
for each half of the two MCP T0/Centrality disks into<br />
one Front-End Electronics Card (FEEC), is represented<br />
here in Fig. 1. The scheme is based on the passive multichannel<br />
summator application which is integrated into the<br />
detector design providing the noiseless precise summation<br />
and isochronous timing for a large area MCP disk while<br />
preserving the charge information.<br />
As previously mentioned [1], precise timing from a<br />
large-area detector implies a high granularity of the detector<br />
elements (cells). This is because individual elements<br />
must be small in order to minimize the signal propagation<br />
spread within a given cell. Using the most straightforward<br />
approach, one would have to develop about 300 fast electronics<br />
channels with very high precision timing properties<br />
matching the total number of pads in a MCP disk that<br />
covers 1 unit of pseudorapidity. In order to simplify the<br />
task, we propose to use the isochronous summation of signals<br />
from many pads belonging to one disk, preserving the<br />
timing precision and good linearity of the pulse heights.<br />
The proposed use of noise-free passive isochronous summators<br />
has the advantage of a strong decrease in the number<br />
of electronic channels and simplifies considerably the<br />
fast logic for multiplicity and timing. The analogue signal<br />
from the passive UHF summator output is used for the<br />
L0 trigger applications (see the splitter FFO in Fig. 1).<br />
In general the duration of a signal from the MCP detector<br />
is about 2 ns with 200-300 ps peaking time. This implies<br />
a UHF requirement for the design and development of the<br />
fast electronics (1 GHz frequency range).<br />
C. Fast Front-End Electronics Cards<br />
Each FEEC integrates preamplifiers, QDC chips, a pipeline<br />
FIFO and a new type of fast TDC chip (we propose to<br />
apply the developments started in [5]). The FEEC also contains<br />
the interface between the L0 trigger electronics and the DAQ<br />
system.<br />
The FEECs are situated close to the T0/Centrality<br />
MCP disks, each serving one half of a disk or<br />
the whole disk (the baseline option). The FEECs contain<br />
the following (programmable) units:<br />
(i) fast input signal splitters (FFO) matched in<br />
impedance with transmission lines coming from the fast<br />
pre-amplifiers (50 Ω) and the inputs to the discriminators;<br />
(ii) a fast analogue single threshold discriminator for<br />
multiplicity analysis (MD);<br />
(iii) a fast timing discriminator (TD) which provides a<br />
precise time mark of the incoming analogue sum signal for<br />
a given MCP half-disk;<br />
(iv) a fast TDC for precise timing of the incoming signal;<br />
(v) a fast 8-bit QDC for charge digitisation of the signal<br />
for a given disk;<br />
(vi) a pipeline for storing the MCP disk data during<br />
the L0 decision making. Only about 240 bytes per card are<br />
required for storage of data from all 40 MHz bunch crossings<br />
during a 3 µs period (here we apply some extra margins);<br />
(vii) the place for the TTC adaptors (TTCrx chip) and<br />
elements for the connection to the DDL (this should be<br />
developed in the future using the standard approach for<br />
ALICE).<br />
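The pipeline sizing in item (vi) can be cross-checked with a one-line calculation; the two-bytes-per-crossing figure is an assumption chosen to reproduce the quoted margin:

```python
# Cross-check of the ~240-byte pipeline figure, assuming the stated
# margin corresponds to two bytes per bunch crossing.
F_BC = 40e6    # LHC bunch-crossing frequency, Hz
T_L0 = 3e-6    # L0 decision-making time to cover, s

depth = round(F_BC * T_L0)     # bunch crossings to buffer
bytes_per_card = depth * 2     # assumed 2 bytes per crossing, with margin
print(depth, bytes_per_card)   # 120 240
```

The 120-crossing depth is consistent with the ~120-cell pipeline mentioned later for the multiplicity data.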
D. Functions<br />
Centrality of collisions:<br />
We use the pulse height analogue sum from the MCP disk<br />
to make selections based on multiplicity. This is done<br />
by a fast Multiplicity Discriminator [6] which gives a<br />
logic signal as an indication of a high-multiplicity event. A<br />
minimum-bias pre-trigger signal could be delivered within<br />
100 ns directly to the ALICE Transition Radiation Detector<br />
(the TRD start). Selection on multiplicity, which is<br />
done by the fast Multiplicity Discriminator (MD), would<br />
provide the earliest L0 input signal coming to the Central<br />
Trigger Processor [8].<br />
Event vertex location:<br />
The fast vertex determination is done by time-of-flight<br />
difference measurements (a 50 ps timing resolution provides<br />
about 1.5 cm accuracy).<br />
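The arithmetic behind this can be sketched as follows; the function and sign convention are illustrative, and the exact accuracy factor depends on how the resolutions of the two disks are combined:

```python
# Sketch of the vertex determination from the left/right arrival-time
# difference (names and sign convention are illustrative).
C_CM_PER_S = 2.998e10   # speed of light, cm/s

def vertex_z(t_left, t_right):
    """Primary-vertex Z (cm) from arrival times (s) at the two disks;
    positive Z towards the 'left' disk in this convention."""
    return 0.5 * C_CM_PER_S * (t_left - t_right)

# The scale quoted above: light travels ~1.5 cm in 50 ps.
print(f"{C_CM_PER_S * 50e-12:.2f} cm")
```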
Precise T0 signal for TOF system:<br />
The general LHC TTC distribution (“the LHC clock”)<br />
is included as the most efficient and independent “start”<br />
signal in TOF measurements for each of the MCP disk detectors.<br />
Any possible TTC jitter would not affect these timing<br />
measurement results because the present design implements<br />
isochronous measurements done using the two disks.<br />
Proposals for a TOF measuring system using very fast<br />
time-to-digital precise converters (TDC) were suggested<br />
earlier [3].<br />
A precise T0 signal could be supplied in two ways:<br />
1) by the hardware electronics (a "meantimer" device)<br />
that is supposed to provide the T0 relevant to the collision<br />
vertex coordinate Z;<br />
2) by software TOF off-line data treatment using precise<br />
"left" and "right" T0 data obtained by the relevant<br />
TDCs. This is considered the most suitable option.<br />
Pre-trigger signal:<br />
Logic signals L0*T0*Centrality from the TDC and from<br />
the multiplicity discriminator are the result of two criteria<br />
(multiplicity and vertex location) being satisfied for each<br />
MCP disk. The Programmable Fast Coincidence Logic<br />
(PL) provides the necessary logic decisions concerning<br />
the location within the interaction diamond, as well as the<br />
pile-up and beam-gas interaction logic signals. A pipeline<br />
about 120 cells deep is proposed to store the 8-bit multiplicity<br />
information from each disk. Thus the reliability of<br />
the fast L0 decisions can be monitored and the multiplicity<br />
data for a given selected event fed to the DAQ.<br />
“Wake-up” signal for TRD<br />
Two Programmable Logic (PL) units (left and right)<br />
placed near the MCP disks are connected by a 1.5 m cable<br />
passing outside the ITS cylinder surface. This thin<br />
signal cable is used to transfer the minimum-bias<br />
MD logic signals. Another two signal cables (left and<br />
right) transfer the TRD pre-trigger signals to the left<br />
and right parts of the TRD pre-trigger Signal Distribution<br />
System (SDS). We assume that a 10 m cable is a<br />
reasonable estimate to bring a signal from the T0/centrality<br />
PL card to the input of the TRD SDS, giving a 60-80 ns<br />
delay after the collision.<br />
T0/Centrality MCP Detector Data Link<br />
A first estimate of a feasible T0/Centrality MCP Detector<br />
Data Link (DDL) arrangement has been made, taking<br />
into account the total amount of data from the<br />
detector (240 bytes/event), the limit on the total readout<br />
time (< 140 µs) and the suggested DDL transfer rate<br />
(100 MBytes/s). A single DDL is sufficient for the start<br />
detector.<br />
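This feasibility statement amounts to the following back-of-the-envelope check, using only the numbers quoted above:<br />

```python
event_size = 240          # bytes per event from the start detector
ddl_rate = 100e6          # suggested DDL transfer rate, bytes/s
readout_budget = 140e-6   # allowed total readout time, s

transfer_time = event_size / ddl_rate   # 2.4 us to ship one event
# 2.4 us << 140 us, so one DDL uses only a small fraction of the budget
budget_fraction = transfer_time / readout_budget
```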
Central Trigger Processor(CTP) User Requirements<br />
The L0 decision, as formulated in [8], is to be made<br />
by the CTP within 1.2 µs, a limit set by the signal<br />
propagation time from the detector to the trigger crate.<br />
This means that (i) the minimal depth of the FEEC pipeline<br />
should provide at least 1.2 µs of storage time and that<br />
(ii) the “wake-up” signal for the TRD should be generated<br />
by the FEEC logic and sent by the shortest path to<br />
the TRD Signal Distribution System within the required<br />
100 ns after the collision.<br />
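Requirement (i) can be checked against the 120-cell pipeline proposed earlier, under the assumption (ours) of one pipeline cell per 40 MHz bunch-crossing clock tick:<br />

```python
import math

BX_PERIOD_NS = 25      # one LHC bunch crossing at 40 MHz = 25 ns
L0_LATENCY_NS = 1200   # CTP L0 decision latency, 1.2 us

# Cells needed to buffer data for the full L0 latency
min_depth = math.ceil(L0_LATENCY_NS / BX_PERIOD_NS)  # 48 cells
# The proposed 120-cell pipeline then carries a 2.5x safety margin
margin = 120 / min_depth
```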
III. SIMULATION<br />
Numerical estimates of the ALICE MCP-disk pre-trigger<br />
efficiency were obtained for various colliding relativistic<br />
nuclei at LHC energies. The existing detector<br />
response functions and signal shapes, the experimental noise<br />
levels of the electronics, and the measured detection<br />
efficiencies for MIPs, gammas and neutral particles were<br />
taken into account. Results are shown in Fig. as a function<br />
of the overall start-detector efficiency. One can see that for<br />
O–O, Ar–Ar, Kr–Kr, Sn–Sn and Pb–Pb collisions the pre-trigger<br />
efficiency will be 100% even for small-acceptance<br />
detectors. However, for pp collisions the HIJING and<br />
PYTHIA event generators produce considerably different<br />
predictions in terms of multiplicity and trigger efficiency.<br />
This imposes a requirement of 100% geometrical efficiency<br />
on any detector used as the ALICE start counter when<br />
possible low-multiplicity events are to be studied. Estimates<br />
for high-multiplicity events show that with suitable electronics<br />
a limiting time resolution better than 10 ps could<br />
be achieved (see Fig. 3). These very promising estimates<br />
confirm the noise-free summation concept.
(Figure: trigger efficiency curves, with labels for O–O and Ar–Ar collisions.)
Results of in-beam tests at CERN of the main system<br />
elements, including the fast summators, preamplifiers, the MD<br />
and the DTD, confirmed their functionality. A timing<br />
resolution of 75 ps was measured for MIPs during<br />
the in-beam tests at the CERN PS, close to the predicted<br />
result for a single particle.<br />
The same electronics was used in the first in-beam<br />
studies of multiplicity events made with an MCP detector.<br />
The measurements were performed at the SPS with 40 GeV/c<br />
proton beams, using a 5 cm Pb target positioned<br />
in front of the detector. QDC spectra are shown in<br />
Fig. 5. The lower curve, obtained without any target,<br />
demonstrates the single-particle detector response function.<br />
The high-multiplicity events were obtained with the Pb target<br />
positioned in the beam 13 cm from the detector (the upper<br />
curve). This spectrum is in line with the first simulations,<br />
which showed the wide distribution expected in this case<br />
(multiplicities up to 20 are predicted for the given dynamic range).<br />
VI. PLANS<br />
Further developments of the electronics are foreseen:<br />
A. Development of the front-end electronics card (FEEC), including:<br />
a) the following fast analogue devices: MD, DTD,<br />
preamplifier-shaper for 2 ns signals, timing preamplifier,<br />
FFO and gain-variation unit;<br />
b) investigation of different ASIC (Application Specific<br />
Integrated Circuit) configurations with embedded PL<br />
(Programmable Logic) for making high-speed<br />
(40 MHz clock) pre-triggers and precise (50 ps resolution)<br />
pipelined TOF measurements;<br />
c) the search for and study of a suitable schematic approach<br />
to charge-to-digital conversion for the 40 MHz<br />
pipelined measurement of the charge accumulated<br />
by an MCP disk.<br />
B. Concept investigation, design and layout of the Detector<br />
Data Link/Source Interface Unit (DDL/SIU) protocol and<br />
schematic drawings for its arrangement in the MCP<br />
front-end electronics:<br />
a) data readout algorithms and their circuitry implementation<br />
in the FEE;<br />
b) control algorithms and their circuitry implementation<br />
in the FEE.<br />
C. Investigation and adaptation of the LHC synchronizing<br />
clock system interface to the MCP FEEC.<br />
D. Concept and initial schematic design of the standard<br />
bus-based Fast Programmable Modular Units (PL)<br />
for assembling integrated pre-triggers, aimed at producing<br />
the ALICE pre-trigger and veto signals.<br />
VII. CONCLUSIONS<br />
1) A new functional scheme of the pre-trigger electronics<br />
with very promising features has been developed.<br />
2) The upgraded technology of the microelectronic passive<br />
summator integrated with the fast MCP-based detector<br />
has been successfully tested.<br />
3) Results of the in-beam tests of the multichannel passive<br />
summator and standard, readily available fast electronics modules<br />
(timing discriminators, QDC, TDC) developed for other<br />
applications confirm the expectations from simulations<br />
of the timing resolution and multiplicity signal measurement<br />
(a 75 ps timing resolution was obtained for MIP registration,<br />
and the first multiplicity spectra with a Pb target were<br />
obtained).<br />
4) Further plans involve the development of the FEEC<br />
and continuation of the in-beam studies in a high-multiplicity<br />
environment.<br />
Acknowledgements: The authors are indebted to the NA57<br />
Collaboration for the support of these studies, to L. Ulrice<br />
and L. Dimovasili for their help with targets during the<br />
in-beam tests at CERN, and to J. Stachel, G. Valenti and<br />
C. Williams for their interest and useful discussions. This<br />
work is partially supported by the International Science<br />
and Technology Center, Grant No. 1666, and by grant No. 520<br />
from the Higher Education Ministry of the Russian Federation.<br />
References<br />
[1] ALICE Technical Proposal, CERN/LHCC 95-71,<br />
LHCC/P3, Chapters 7, 9, 10, 15 December 1995.<br />
[2] L.G. Efimov et al., Proceedings of the 2nd Workshop<br />
on Electronics for LHC Experiments, Balatonfured,<br />
23-27 September 1996, pp. 166-169; CERN/LHCC/96-39.<br />
[3] L.G. Efimov et al., Proceedings of the 3rd Workshop<br />
on Electronics for LHC Experiments, London,<br />
September 1997, pp. 359-363; CERN/LHCC/97-60.<br />
[4] M.A. Braun et al., ALICE Technical Design Report<br />
for the T0/Centrality MCP-Based Start Detector,<br />
ISTC Technical Report No. 1666, St. Petersburg,<br />
June 2001.<br />
[5] M. Mota, J. Christiansen, in Abstracts/Summaries of<br />
the 3rd Workshop on Electronics for LHC Experiments,<br />
London, 1997.<br />
[6] L. Efimov et al., “Fast Multiplicity Discriminator”,<br />
7th Workshop on Electronics for LHC Experiments,<br />
LEEC-2001.<br />
[7] C. Neyer, Proceedings of the 3rd Workshop on<br />
Electronics for LHC Experiments, London, 22-26<br />
September 1997, pp. 238-241; CERN/LHCC/97-60,<br />
21 October 1997.<br />
[8] ALICE Central Trigger Processor User Requirements<br />
Document, 1 June 2000, Draft 01 (ALICE Internal Note).
Design and Test of the Track-Sorter-Slave ASIC<br />
for the CMS Drift Tube Chambers<br />
F. Odorici and G.M. Dallavalle, A. Montanari, R. Travaglini<br />
I.N.F.N. and University of Bologna, Viale B. Pichat 6/2, 40127 Bologna, Italy<br />
Fabrizio.Odorici@bo.infn.it<br />
Abstract<br />
Drift Tube Chambers (DTCs) are used to detect muons in<br />
the CMS barrel. Several electronic devices installed on the<br />
DTCs will analyse data at every bunch crossing, in order to<br />
produce a level-1 trigger decision. In particular, the Trigger<br />
Server system has to examine data from smaller sections of a<br />
DTC, in order to reduce the chamber trigger output by a factor<br />
of 24. The basic elements of the Trigger Server system are the<br />
Track-Sorter-Slave (TSS) units, implemented in a 0.5 micron<br />
CMOS ASIC. This paper describes how the TSS ASIC<br />
project was carried out, with emphasis on the<br />
methodology used for design verification with IC simulation<br />
and prototype tests.<br />
I. THE TRACK SORTER SLAVE<br />
In the CMS muon barrel, the DTCs are an important<br />
detector for producing a level-1 trigger decision [1]. The DTC<br />
trigger system is made of a chain of several devices that are<br />
placed on the chambers and arranged on 1080 trigger boards.<br />
Each chamber can have up to seven trigger boards, and each<br />
trigger board hosts a TSS unit. The full functionality of the<br />
TSS is described in [2]. Essentially, it works as a processor<br />
with the following main tasks:<br />
• Track quality sorter. It selects two out of 8 tracks,<br />
based on their quality (transverse momentum, number<br />
of hits, correlation, etc.).<br />
• Background filter. It rejects ghost tracks that can be<br />
erroneously reconstructed within small angular<br />
windows.<br />
• Data watcher. It allows on-line monitoring of the<br />
trigger data, permitting, for example, the exclusion of<br />
noisy channels from the trigger decision.<br />
• Tx/Rx unit. The TSS is mounted on a trigger board,<br />
which covers about (at least) a seventh of a chamber.<br />
The TSS controls the link between the trigger board<br />
and the chamber’s Control-Board.<br />
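The sorting task in the first bullet can be sketched as follows. This is a toy model: the actual encoding of the TSS preview words is not reproduced here, and track candidates are simply modelled as (quality, id) pairs.<br />

```python
def select_best_two(previews):
    """Pick the two highest-quality tracks out of up to 8 candidates.

    Each preview is modelled as a (quality, track_id) pair, where a larger
    quality word means a better track (e.g. correlated, more hits).
    Python's sort is stable, so ties keep their original input order.
    """
    ranked = sorted(previews, key=lambda p: p[0], reverse=True)
    return ranked[:2]

tracks = [(3, 0), (7, 1), (1, 2), (7, 3), (5, 4)]
first, second = select_best_two(tracks)
```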
In order to decide which technology is most convenient<br />
for the device implementation, the following boundary<br />
conditions were taken into account:<br />
1. 1200 TSS units are needed for the whole detector;<br />
2. The device needs 90 I/O pads, 40 of which are bidirectional;<br />
3. Reduced power dissipation: the device has to be<br />
mounted on the chamber itself, where cooling will not be<br />
very effective or powerful;<br />
4. Event processing has to complete within 25 ns, i.e. the<br />
TSS latency has to be 1 BX;<br />
5. The whole functionality is quite complex: in addition to<br />
many base functions it has to provide remote<br />
programmability and on/off-line monitoring, and it has to<br />
include a built-in self-test and a connectivity test<br />
(Boundary Scan);<br />
6. Radiation tolerance: the total dose expected for the TSS in<br />
10 LHC years is small, around 0.01 krad (with a<br />
factor of 10 uncertainty). Instead, Single Event Effects<br />
(SEEs) could cause serious problems for the whole system,<br />
for example a bus-direction flip.<br />
Based on the previous conditions (especially 4-6) we<br />
considered it appropriate to implement the device as an ASIC.<br />
In the IC design, particular effort has been devoted to speed<br />
optimisation, remote programmability and monitoring.<br />
Programmability allows choosing among different processing<br />
options, depending on the local trigger demands of each DTC<br />
section, and permits partial compensation for malfunctioning<br />
trigger channels. Since the TSS units will be hosted on the DT<br />
chambers and access to them will not be easy or frequent, much<br />
effort has been dedicated to redundancy of the remote<br />
programming and monitoring logic. In particular, two<br />
independent access protocols, via serial JTAG and/or via an<br />
ad-hoc 8-bit parallel interface, allow programming and<br />
exhaustive monitoring of each device. A block diagram<br />
summarizing the TSS functionality is shown in Figure 1.
Figure 1: Block diagram of TSS functionalities.<br />
II. THE EXECUTIVE PLAN<br />
The TSS project has been carried out following a<br />
three-step plan:<br />
• Working Rules. First, define rules that fully<br />
describe the TSS functionality. Some of these Rules<br />
are the outcome of the Trigger simulation.<br />
• Joined Design Approach. Design a “machine” which<br />
satisfies the Rules using two independent formalisms:<br />
– a logic description (VHDL);<br />
– a software Device Emulator (C language).<br />
• Common Software Tools Base. In order to master and<br />
verify the considerable design complexity, we<br />
developed a common base of software tools for the IC<br />
simulation and prototype (also production) testing<br />
phases. The common base consists of an event<br />
generator, a device emulator and an output comparator.<br />
The above approach gives several advantages. For<br />
example, the two independent formalisms allow a<br />
reciprocal verification of the design and the correction of<br />
“wrong” or “missing” Rules. The Device Emulator allows<br />
the production of an exhaustive test-vector set and becomes<br />
certified “bug-free” software for prototype verification.<br />
More generally, a common base of software tools gives<br />
advantages in terms of development time and code<br />
correctness.<br />
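The reciprocal-verification idea can be sketched as a simple comparison loop. The function names below are hypothetical stand-ins: the real tools compare VHDL/gate-level simulation output against the C Device Emulator, whereas here both sides are toy implementations of the same "rule" (a top-2 selection) fed with identical random events.<br />

```python
import random

def emulator(event):
    """Stand-in for the C Device Emulator: here, a top-2 selection rule."""
    return tuple(sorted(event, reverse=True)[:2])

def device_sim(event):
    """Stand-in for the VHDL simulation implementing the same Rules."""
    return tuple(sorted(event, reverse=True)[:2])

def cross_check(n_events=1000, seed=42):
    """Feed identical events to both formalisms; return mismatching indices."""
    rng = random.Random(seed)
    mismatches = []
    for i in range(n_events):
        event = [rng.randrange(512) for _ in range(8)]  # 8 x 9-bit inputs
        if emulator(event) != device_sim(event):
            mismatches.append(i)
    return mismatches
```

An empty mismatch list certifies agreement of the two formalisms on that test-vector set; any surviving entry points at a "wrong" or "missing" Rule.<br />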
In Figure 2 the methodology adopted during the<br />
development of the project is shown as a flow diagram. The<br />
test software tools are shared between IC simulation and<br />
prototype test phases.<br />
(Figure block labels omitted. FBT = First Best Track; SBT = Second Best Track.)<br />
Figure 2: Methodology used to develop the TSS device.<br />
III. WORKING TOOLS<br />
The basic tools we adopted during the project life were:<br />
• ASIC development system (Synopsys). To implement<br />
the VHDL design and IC Simulation at various levels<br />
(VHDL, Gate, Post Layout).<br />
• Layout &amp; Prototypes were made via Europractice<br />
(IMEC), which offers low rates for non-profit institutes.<br />
For the same reason, the mask and the small-volume<br />
(less than 10 kpcs) production are also made via<br />
Europractice.<br />
• Custom Test-Software (programs and libraries) was<br />
implemented in C language.<br />
• Custom Test-Hardware was based on a programmable<br />
I/O Pattern Unit VME module, able to operate well<br />
beyond the LHC bunch-crossing frequency, i.e. up to<br />
100 MHz. The Pattern Unit is a very flexible<br />
instrument; in fact, we designed it as a general testing<br />
tool for digital electronic devices. The device controls<br />
up to 128 I/O channels and has many features and<br />
programmable options that make it a suitable tool both<br />
for a prototype test bench and for a test-beam set-up.<br />
A Pattern Unit is shown in Figure 3, and a complete<br />
description of its functionality can be found in [3]. The<br />
device under test (DUT) is connected to the Pattern Unit<br />
through a piggy-back board, inserted on appropriate socket<br />
strips.<br />
Figure 3: Pattern Unit hosting the DUT interface board.<br />
The DUT interface board can also be used with a remote<br />
connection to the Pattern Unit. For example, we used the<br />
remote connection in the radiation-test set-up (see Figure<br />
4). In that case, data were injected and monitored via<br />
JTAG. In correspondence with the chip die (4.5 x 4.5 mm 2 ) the<br />
interface board has a 16 mm diameter hole, in order to<br />
minimize attenuation effects due to extra material.<br />
Figure 4: Radiation Test set up on 60 MeV protons beam.<br />
The test was made at the Cyclotron facility (CRC) in<br />
Louvain la Neuve (Belgium).<br />
IV. PERFORMANCES AND RESULTS<br />
The adopted technology for the TSS is the Alcatel-Mietec<br />
0.5 µm CMOS. A picture of the TSS layout is shown in<br />
Figure 5.<br />
Figure 5: layout of the final TSS device.<br />
An important aspect of the device implementation is the<br />
completeness of the IC simulation and prototype test. For<br />
many digital processors, as for the TSS, the number of<br />
possible I/O patterns and internal device configurations is<br />
so large that an exhaustive test pattern set can easily exceed<br />
millions of events. Moreover, hardware tests usually<br />
require long-term (hours) observations in order to verify<br />
temperature stability and noise immunity. For these reasons<br />
it is useful to have a fast simulation system and fast test<br />
chains. For the TSS, the IC simulation was performed<br />
within the Synopsys CAD, running on a Sun Ultra 10<br />
workstation. The prototype test was controlled by our<br />
custom software, running on a PC (Pentium-II, 333 MHz)<br />
embedded in the same VME crate that houses the Pattern<br />
Unit. The performances of our systems are reported in<br />
Table 1.<br />
Table 1: performances of the simulation and test systems.<br />
Performance | IC simulation | Prototype Test<br />
Event generation | negligible | negligible<br />
Event injection | negligible | 10 Mevt/h (VME limited)<br />
Event processing | 10 kevt/h (CPU limited) | negligible (40 Mevt/s)<br />
Output analysis | negligible | negligible<br />
Full test | ~ 1 Mevt | > 100 Mevt
The behaviour of the TSS under radiation has been<br />
verified using a 60 MeV proton beam at the Cyclotron<br />
facility (CRC) in Louvain-la-Neuve (Belgium). We find the<br />
IC to be fully tolerant (the drawn current is stable) up to 30<br />
krad, while the cross-section for single event effects (SEEs)<br />
was observed to be:<br />
σ_SEE = 8.4 x 10^-15 cm^2/bit.<br />
For the TSS, which is placed on the muon drift tube<br />
chambers, the expected integrated dose is moderate (0.01-<br />
0.1 krad per 10 LHC years). Since the number of SEEs<br />
expected for about 1000 TSS units is also negligible (less<br />
than 1 in 10 LHC years), we can exclude radiation-related<br />
problems for the TSS.<br />
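The scaling behind the "less than 1 SEE in 10 LHC years" statement is linear in cross-section, fluence and bit count. In the sketch below only the cross-section comes from the measurement; the fluence and bit-count values are purely illustrative placeholders, not numbers from this paper.<br />

```python
SIGMA_SEE = 8.4e-15   # measured SEE cross-section, cm^2/bit

def expected_sees(fluence_cm2, bits_per_device, n_devices):
    """Expected number of single event effects over the integration period."""
    return SIGMA_SEE * fluence_cm2 * bits_per_device * n_devices

# Illustrative only: 1e7 hadrons/cm^2 per device over 10 LHC years and a
# few thousand sensitive register bits per TSS (both assumed, not measured)
n = expected_sees(fluence_cm2=1e7, bits_per_device=5000, n_devices=1000)
# n stays well below one SEE summed over the whole ~1000-chip system
```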
The TSS project, during its development, required the<br />
implementation of two prototypes, the first one with<br />
reduced functionality. Each step of the project required a<br />
variable amount of manpower, with different distributions<br />
for the two prototypes. The manpower, expressed in terms<br />
of “full-time work”, is reported in Table 2. The total R&amp;D<br />
full-time work corresponds to more than 4 years, excluding<br />
the time invested in trigger simulation. Most of the<br />
manpower has been devoted to implementing the Test System<br />
and the Test Software.<br />
Table 2: manpower dedicated to each project step, for the<br />
two prototypes, expressed in terms of "full-time work".<br />
Project step | ASIC v. 1 (f.t.w.) | ASIC v. 2 (f.t.w.)<br />
Rules definition | 0.1 y | 0.1 y<br />
VHDL design | 0.4 y | 0.3 y<br />
Device Emulator | 0.3 y | 0.1 y<br />
IC Simulation | 0.2 y | 0.3 y<br />
Test System | 1.2 y | 0.1 y<br />
Test SW | 0.5 y | 0.1 y<br />
Interface board | 0.1 y | 0.1 y<br />
Prototype tests | 0.1 y | 0.1 y (ongoing)<br />
Total R&amp;D | 2.9 y | 1.2 y<br />
V. CONCLUSIONS<br />
The Track-Sorter-Slave, the basic element of the Trigger<br />
Server system in the trigger chain of the CMS Drift Tube<br />
Chambers, has been implemented in a 0.5 micron CMOS<br />
ASIC. The project was organized with a long-term<br />
perspective, and several milestones have been addressed:<br />
two prototypes of increasing complexity, radiation tests,<br />
test-beams and integration tests. The R&amp;D work dedicated<br />
to the project involved about 4 years of manpower. Most of<br />
the work was dedicated to developing the software and<br />
hardware test tools, which, as usual, were not<br />
commercially available.<br />
REFERENCES<br />
[1] CERN LHCC-2000/038, CMS Collaboration, “The<br />
Level-1 Trigger, Technical Design Report”.<br />
[2] CMS TN 1996/078, G. M. Dallavalle et al., “Track<br />
Segment Sorting in the Trigger Server of a Barrel<br />
Muon Station in CMS”;<br />
CMS IN 2001/xxx (in preparation), A. Montanari et<br />
al., “Track Sorter Slave reference manual”.<br />
[3] CERN LHCC 1998/036 291, G. M. Dallavalle et al.,<br />
“Pattern unit for high throughput device testing”;<br />
CMS IN 2001/xxx (in preparation), F. Odorici et al.,<br />
“A high throughput Pattern unit as a testing tool for<br />
digital Integrated Circuits”.
Use of Network Processors in the LHCb Trigger/DAQ System<br />
Abstract<br />
Network Processors are a recent development targeted at<br />
the high-end network switch/router market. They usually<br />
consist of a large number of processing cores, multi-threaded<br />
in hardware, that are specialized in analysing and altering<br />
frames arriving from the network. For this purpose there are<br />
hardware co-processors to speed up e.g. tree look-ups,<br />
checksum calculations, etc. The usual application is in the<br />
input stage of switches/routers, to support de-centralized<br />
packet or frame routing and hence obtain better scaling<br />
behaviour.<br />
In this paper we will present the use of Network<br />
Processors for data merging in the LHCb dataflow system.<br />
The architecture of a generic module will be presented that<br />
has the potential to be used also as a building block of the<br />
event-building network for the LHCb software trigger.<br />
I. INTRODUCTION<br />
Network processors are a relatively new development. The<br />
first one was introduced by C-Port (now Motorola) in 1999.<br />
Nowadays every major chip manufacturer, and many smaller<br />
ones, have one in their product line.<br />
A network processor is a dedicated processor for network<br />
packet (= frame) handling. It provides fast memory and<br />
dedicated hardware support for frame analysis, address look-up,<br />
frame manipulation, check-summing, frame classification,<br />
multi-casting and much more. All these operations are driven<br />
by software, which runs in the network processor (NP) core.<br />
These processors are usually multi-threaded in hardware:<br />
multiple threads run at the same time with zero-overhead<br />
context switching. They were primarily designed as<br />
powerful and flexible front-ends for high-end network<br />
switches and switching routers. Because they are software<br />
driven they can easily be customised to various network<br />
protocols, requirements or new developments. They allow the<br />
creation of really big switching frameworks, because they<br />
decentralise the address resolution and forwarding functions<br />
traditionally performed by a single, powerful control<br />
processor. Thus they enable switch manufacturers to construct<br />
large switches (up to 256 Gigabit ports and more), with<br />
dedicated software, in a short time. Currently the “Gigabit”<br />
generation of network processors is on the market, while the<br />
next one will be able to handle 10 Gigabit speeds (either as<br />
10-Gigabit Ethernet or OC-192). These processors will be<br />
available in the course of 2002. More information can be<br />
found in [1].<br />
J-P. Dufey, R. Jacobsson, B. Jost and N. Neufeld<br />
CERN, CH-1211 Geneva 23, Switzerland<br />
beat.jost@cern.ch, niko.neufeld@cern.ch<br />
We present the use of a specific network processor, the<br />
IBM NP4GS3, to implement a versatile module for LHCb.<br />
The NP4GS3 can be operated either together with a switching<br />
fabric or back-to-back with a second NP4GS3. In this note<br />
we summarise our experience so far, and<br />
demonstrate how an NP-based module can fulfil many uses in<br />
the LHCb data acquisition system, and potentially also in the<br />
Level 1 trigger system.<br />
II. THE IBM NP4GS3<br />
The IBM NP4GS3 is a network processor which<br />
comprises 8 dual processor units (DPPUs), each able to<br />
run 2 of its 4 threads at the same time. Each DPPU<br />
shares a set of coprocessors, which regulate efficient<br />
access to external resources such as port queues, memory,<br />
tree look-up, check-summing and policy. The chip also<br />
includes 4 media access controllers, to which Gigabit Ethernet<br />
Physical Layer Interfaces (PHYs) can be directly attached,<br />
using either the GMII or the 8b/10b encoding. The processor<br />
has a 128 kB fast on-chip input buffer, and a 64 MB output<br />
buffer made from DDR RAM chips. The access to the<br />
memory is via a 128-bit wide data-path. The chip also<br />
includes a PPC 405 core for control, monitoring and<br />
exception handling. This PPC can run an operating system, if<br />
desired. Also attached are various memory interfaces for very<br />
fast and fast address look-up memory, in total up to 64 MB.<br />
For more details the datasheet can be consulted [2].<br />
The data-flow through the NP4GS3 is shown in Figure 1.<br />
Data come in from the ports, are stored in the ingress<br />
memory, can be accessed there, and are then transferred to the<br />
Switch Interface Link (the DASL). From there they can either<br />
reach their own blade or a twin processor connected back-to-back<br />
with the first one. In any case they arrive in the output<br />
buffer, or egress memory, where they can be accessed a<br />
second time before finally being put onto one of several<br />
output queues for transfer over the network.<br />
III. A VERSATILE 8 GIGABIT PORT MODULE<br />
The IBM NP4GS3 has a high-speed interface to connect to<br />
a switching engine, the Data Aligned Synchronous Link<br />
(DASL). In fact it has two such interfaces. When there is no<br />
switching engine to connect to, one of these interfaces can be<br />
used to connect to another NP4GS3, thus effectively creating<br />
an 8-port switch. The other DASL will usually be wrapped onto<br />
itself to ensure full connectivity.<br />
In addition to the DASL the NP4GS3s (can) share the<br />
following resources: power and clock distribution and access<br />
to a PCI interface for configuration and monitoring. Each
NP4GS3 requires its own memories and physical layer<br />
interfaces. The Media Access Controller (MAC) is already<br />
incorporated on-chip.<br />
Figure 1: Main components of the NP4GS3 together with an<br />
indication of the standard data-flow paths. Data can be accessed and<br />
modified at the input/ingress and output/egress stage, leading to two<br />
different event building algorithms. One of the two DASL interfaces<br />
is always wrapped, so that each NP can send to itself. Also indicated<br />
are the various external memories.<br />
Since the network-processor and memory-carrying part of<br />
the module is by far the more complex and dense (in terms of<br />
layers), it is very attractive to separate it off as a<br />
daughter (piggy-back) board, which carries everything<br />
belonging to one NP4GS3 alone (except the physical-layer<br />
interfaces) and feeds out the connections for PCI, DASL<br />
(to the other processor, if present) and the PHYs.<br />
The common, “simpler” functionality and the control<br />
processor (Credit-Card PC) would be housed on a<br />
motherboard. The two boards are described in more detail<br />
in the following.<br />
A. Motherboard<br />
The motherboard will provide all the common “infrastructure”<br />
needed for the operation of the NP4GS3s. This<br />
includes power generation, clock generation and the physical-layer<br />
interfaces. It will also include a Credit-Card PC (CC-PC),<br />
the standard LHCb interface to the Experiment<br />
Control System (ECS). It provides the connectors for the two<br />
carrier cards with the Network Processors. These connectors<br />
carry the following lines and interfaces: DASL, PCI, JTAG<br />
and DMU. PCI is used by the CC-PC to configure and monitor the<br />
NPs and also to communicate with the embedded PowerPC.<br />
JTAG is needed for boundary scan and for hardware<br />
debugging using RISCWatch [3]. The DASL is used to<br />
connect two NPs, as has been said already. The DMU (Data<br />
Mover Unit) interfaces connect the Media Access Controllers<br />
integrated on the NP4GS3 to the physical interfaces. These<br />
could be hot-pluggable, thus allowing more flexibility in<br />
configuring them either as 1000BaseT (CAT 5 copper) or<br />
1000BaseSX (multi-mode fibre).<br />
Except for some length requirements on the DMU and<br />
DASL lines, and some necessary screening for the high-frequency<br />
signals, this motherboard will not be particularly<br />
complex. A simple layout is shown in Figure 2.<br />
Figure 2: Mother-board for the NP4GS3 carriers. It provides power,<br />
clock and PCI to both NPs. Also shown are the 9 physical connectors<br />
(8 for the NPs, one for the CC-PC).<br />
B. Piggy-back Board<br />
This board will be comparatively complex, with ~12 layers,<br />
and has rather stringent requirements on timing and distances.<br />
Having it on a small carrier board therefore has considerable<br />
advantages: the multi-layer board can be kept small, which<br />
eases production, and the flexibility to connect different<br />
physical layers is retained. A simplified block diagram is shown<br />
in Figure 3.<br />
(Figure content: the NP4GS3 with its DDR DRAM data, control and<br />
parity memories, SRAM look-up and scheduler memories, and the PCI,<br />
DASL, DMU and throttle connections.)<br />
Figure 3: The piggy-back or carrier board will house the NP4GS3<br />
processor and its associated memory-chips.<br />
IV. APPLICATIONS IN LHCB<br />
The flexibility of the module allows for a range of<br />
applications in the LHCb Data Acquisition system. They are<br />
briefly discussed here. For details about the LHCb DAQ<br />
system see for example [4].<br />
1) Readout Unit<br />
The most obvious application is as a Readout Unit, which<br />
interfaces and multiplexes front-end links to the Readout<br />
Network. The network processor acts in this context as a fast<br />
sub-event merger, assigning destinations and serving as a<br />
front-end to the event-building switching network. The<br />
Readout Unit always has one (and only one) output to the<br />
Readout Network (RN); it can have several inputs (usually<br />
either 2 or 4). The NP-based Readout Unit will merge the<br />
sub-fragments and send them out to the network, using<br />
addresses determined by a pre-loaded address table. It will<br />
respect flow-control messages (“X-On/X-Off”) from the<br />
network, to cope with local congestion, and it will itself be<br />
able to throttle the trigger when its buffers are about to overspill.<br />
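The Readout Unit logic just described can be pictured with a small host-side sketch.<br />
This is hypothetical Python (the real implementation is NP4GS3 picocode); the class,<br />
buffer capacity and throttle threshold are illustrative assumptions only:<br />

```python
# Hypothetical sketch of the Readout Unit merging logic; names and
# thresholds are illustrative, not taken from the actual NP picocode.

THROTTLE_FILL_FRACTION = 0.8  # assumed fill level that raises a trigger throttle

class ReadoutUnit:
    def __init__(self, n_inputs, address_table, buffer_capacity=64):
        self.n_inputs = n_inputs            # usually either 2 or 4 front-end links
        self.address_table = address_table  # pre-loaded destinations in the RN
        self.buffer = {}                    # event_id -> per-input sub-fragments
        self.capacity = buffer_capacity
        self.throttle_asserted = False      # would throttle the trigger upstream

    def receive(self, event_id, source, payload):
        frags = self.buffer.setdefault(event_id, [None] * self.n_inputs)
        frags[source] = payload
        # Throttle when the buffers are about to overspill.
        self.throttle_asserted = (
            len(self.buffer) >= self.capacity * THROTTLE_FILL_FRACTION
        )
        if all(f is not None for f in frags):
            return self._dispatch(event_id)
        return None                         # sub-event still incomplete

    def _dispatch(self, event_id):
        frags = self.buffer.pop(event_id)
        dest = self.address_table[event_id % len(self.address_table)]
        return dest, b"".join(frags)        # one merged sub-event to the RN
```

A two-input unit then merges `receive(7, 0, b"aa")` and `receive(7, 1, b"bb")` into a<br />
single sub-event sent to the destination taken from the address table.<br />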
2) Front-end Multiplexer<br />
The Front-end Multiplexer (FEM) application is basically<br />
the same as the Readout Unit. The multiplexing factor can be<br />
anywhere between 2 and 7. For multiplexing factors smaller<br />
than 4, two FEMs can be implemented using a single, fully<br />
equipped module.<br />
3) Main Event builder<br />
The main event builder’s task is to collect all the fragments<br />
belonging to a specific event, originating from the RUs. This<br />
will be some 100 fragments, which have to be assembled into<br />
one contiguous event and sent to the Sub-farm Controller<br />
(SFC). The fragments will arrive out of order from the<br />
network, because the LHCb data acquisition does not have<br />
(nor does it want or need) any synchronisation after the<br />
Level-1 derandomisers. They have to be re-arranged into the<br />
correct order, and any empty data and error blocks have to be<br />
merged. Since the rates are low at this stage, one module<br />
could drive 4 event-building streams, that is, it can feed<br />
4 SFCs.<br />
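Out-of-order assembly of this kind reduces to keeping per-event dictionaries and<br />
flushing once all sources have reported. A hedged Python sketch (the real code runs<br />
on the NP; all names here are hypothetical):<br />

```python
# Hypothetical sketch of main event building: fragments from the RUs
# arrive in any order and are re-sorted by source index before being
# concatenated into one contiguous event.

class EventBuilder:
    def __init__(self, n_sources, n_streams=4):
        self.n_sources = n_sources   # some 100 RU fragments per event
        self.n_streams = n_streams   # one module can feed 4 SFCs
        self.pending = {}            # event_id -> {source: fragment}

    def add_fragment(self, event_id, source, fragment):
        frags = self.pending.setdefault(event_id, {})
        # Possibly empty data/error blocks are merged as zero-length payloads.
        frags[source] = fragment if fragment is not None else b""
        if len(frags) < self.n_sources:
            return None              # event still incomplete
        del self.pending[event_id]
        event = b"".join(frags[s] for s in sorted(frags))  # restore RU order
        return event, event_id % self.n_streams            # pick one of the SFC streams
```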
4) Elementary Switching Module<br />
The 8-port module can also be used as the building<br />
block for the switching network itself. Performance-wise<br />
this is definitely no problem, because switching is the<br />
original domain of the network processors from their<br />
conception. The question is rather whether such a switching<br />
network can be cost-effective on a price-per-port basis.<br />
Obviously quite a lot of modules are needed, because one<br />
has to provide the interconnections which serve the purpose<br />
of the backplane in a conventional monolithic switch. There<br />
are, however, studies on how to reduce the number of<br />
modules by making intelligent use of the traffic patterns in a<br />
data acquisition system; see for example [4]. An additional<br />
advantage is that such a module would allow full control<br />
over the switching process and functionality. Flow control,<br />
traffic shaping and check-summing could be implemented at<br />
will and customised for maximum performance in the DAQ.<br />
Furthermore, such a switching network could do the final,<br />
main event building in its last stage.<br />
V. EXAMPLES OF SOFTWARE FOR APPLICATIONS<br />
A. Sub-event merging in a Readout Unit<br />
The task here is to collect up to 7 fragments arriving at a<br />
rate of at most 100 kHz. The fragments have an average size<br />
of a few hundred bytes, which increases after successive levels<br />
of multiplexing. Sub-event building proceeds by analysing the<br />
fragment headers and waiting until all fragments belonging to<br />
an event have been received or a time-out condition has<br />
occurred. In either case event building will start; in the latter<br />
case an error will be flagged in the error block. The frames<br />
will then be connected by adapting the link pointers, moving<br />
as little data as possible. At the boundaries of the original<br />
frames, it sometimes becomes necessary to actually copy some<br />
data to fill from the bottom. New frames are built until a<br />
pre-defined maximum transfer unit has been reached. The<br />
frame is then dispatched, and the procedure is iterated until all<br />
data have been used up.<br />
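The dispatch loop of the last sentences can be condensed into a few lines. A sketch<br />
in hypothetical Python, with an assumed 1500-byte MTU standing in for the<br />
pre-defined maximum transfer unit:<br />

```python
# Sketch of the frame dispatch described above: data are accumulated
# until the maximum transfer unit is reached, the frame is sent, and
# the procedure iterates until all data have been used up.

MTU = 1500  # assumed maximum transfer unit in bytes

def dispatch_frames(fragments, mtu=MTU):
    """Concatenate sub-event fragments and cut the result into frames
    of at most `mtu` bytes each."""
    data = b"".join(fragments)
    return [data[offset:offset + mtu] for offset in range(0, len(data), mtu)]
```

Note that the real picocode links frames by adapting pointers and copies as little<br />
data as possible; the slicing above is only a functional stand-in.<br />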
Care has to be taken to avoid corruption of the static data<br />
due to multiple threads wanting to access them at the same<br />
time. The NP4GS3 provides a powerful semaphore<br />
mechanism to handle these situations.<br />
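The role of the semaphore mechanism can be pictured with a host-side analogy, a<br />
`threading.Lock` standing in for the NP4GS3 semaphore coprocessor (all names here<br />
are hypothetical):<br />

```python
# Host-side analogy: without the lock, two threads appending to the same
# event entry could interleave and corrupt the shared (static) table.
import threading

event_table = {}                     # static data shared by all threads
event_table_lock = threading.Lock()  # stands in for the NP semaphore

def record_fragment(event_id, fragment):
    with event_table_lock:           # acquire/release the "semaphore"
        event_table.setdefault(event_id, []).append(fragment)

threads = [threading.Thread(target=record_fragment, args=(1, i)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert len(event_table[1]) == 8      # all eight fragments survived intact
```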
The performance of egress event building according to<br />
simulation is shown in Figure 4. Only 16 out of 32 threads<br />
have been enabled, so an improvement of at least 50% can<br />
still be expected.<br />
[Plot: Maximum Allowed Fragment Rate [kHz] (0–350) versus Average Fragment<br />
Size [Bytes] (0–600), showing the limit imposed by the NP processing power, the<br />
limit imposed by a single Gb output link, and the range of possible Level-1 trigger<br />
rates.]<br />
Figure 4: Performance of the Egress Event Building as a function of<br />
the average input fragment size. The green area shows the range of<br />
possible L1 trigger rates.<br />
B. High rate event-building<br />
Another application which could be of interest for the<br />
LHCb Level-1 trigger is sub-event merging at high rates of<br />
incoming fragments. Here very small fragments of some 30 to<br />
50 bytes arrive on 2 to 3 links at rates above 1 MHz.<br />
The very high speed of the ingress memory makes it ideal<br />
for performing event building at high data rates and high<br />
trigger rates. The main idea is to store the data fragments until<br />
all those belonging to the same event have arrived, and then to<br />
copy the payload to form a new fragment to be sent towards<br />
the output ports. After stripping off the transport headers, only<br />
part of the incoming data is transferred to the egress side after<br />
the event-building process, together with the new transport<br />
structure for the outgoing event fragment. It has been shown<br />
that this strategy allows event-building performance far<br />
beyond the capabilities of the output port. The performance is<br />
shown in Figure 5.<br />
[Plot: Maximum Acceptable L0 Trigger Rate [MHz] (0–2) versus Average Input<br />
Fragment Payload [Bytes] (0–120), showing the limit imposed by the NP<br />
performance, the limit imposed by a single Gb output link, and the maximum<br />
Level-0 trigger rate.]<br />
Figure 5: Performance of ingress event building from simulation as<br />
a function of the average incoming packet size. The straight line<br />
shows the fixed Level-0 trigger rate of 1.1 MHz. The limitation on<br />
performance comes from the output-link bandwidth only.<br />
VI. MEASUREMENTS WITH THE IBM POWERNP<br />
REFERENCE PLATFORM<br />
The results presented so far have been obtained using<br />
simulation. It is therefore interesting to assess how reliable the<br />
timing information of the simulation is. Since the latest<br />
version of the processor (revision 2.0) is not yet available on a<br />
reference platform, some parts of the sub-event building code<br />
cannot run unmodified on the reference design hardware.<br />
However, it has been possible to compare a representative set<br />
of algorithms in simulation and on real hardware. The results<br />
agree well. They are described in the following.<br />
A. The IBM NP4GS3 Reference Platform<br />
The IBM NP4GS3 Reference Kit [6] aims at providing<br />
users with an implementation which allows exploring most of<br />
the functions of the processor. The chassis with the important<br />
cards is shown in Figure 6. The configuration used for our<br />
measurements consisted of a chassis, a control processor (a<br />
PPC 705 based cPCI computer) and two carrier boards with an<br />
NP4GS3 and 4 daughter cards, each with 2 Gigabit Ethernet<br />
SX optical ports. Furthermore we had a RISCWatch probe [3]<br />
attached to the JTAG interface of one of the blades, which<br />
allows the network processor’s internal registers to be accessed<br />
directly from the NPSCOPE debugger (via Ethernet).<br />
Figure 6: Main components of the NP4GS3 reference platform. The<br />
packet routing switch board is not included in our set-up.<br />
B. Test set-up<br />
The test set-up shown in Figure 7 consisted of 4 Netgear<br />
Gigabit Ethernet NICs running dedicated firmware, which<br />
turned them into traffic generators and sinks. The internal<br />
clock of the NICs allowed latencies to be measured with<br />
approximately 1 µs precision. More information about these<br />
“smart NICs” can be found in [7].<br />
Figure 7: Test set-up for the NP4GS3 reference kit. The data are fed<br />
and read back using four Tigon 2 based Gigabit Ethernet NICs,<br />
shown at the top of the figure. Download of NP software and<br />
monitoring of the NP is done via JTAG using the RISCWatch probe.<br />
The network processor is accessed remotely via the<br />
RISCWatch probe. This configuration allowed 3-to-1 event<br />
building to be tested. Each NIC has an internal clock, with an<br />
intrinsic resolution of 1 µs, which has been used to measure<br />
the latencies imposed by the event-building code. The frames<br />
are generated, and the time differences evaluated, in the<br />
receiving NICs, which are the same as the senders. To<br />
synchronise the NICs (the sources), which is necessary for the<br />
reasons outlined above, a special frame is sent by one of the<br />
NICs to the NP, which then multicasts it to all NICs. This<br />
triggers the collective sending of the packets within 0.5 µs.<br />
C. Results<br />
The main aim of all measurements was to understand the<br />
accuracy of the simulation. The simulation is claimed to be<br />
cycle-precise. It takes into account the contention between<br />
threads. However, it does not accurately simulate all external<br />
resources with their associated latencies. It was therefore<br />
especially interesting to see to what extent simulation results<br />
can be trusted. One problem with these measurements is that<br />
the version of the NP on these boards is not the latest. It lacks<br />
the semaphore coprocessor, a unit specifically designed for<br />
efficient resource protection to avoid race conditions in a<br />
multi-threaded application. Since our code heavily relies on<br />
this feature, it was necessary to tune the test conditions<br />
somewhat. This has been done by doing either a single thread<br />
measurement or by reducing the spread in the arrival time of<br />
the fragments by careful synchronisation. This does not<br />
change the run-time of the code, the simulations for both<br />
versions agree on that. It does, however, avoid the<br />
synchronisation problems which would otherwise be<br />
prevented by the semaphore coprocessor. We are<br />
confident that these measurements give a realistic impression<br />
of the performance of our sub-event building codes.<br />
Measurements have been done only in the “high rate”<br />
environment, which means short frames at high rates, since<br />
this is the more demanding and critical application.<br />
Table 1: Comparison of measurements with simulation results. The<br />
handling time per fragment is shown in microseconds. The measured<br />
times are given first raw, as measured, and then corrected, with the<br />
round-trip and handling time in the NICs subtracted.<br />
Scenario | Measurement [µs/fragment] | Simulation [µs/fragment]<br />
1 source, 1 thread | 6.6 (4.9) | 4.9<br />
4 sources, 1 thread | 4.5 (2.8) | 3.2<br />
1 source, 16 threads | 1.7 (0.0) | 0.5<br />
First the round-trip time of a packet has been measured.<br />
This is necessary to subtract any overheads coming from the<br />
transport over the DASL, the cables and especially the<br />
creation and time-stamping in the NICs, which are not<br />
included in the simulation. This time has been found to be<br />
1.7 µs. It can be seen as the intrinsic resolution of the<br />
measurements. Since the whole system is pipelined (many<br />
threads working), time intervals smaller than this intrinsic time<br />
cannot be accurately measured. This is the reason for the<br />
apparently strange value 0.0 in the last row of Table 1.<br />
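The corrected values in Table 1 then follow by simple subtraction of this 1.7 µs<br />
overhead from the raw measurements:<br />

```python
# Reproducing the "corrected" column of Table 1: raw measured handling
# times minus the 1.7 us round-trip overhead of the NICs and cables.
ROUND_TRIP_US = 1.7

def corrected(raw_us):
    return round(raw_us - ROUND_TRIP_US, 1)

raw = [6.6, 4.5, 1.7]                # us/fragment, as measured
print([corrected(r) for r in raw])   # -> [4.9, 2.8, 0.0]
```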
Several scenarios have been tried, varying the number of<br />
active threads and active sources. The results are summarized<br />
in Table 1.<br />
VII. CONCLUSIONS<br />
In this paper we have presented the use of a Network<br />
Processor for several applications in the LHCb Data<br />
Acquisition System. An integrated module has been<br />
described, whose function would be determined only by the<br />
software driving it, providing maximum flexibility and<br />
excellent debugging capabilities.<br />
Two such sample software codes have been developed and<br />
benchmarked using a cycle-precise simulation.<br />
The simulation results have been compared with<br />
measurements obtained with the reference platform of the<br />
IBM PowerNP. The results are in very good agreement,<br />
making us confident that we will have one, and only one,<br />
powerful, versatile module for the LHCb Data Acquisition.<br />
VIII. REFERENCES<br />
[1] Network Processor Central, [online]<br />
http://www.linleygroup.com/npu<br />
[2] IBM PowerNP NP4GS3 Datasheet, [online]<br />
http://www3.ibm.com/chips/techlib/techlib.nsf/techd<br />
ocs/852569B20050FF7785256983006A3809<br />
[3] RISC Watch Debugger, [online]<br />
http://www3.ibm.com/chips/techlib/techlib.nsf/produ<br />
cts/RISCWatch_Debugger<br />
[4] B. Jost, “The LHCb DAQ system”, Presentation at<br />
the DAQ 2000 Workshop, Lyon<br />
[5] J. P. Dufey et al., “Results from Readout Network<br />
Simulation” LHCb Note in preparation<br />
[6] IBM PowerNP NP4GS3 Reference Platform,<br />
[online]<br />
http://www3.ibm.com/chips/techlib/techlib.nsf/techd<br />
ocs/546B9AC56334EA0F872569F9005F7DA5<br />
[7] LHCb Event-Building, [online]<br />
http://lhcb-comp.web.cern.ch/lhcb-comp/daq/Event-<br />
Building/default.htm
STATUS OF ATLAS LAr DMILL CHIPS<br />
C. de La Taille, LAL, Orsay, France (email: taille@lal.in2p3.fr)<br />
Abstract<br />
This document reviews the status and performance of<br />
the ten DMILL chips developed in 2000-2001 by the<br />
ATLAS liquid argon (LAr) community in order to ensure<br />
the radiation tolerance of its front-end electronics.<br />
1. INTRODUCTION<br />
The LAr front-end electronics is located right on the<br />
cryostat in dedicated front-end crates [1]. These house<br />
four different species of boards:<br />
• Front-end boards (FEB), which bear preamplifiers,<br />
shapers, analog memories, ADCs and optical outputs.<br />
• Calibration boards, to generate 0.2% accuracy<br />
calibration pulses.<br />
• Tower builder boards (TBB), which perform analog<br />
summation and re-shaping for the LVL1 trigger.<br />
• Controller boards, which handle TTC and serial<br />
link (SPAC) control signals.<br />
All these boards have been produced in several copies<br />
in order to equip the module 0 calorimeter, and have been<br />
extensively used in the testbeam for the last three years.<br />
Their performance has met the requirements in terms of<br />
signal, noise and density at the system level on several<br />
thousands of channels [2]. However, they make use of<br />
many COTS, in particular FPGAs, which are not radiation<br />
tolerant.<br />
Since then, several developments have been realised in<br />
order to design the “final” ATLAS boards, based on the<br />
same architecture but completely radiation tolerant, by<br />
migrating most of the COTS into DMILL ASICs [3]. A<br />
milestone has been set to get the first boards by end-2001<br />
and a full crate by the end of 2002.<br />
The radiation levels anticipated at the LAr crate<br />
location are 50 Gy in 10 years and 1.6×10¹² n/cm². Taking<br />
into account the safety factors required by the rad-tol<br />
policy [4], the chips must be qualified up to 0.2-3 kGy<br />
(20-300 krad) and 1-5×10¹³ n/cm², depending on the<br />
process, as explained in ref. [5]. For DMILL chips, the<br />
radiation tolerance criteria (RTC) are:<br />
• RTC_TID = 3.5 × 1.5 × 2 × 50 = 5 kGy<br />
• RTC_NIEL = 5 × 1 × 2 × 1.6×10¹² = 1.6×10¹³ n/cm²<br />
• RTC_SEE = 5 × 1 × 2 × 0.7×10¹¹ = 7.7×10¹² h(>20 MeV)/cm²<br />
The performance of all these DMILL chips and in<br />
particular the yield and results of irradiation and SEE tests<br />
are presented below.<br />
On behalf of the LArG collaboration<br />
2. CALIBRATION BOARDS [6]<br />
The calibration board houses 128 pulsers which<br />
generate accurate pulses to simulate the detector signal<br />
over the full 16-bit dynamic range. It is based upon 128<br />
0.1% precision DC current sources and HF switches which<br />
transform the DC current into fast pulses with a 400 ns<br />
exponential decay. The calibration board is used to<br />
inter-calibrate the 160 000 readout channels and measure<br />
their three gains.<br />
2.1 16-bit DAC<br />
A 16-bit DAC with 10-bit accuracy is necessary to cover<br />
the full dynamic range of ATLAS, and COTS did not<br />
provide adequate radiation tolerance. Therefore, a 16-bit<br />
R/2R ladder DAC has been made with 16 switched<br />
identical current mirrors. As there is only one DAC per<br />
board, external precision resistors (0.1%) can be<br />
accommodated. To reduce the sensitivity to VBE mismatch<br />
and variations with temperature, the emitters of the current<br />
sources are strongly degenerated. This ladder DAC has<br />
been developed and tested successfully in AMS 0.8 μm<br />
BiCMOS and submitted to DMILL in May 2000, with<br />
improved temperature stability (1 μV/K).<br />
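The ideal transfer function of such a ladder is purely binary-weighted; a minimal<br />
sketch (idealised, with a hypothetical 1 V full scale; the actual chip drives the<br />
ladder with switched current mirrors):<br />

```python
# Ideal 16-bit R/2R ladder: each bit contributes a binary-weighted
# share of the full scale. The full-scale value here is an assumption.

def r2r_dac(code, n_bits=16, v_full_scale=1.0):
    assert 0 <= code < 2 ** n_bits
    return v_full_scale * code / (2 ** n_bits)

lsb = r2r_dac(1)        # one LSB: 1 V / 65536, about 15 uV
half = r2r_dac(0x8000)  # the MSB alone gives half the full scale
```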
Figure 1: Schematic diagram of the 16-bit ladder DAC<br />
19 chips have been received in March 2001, among which<br />
18 were fully functional, giving a yield of 94% for a chip<br />
area of 6.3 mm².<br />
The measured performance is 0.01% integral<br />
non-linearity over the 3 gains, as shown in Fig. 2. The<br />
temperature stability has been measured on 10 chips<br />
between 20 and 65 °C to be +0.01%/K.<br />
Figure 2: Residuals of a linear fit on the DAC output<br />
over the three shaper ranges (10 mV, 100 mV, 1 V)<br />
10 chips have been irradiated up to 9×10¹³ n/cm² at<br />
CERI (20 MeV) and to 2 kGy of ⁶⁰Co gammas at Saclay,<br />
and measured on line. The results, shown in Figs. 3 and 4,<br />
indicate good tolerance to neutrons (0.1% variation up to<br />
RTC) but also a visible drift with gammas (-0.5% after<br />
2 kGy). As the bit ratios turned out to be very stable, the<br />
decay has been traced down to a change in the reference<br />
current: in effect a one-mV VT shift in the current mirror is<br />
enough to cause the effect.<br />
Figure 3: DAC output voltage under neutron irradiation (the<br />
RTC_NIEL level and a 0.2% band are indicated).<br />
Although acceptable as such, enough time was available<br />
for an iteration with an improved reference source: a<br />
band-gap reference has been incorporated, and the current<br />
mirror has been replaced by a reference source built around<br />
a low-offset opamp, as described below.<br />
2.2 Low offset opamp<br />
Figure 4: DAC output voltage under gamma irradiation (the<br />
RTC_TID level and a 0.1% band are indicated).<br />
The DAC voltage is distributed throughout the board<br />
to 128 channels in order to produce the 2 μA – 200 mA<br />
precision currents. The voltage-to-current conversion is<br />
based upon a low-offset opamp and a 0.1% 5 Ω resistor.<br />
The opamp offset should not be larger than the DAC<br />
LSB = 16 μV. Again, COTS did not provide adequate<br />
radiation tolerance, and ASICs were developed along two<br />
paths: static and auto-zero. After prototyping in AMS,<br />
the static configuration was chosen and translated into<br />
DMILL.<br />
As shown in Fig. 5, the circuit is built around a bipolar<br />
differential pair and external precision collector resistors<br />
(150 kΩ, 0.1%). The transistors are 10×1.2 NPN, mounted<br />
in a centroid configuration; a larger transistor size would in<br />
principle further reduce the offset, but would degrade the<br />
radiation hardness due to a too low current density. This<br />
input stage provides a gain of 127, enough to make the<br />
second-stage offset negligible. The chips are sorted to be<br />
within ±100 μV.<br />
Figure 5: Schematic diagram of the low-offset opamp<br />
The second stage is built around a cascoded PMOS<br />
differential pair, again in a centroid configuration. A bank<br />
of 5 binary-scaled current sources allows up to 20% of the<br />
static current to be added or removed, allowing further<br />
trimming down to ±10 μV. The total open-loop gain is<br />
80 000, in good agreement with measurements. The output<br />
stage is a large (20,000/0.8) PMOS in order to drive the<br />
large maximum output current (200 mA).<br />
40 chips have been received in March 2001, among<br />
which 37 were fully functional, giving a functional yield of<br />
94% for a chip area of 2 mm². The circuits have then been<br />
tested for offset performance. As anticipated, the offset is<br />
dominated by the input pair, and 27 chips were found<br />
within ±100 μV, giving a total yield of 70%, similar to<br />
what was obtained with the AMS prototype.<br />
Figure 6: Offset distribution of the input bipolar pair.<br />
Together with the DAC, ten opamps have been<br />
irradiated with photons and neutrons. As can be seen in<br />
Fig. 7, the offset has remained stable within 15 μV up to<br />
2 kGy. Incidentally, an AMS version which was left there<br />
died immediately after RTC (yellow curve).<br />
The test with neutrons was also performed far in excess<br />
of the requirements. After 2.5×10¹² n/cm², the circuits could<br />
no longer be measured on line because of the failure of a<br />
discrete NPN transistor commanding the multiplexing<br />
relays. Notwithstanding, the circuits were measured again<br />
after the irradiation and had remained stable.<br />
2.3 Calibration logic<br />
Figure 7: Neutron irradiation of the low-offset opamp (the<br />
RTC_NIEL level, a 25 μV band and the death of the multiplexed<br />
readout are indicated).<br />
The calibration boards used on module 0 were<br />
controlled by elaborate digital circuitry which allowed a<br />
full calibration sequence (ramping the DAC, changing<br />
patterns…) to be loaded on board [7]. Although practical<br />
and very time-efficient, this circuitry was based on<br />
memories and numerous FPGAs which would not operate<br />
reliably in the high-radiation environment. It has thus been<br />
decided to simplify the control logic and to load the run<br />
parameters (DAC value, delays, pulsing patterns) through<br />
the SPAC serial bus. These parameters are decoded from<br />
the I²C local bus and stored in registers which have again<br />
been designed in DMILL. No particular SEU mitigation<br />
has been included, as the calibration board is idle 99% of<br />
the time and an SEU results only in a wrong calibration<br />
pulse, which can be discarded in the RODs.<br />
The chip covers an area of 16 mm² and has been<br />
submitted in May 2000. 20 chips have been received in<br />
March 2001, among which 17 were functional, giving a<br />
yield of 70%.<br />
Two chips have subsequently been tested for SEU at<br />
Louvain with 70 MeV protons. No SEE have been<br />
observed up to a fluence of 3×10¹² p⁺/cm². Extrapolating<br />
the corresponding cross-section limit to ATLAS yields one<br />
SEE per 2 days, assuming the calibration is used 1% of the<br />
time.<br />
3. FRONT-END BOARDS [8]<br />
The front-end board has necessitated the development<br />
of 6 DMILL chips in order to ensure the integration of<br />
all the elaborate digital electronics necessary to operate<br />
the board. Except for the preamplifier (bipolar hybrid) and<br />
the shaper (BiCMOS AMS), almost all of the front-end<br />
board is built around DMILL chips. The analog pipelines<br />
(SCA) which follow the shapers have been designed from<br />
the start in DMILL. They make use only of the CMOS<br />
components and of full-custom logic running at 40 MHz.<br />
The read and write addresses necessary to operate the<br />
SCA with no dead time are generated by an SCA controller.<br />
The gain selection at the SCA output is also handled by<br />
a dedicated ASIC: the gain selector. The data are then<br />
multiplexed to 16 bit at 80 MHz (MUX chip), to be fed into<br />
the Glink serializer and output optically. Furthermore, the<br />
parameters necessary to operate the board are loaded by a<br />
serial link (SPAC). A serial-link decoder (SPAC slave) is<br />
necessary, as well as a configuration controller.<br />
3.1 Switched Capacitor Arrays (SCA)<br />
The analog pipeline is a key element of the front-end<br />
board, as it stores the analog signal until the reception of<br />
the LVL1 trigger in a bank of 144 capacitors with a 13-bit<br />
dynamic range. Several prototypes have been realized in<br />
DMILL in the last 3 years, as well as in radiation-soft<br />
technologies (AMS 0.8 μm, HP 0.6 μm), with similar<br />
electrical performance [9].<br />
As more than 50 000 good chips will be necessary,<br />
corresponding to more than 200 wafers, the yield is of<br />
particular concern. Most of the batches received so far<br />
have exhibited a satisfactory yield above 65%, except a<br />
recent one as low as 10%, due to a few randomly<br />
distributed leaky switches. This process defect has<br />
subsequently been understood and fixed in a later batch.<br />
Before mass production starts in 2002, the final<br />
engineering run was submitted at the end of 2000 and<br />
received in March 2001. More than 2500 circuits have been<br />
measured with the automated testing setup [10].<br />
batch | date | # chips | yield<br />
V 1.1 | 6/98 | 30 | 90%<br />
V 1.2 | 8/98 | 30 | 80%<br />
V 2 wafer 12 | 8/99 | 68 | 50%<br />
V 2 wafer 4 | 8/99 | 49 | 84%<br />
V 3.1 | 12/99 | 18 | 10%<br />
V 3.2 | 7/00 | 35 | 65%<br />
V 3.2 eng. run | 3/01 | 2534 | 65%<br />
An important parameter in the acceptance cuts is the<br />
leakage current. Although most of the cells exhibit very<br />
low leakage (2 fA on average), the requirement of having<br />
all cells on the sixteen channels (16×144) below 5 pA is<br />
enough to induce a 4% yield loss.<br />
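The 4% figure is consistent with a tiny per-cell defect probability, since a chip<br />
passes only if all 16×144 cells satisfy the cut. A back-of-the-envelope sketch,<br />
assuming independent cells:<br />

```python
# Per-cell leaky probability implied by a 4% chip-level yield loss,
# assuming the 16*144 cells fail independently.
cells_per_chip = 16 * 144          # 2304 cells per chip
chip_pass_probability = 0.96       # 4% loss from the leakage cut alone

# P(chip passes) = p_cell_good ** n  =>  solve for the per-cell defect rate
p_cell_bad = 1 - chip_pass_probability ** (1 / cells_per_chip)
print(f"{p_cell_bad:.2e}")         # -> 1.77e-05, i.e. roughly 18 ppm per cell
```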
Figure 9: Leakage current of all the SCA cells<br />
3.2 SCA controller (SCAC)<br />
In the module 0 FEB, the SCA controller was implemented<br />
in a Xilinx XC4036, based on 0.35 μm technology. This<br />
component has been extensively tested for radiation<br />
tolerance and has shown a significant supply-current<br />
increase after 400 Gy. Moreover, SEU tests have been<br />
carried out and have shown a cross-section for SEU of<br />
σ_SEU = 2.7×10⁻⁹ cm², a LET for the configuration switches<br />
of 22 MeV and, worst of all, one latch-up event [11].<br />
It has then been decided to migrate this element to<br />
DMILL. However, the chip complexity and critical timings<br />
turned out to be only marginally achievable and resulted in<br />
a very large chip area (80 mm²), for which the yield was<br />
likely to be rather small (20%). Besides, due to the large<br />
area, it has not been possible to include any<br />
error-correction mechanism, leaving the SEU problem<br />
open. For such a chip, the SEU effects are rather serious,<br />
as read or write pointers could get systematically wrong.<br />
A fallback in 0.25 μm technology has thus also been<br />
designed, including SEU error-correction logic, and<br />
submitted in March 2001.<br />
40 DMILL SCA controllers have been received in June<br />
2001, among which 28 have passed all the digital tests,<br />
giving an unexpectedly high yield of 70%. Nine chips<br />
have been tested for maximum clock frequency and all ran<br />
up to 50 MHz. The chips also successfully passed a<br />
burn-in test.<br />
Four chips have then been tested for SEU at TRIUMF<br />
with 74 MeV protons. The associated TID was around<br />
40-70 krad. After 4.6×10¹⁰ p⁺, all chips needed a power-on<br />
cycle to reset. This has been traced to a fault in the<br />
(analog) power-on reset, which has subsequently been<br />
removed. Extrapolated to ATLAS, each SCAC would need<br />
a reset every 70 days, amounting to a reset every hour<br />
over the whole experiment.<br />
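The two quoted reset rates fix the implied chip count: one reset per 70 days per<br />
chip becomes one per hour experiment-wide for roughly 1680 installed SCACs, as a<br />
quick check of the arithmetic shows:<br />

```python
# One reset per 70 days per chip vs. one reset per hour experiment-wide:
# the ratio gives the number of SCA controllers implied by the two rates.
per_chip_interval_h = 70 * 24            # 70 days = 1680 hours
experiment_interval_h = 1                # one reset per hour overall
implied_n_scac = per_chip_interval_h / experiment_interval_h
print(implied_n_scac)                    # -> 1680.0 controllers
```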
3.3 Gain selector<br />
The SCA is followed by a 12-bit 5 MHz ADC (AD9042)<br />
which has been qualified by CMS [12]. Two ADCs feed a<br />
gain-selector chip which chooses the correct gain and<br />
formats the data for the subsequent RODs. As the chip<br />
stores two thresholds per channel for the gain selection,<br />
SEUs have been mitigated with a Hamming correction code.<br />
The result is a 21 mm² ASIC, submitted to DMILL in<br />
September 2000. The same chip has also been submitted in<br />
0.25 μm alongside the SCA controller.<br />
29 DMILL chips have been received in May 2001 and 27<br />
were functional, giving a yield of 93%. Five chips have<br />
been tested for SEU at the Harvard facility with 50, 100 and<br />
158 MeV protons. The single-event upsets can be sorted into<br />
two categories: single-bit errors (SBE), corresponding to<br />
one bit flip in the registers, which is corrected by the<br />
error-correction algorithm, and single-event upsets<br />
corresponding to a wrong bit in the output data, which<br />
leads to a rejected event in the RODs. The measurements<br />
are shown in the table below. Coarse extrapolation to<br />
ATLAS, multiplying the cross-section by the RTC hadron<br />
flux (supposed flat), leads to 1 SBE/30 min and<br />
1 SEU/168 min for the full calorimeter (13 000 chips).<br />
Energy [MeV] | Fluence [10¹³/cm²] | #SBE | σ_SBE [10⁻¹³ cm²] | #SEU | σ_SEU [10⁻¹³ cm²]<br />
50 | 2.4 | 0 | - | 0 | -<br />
100 | 4.0 | 14 | 3.5 | 4 | 1.0<br />
158 | 20.8 | 212 | 10.2 | 38 | 1.8<br />
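The coarse extrapolation quoted above amounts to multiplying a cross-section by a<br />
flux and the chip count. The sketch below shows the formula only; the flux value<br />
used here is an illustrative assumption, not the number used in the paper:<br />

```python
# Coarse SEU-rate extrapolation: rate = sigma * flux * n_chips.
# The flux below is an assumed stand-in for the RTC hadron flux.

def seu_rate_per_s(sigma_cm2, flux_per_cm2_s, n_chips):
    return sigma_cm2 * flux_per_cm2_s * n_chips

rate = seu_rate_per_s(10e-13, 3e4, 13_000)   # sigma ~ 10e-13 cm^2 (158 MeV row)
interval_min = 1 / rate / 60                 # ~43 min between SBEs, the same
                                             # order as the quoted 1 SBE/30 min
```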
3.4 Optical output<br />
The formatted digital data are sent out after the LVL1<br />
trigger through an optical fiber. Five samples of each of the<br />
128 channels are multiplexed at 40 MHz, resulting in a<br />
2.6 Gbit/s output rate. The baseline option was to use the<br />
HP Glink, but extensive irradiation studies [13] have shown<br />
that, although the link exhibited very good total-dose<br />
tolerance up to 43 kGy and 10¹³ n, it was sensitive to SEU:<br />
0.05 errors/link/hour with the ATLAS spectrum. In<br />
particular, energetic neutrons could induce synchronization<br />
errors, bringing the link down for up to 10 ms [14].<br />
A multiplexing chip (MUX) is necessary to turn the<br />
32-bit 40 MHz data into the 16-bit 80 MHz Glink input<br />
format. Initially, the design was done for a dual-Glink<br />
option to improve redundancy [14]. This DMUX chip has<br />
been submitted in DMILL in May 2000 [15].<br />
18 chips have been received in March 2001, with a<br />
functional yield of 88% for a chip area of 16 mm². Four of<br />
them have been irradiated at CERI for SEE tests alongside<br />
the Glink. Their contribution to the SEU rate is negligible<br />
compared to the Glink. Furthermore, no parameters are<br />
stored in the chip, which renders SEU effects very minor.<br />
3.5 Configuration chips (SPAC, FEBconfig)<br />
All the parameters necessary to operate the FEB are loaded via a serial bus on the front-end crate (SPAC), inspired by I2C [16]. This bus is decoded by an ASIC called the SPAC slave, which provides regular I2C and parallel outputs. This chip also required the development of a special RAM and is common to all the boards. Submitted in September 2000, 18 chips were received in March 2001. The functional yield is 94%, for a chip area of 27 mm². It was iterated in September 2001 to mitigate possible SEU effects, in particular on the subaddress, with error correction logic.<br />
Some additional functions which are specific to the FEB have been grouped in another DMILL chip called the configuration controller. The design was also submitted in September 2000 and 40 chips were tested in July 2001. The functional yield is 93%, for a chip area of 20 mm². The chip is final and has been tested successfully together with the other chips on a “quarter digital FEB”, shown below.<br />
Figure 10: Picture of the "quarter digital FEB" used to test the FEB DMILL chips (optical transceiver, HP Glink, DMILL MUX, DMILL or DSM gain selector, DMILL TTCRx, DMILL SPAC slave, DMILL configuration controller, DSM SCA controller).<br />
4 chips have also been tested for SEE at Harvard with 158 MeV protons; the associated TID was 2-15 Mrad. After fluences of 7 to 22×10¹³ p⁺/cm², 69 SEU were observed, giving a cross section of σSEU = 1.5×10⁻¹³ cm². Extrapolating this rate to ATLAS gives a rate of 1 SEU per 26.8 hr.<br />
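The quoted cross section is simply the SEU count divided by the total proton fluence; the per-chip fluences below are illustrative values consistent with the quoted 7-22×10¹³ range, not the measured ones.

```python
# SEU cross section: sigma = N_SEU / total fluence.
# Per-chip fluences are illustrative (the text quotes only the 7-22e13 range).
fluences = [7e13, 11e13, 13e13, 15e13]  # p+/cm2 per chip (assumed values)
n_seu = 69
sigma = n_seu / sum(fluences)           # cm2
print(f"sigma_SEU = {sigma:.2e} cm2")   # ~1.5e-13 cm2, as quoted
```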
4. PRODUCTION STRATEGY<br />
Except for the SCA, the SCA controller and the gain selector, most of the DMILL chips are needed in quantities too small to justify dedicated wafers. It has thus been decided to group them on shared wafers to reduce the price of the masks. Two shared wafers will thus be produced:<br />
• an analog wafer grouping 52 low offset opamps, 1 DAC and 26 BiMUX;<br />
• a digital wafer with 4 SPAC, 2 calogic, 3 FEBconfig and 3 MUX.<br />
13 analog wafers and 20 digital wafers will be needed, for a total cost of 350 k$. It should be noted that this cost is similar to the cost which was allocated for COTS.<br />
Figure 11: Layout of the analog and digital shared DMILL wafers.<br />
5. CONCLUSION<br />
Ten new DMILL chips have been designed and tested<br />
in 2000-2001. All of them are now final and ready for<br />
production in 2002. Shared wafers will be used to reduce<br />
the production costs.<br />
Seven of these chips are purely digital and exhibit a yield above 70% for areas between 20 and 80 mm². Two of these chips have DSM alternatives (SCAC and gain selector), which are also ready and working satisfactorily. The choice will be made in October 2001.<br />
Three chips are analog. Their yield is also larger than 70% and their electrical performance is very good, similar to what was prototyped in AMS 0.8 µm BiCMOS.<br />
The engineering run of the analog pipeline (SCA) has<br />
been produced and tested successfully with a yield of<br />
65%. The full production (200 wafers) will be launched at<br />
the end of 2001.<br />
6. ACKNOWLEDGEMENTS<br />
This paper reviews the work of several institutes from the LArG collaboration: Alberta, Annecy, BNL, Grenoble, Nevis, Orsay, Paris VI-VII, Saclay. The author expresses his gratitude to N. Dumont-Dayau, D. Gingrich, J. Parsons, J.P. Richer, N. Seguin-Moreau and D. Zerwas for their help in preparing this work.<br />
Chip | Area (mm²) | Number needed | OK/tested | Yield (%)<br />
SCA | 30 | 54 400 | 1643/2500 | 65<br />
SCA contr. | 80 | 3 300 | 28/40 | 70<br />
Gain select | 21 | 13 300 | 27/29 | 93<br />
FEB config | 20 | 3 300 | 37/40 | 93<br />
MUX | 18 | 1 650 | 15/17 | 88<br />
SPAC slave | 27 | 2 500 | 17/18 | 94<br />
Opamp | 3 | 17 000 | 26/37 | 70<br />
DAC | 6.3 | 130 | 18/19 | 94<br />
Calib logic | 16 | 700 | 8/10 | 80<br />
BiMUX | 4.6 | 8 320 | 65/80 | 81<br />
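The yield column follows directly from the OK/tested pairs; a few entries rechecked below (rounding differences of ±1% against the printed table are possible):

```python
# Recompute yields (%) from the OK/tested column of the chip summary table.
tested = {"SCA contr.": (28, 40), "MUX": (15, 17),
          "SPAC slave": (17, 18), "Calib logic": (8, 10), "BiMUX": (65, 80)}
yields = {chip: round(100 * ok / n) for chip, (ok, n) in tested.items()}
print(yields)
```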
7. REFERENCES<br />
The transparencies can be found on:<br />
http://www.lal.in2p3.fr/recherche/atlas<br />
[1] LAr technical design report. CERN/LHCC/98-016.<br />
[2] J. Colas: Overview of the ATLAS LArG electronics. CERN/LHCC/99-33, LEB5 (Snowmass), p. 217-221.<br />
[3] C. de La Taille: Overview of ATLAS LAr radiation tolerance. LEB6 (Krakow), p. 265-269.<br />
[4] ATLAS policy for radiation hardness. http://atlas.web.cern.ch/Atlas/GROUPS/FRONTEND/radhard.html<br />
[5] M. Dentan: Overview of ATLAS radiation policy on radiation tolerant electronics. LEB6 (Krakow), p. 270.<br />
[6] J. Colas et al.: The LArG calibration board. ATLAS internal note LArG-99-026.<br />
[7] G. Perrot et al.: The ATLAS calorimeter calibration board. LEB5 (Snowmass), p. 265-269.<br />
[8] D. Breton et al.: The front-end board for the ATLAS liquid argon calorimeter. CERN/LHCC/98-36, LEB4 (Rome), p. 207-212.<br />
[9] D. Breton et al.: HAMAC, a rad-hard high dynamic range analog memory for ATLAS calorimetry. LEB6 (Krakow), p. 203-207.<br />
[10] G. Perrot et al.: The ATLAS calorimeter calibration board. LEB5 (Snowmass), p. 265-269.<br />
[11] D. Gingrich et al.: Proton induced radiation effects on a Xilinx FPGA... ATLAS LArG 2001-011.<br />
[12] P. Denes: Digitization and data transmission for the CMS electromagnetic calorimeter. LEB4 (Rome), p. 223-228.<br />
[13] M.L. Andrieux et al.: ATLAS LArG 00-006.<br />
[14] B. Dinkespieler: Redundancy or GaAs? Two different approaches to solve the problem of SEU in digital optical links. LEB6 (Krakow), p. 250-254.<br />
[15] D. Dzahini: A DMILL multiplexer for Glink. LEB7 (Stockholm).<br />
[16] B. Laforge: Implementation of a serial protocol for the liquid argon ATLAS calorimeter (SPAC). LEB6 (Krakow), p. 454-458
DeltaStream : A 36 channel low noise, large dynamic range silicon detector readout ASIC<br />
optimised for large detector capacitance.<br />
P.Aspell * 1 , D.Barney 1 , A.Elliot-Peisert 1 , P.Bloch 1 , A.Go 2 , K.Kloukinas 1 , B.Lofstedt 1 , C.Palomares 1 ,<br />
S.Reynaud 1 , N.Tzoulis 3<br />
* Corresponding author … Paul.Aspell@cern.ch<br />
1 CERN, 1211 Geneva 23, Switzerland, 2 NCU, Chung-Li, Taiwan, 3 University of Ioannina, GR-45110 Ioannina, Greece<br />
Abstract<br />
DeltaStream is a 36 channel pre-amplifier and shaper<br />
ASIC that provides low noise, charge to voltage readout for<br />
capacitive sensors over a large dynamic range. The chip has<br />
been designed in the DMILL BiCMOS radiation tolerant<br />
technology for the CMS Preshower project. Two gain settings<br />
are possible. High gain (HG), has gain ~30 mV/MIP (7.5<br />
mV/fC) for a dynamic range of 0.1 to 50 MIPS (0.4 fC – 200<br />
fC) and low gain (LG), has gain ~4 mV/MIP (1 mV/fC) for a<br />
dynamic range of 1 to 400 MIPS (4 fC – 1600 fC). The<br />
peaking time is ~25 ns and the noise has been measured at<br />
~ENC = 680 e + 28 e/pF. Each channel contains a track &<br />
hold circuit to sample the peak voltage followed by an analog<br />
multiplexer operating up to 20 MHz. The response of the<br />
signal is linear throughout the system. The design and<br />
measured results for an input capacitance < 52 pF are<br />
presented.<br />
I. INTRODUCTION<br />
DeltaStream has been developed within the framework of<br />
the CMS Preshower development [ 1 ]. It provides a simple<br />
analog signal processor, which can be used for the multichannel<br />
readout of silicon sensors with strip/pad capacitances<br />
up to 55 pF per channel. The prime motivation for the<br />
development was to provide the signal processing necessary<br />
for the production testing of the CMS Preshower silicon<br />
sensors. DeltaStream incorporates the same analog design<br />
specifications as needed for the CMS Preshower front-end<br />
electronics foreseen for LHC (PACE) but avoids its<br />
complexity.<br />
The main features of DeltaStream are dc coupling to the<br />
sensors, sensor leakage current compensation, two dynamic<br />
range settings with linear response over the ranges 0.1-50<br />
MIPs and 1-400 MIPs, S/N > 10 (for 1 MIP in the 0.1-50 MIP<br />
range), single channel or multiplexed analog readout with<br />
multiplexing frequency up to 20 MHz, radiation tolerance up<br />
to 10 Mrads(Si) of ionising radiation and 4×10¹³ n/cm².<br />
II. DELTASTREAM DESIGN<br />
The DeltaStream channel architecture is shown in Figure 1.<br />
The architecture is similar to that of the AMPLEX family of<br />
ASICs [ 2 ] but differs in its analog properties and speed of<br />
multiplexed readout.<br />
Each of the 36 identical channels includes a charge sensitive<br />
pre-amplifier (Delta) with leakage current compensation<br />
(LCC) [3], followed by a CR-RC² shaper and a track & hold<br />
circuit. The outputs from the 36 channels feed into an analog<br />
multiplexer. The multiplexer serialises the sampled analog<br />
voltage from each channel into a stream of analog values. The<br />
analog stream is then buffered to the outside world<br />
through a single analog output.<br />
Figure 1: The DeltaStream channel architecture (Delta preamplifier with LCC, switched gain shaper with Low Gain = 4 mV/MIP and High Gain = 30 mV/MIP, track & hold, 36:1 analog multiplexer clocked at 20 MHz, analog output buffer).<br />
The Delta pre-amplifier provides charge to voltage<br />
conversion producing a “step like” function with a fast initial<br />
response and a slow tail back to the operating point. Delta has<br />
a bipolar input device with optimised emitter area with respect<br />
to noise for highly capacitive sensors (~55 pF) and expected<br />
radiation levels of 10 Mrads(Si) of ionising radiation and<br />
4×10¹³ n/cm².<br />
The LCC enables dc coupling to the sensor and virtual<br />
insensitivity to sensor leakage current up to 150 µA (well<br />
beyond the requirements of most modern day silicon sensors).<br />
A switched gain shaper provides noise filtering and offers<br />
the possibility of two gain settings and hence two dynamic<br />
ranges. These are:<br />
• High gain (HG) ~30 mV/MIP, dynamic range 0.1-<br />
50 MIPs<br />
• Low gain (LG) ~4 mV/MIP, dynamic range 1-400<br />
MIPs<br />
The peaking times in the two gains have been matched to<br />
25 ns.<br />
The Delta pre-amplifier, LCC circuit and switched gain<br />
shaper were first developed on a demonstrator chip. Details of<br />
the design, optimisation for noise with respect to input<br />
capacitance and irradiation and the demonstrator chip results<br />
before and after irradiation can be found in [ 3 ].<br />
In DeltaStream a track & hold circuit (shown in Figure 2)<br />
is implemented after the shaper which comprises a switch,<br />
storage capacitor (Ch) and an amplifier implemented as a<br />
unity gain buffer. The switch has been designed as a<br />
complementary CMOS switch with W/L values chosen to<br />
have an almost constant “on” resistance with respect to signal<br />
value in order to maintain dynamic range and linearity. The<br />
time constant of the switch plus Ch is 620 ps allowing the<br />
voltage on Ch to track effectively the output from the shaper.<br />
Also shown in Figure 2 is the multiplexer which is<br />
designed to run at 20 MHz. A static shift register is used to<br />
sequentially turn on and off switches connecting each channel<br />
output to the output buffer. The same complementary switch<br />
design as used in the track and hold circuit is used to maintain<br />
linearity.<br />
Figure 2: The track & hold plus multiplexer circuit (complementary analog switch, p = 20/0.8, n = 10/0.8; hold capacitor Ch; balancing capacitor Ch2; multiplexer output parasitic Cp(mux); load Cl; output buffer).<br />
The drain capacitance of one analog switch is 42 fF. The output node of the multiplexer is connected to each channel by a metal line, which has a calculated capacitance of 462 fF. The total parasitic capacitance of the multiplexer output node, including the metal interconnect (462 fF) and the drain capacitance of each analog switch (42 fF each), is ~2 pF. This<br />
is represented in Figure 2 by Cp(mux) and is naturally much<br />
larger than the parasitic capacitance associated with each<br />
multiplexer input. Since the signal voltage difference from<br />
one channel to the next may be as large as 1.6V, it is possible<br />
that charge stored on Ch couples back to the multiplexer input<br />
of the next selected channel causing distortion when<br />
multiplexing at high speed. This problem can be reduced by<br />
increasing the drive capability of the track & hold unity gain<br />
buffer but this increases power consumption. Another option<br />
(used in this design) is to deliberately load the track & hold<br />
output with a capacitance matched with Cp(mux) therefore<br />
eliminating the capacitive imbalance. Ch2 in Figure 2<br />
represents the additional capacitor.<br />
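The ~2 pF figure is just the metal line plus the 36 switch drains:

```python
# Total parasitic capacitance of the multiplexer output node:
# metal interconnect plus the drain capacitance of all 36 analog switches.
c_metal = 462e-15   # F, metal interconnect
c_drain = 42e-15    # F, per analog switch
cp_mux = c_metal + 36 * c_drain
print(f"Cp(mux) = {cp_mux * 1e15:.0f} fF")  # 1974 fF, i.e. ~2 pF
```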
Figure 3 shows a photograph of a bonded DeltaStream<br />
chip. The 36 channel inputs are located on the left and the<br />
analog readout buffer is the central block on the right. The<br />
power supply is delivered to the top and bottom as well as<br />
bias currents and voltages. Digital signals for the control of<br />
the multiplexer enter DeltaStream in the lower right hand<br />
section. The digital circuits have their own power supply and<br />
guard-ring. The overall dimensions of the chip are 3.115 mm<br />
x 5.106 mm = 15.9 mm².<br />
Figure 3 : Photograph of DeltaStream.
Figure 4: Timing diagrams for operating DeltaStream in "single channel mode" (clk, track/hold control, Q pulse on input, analog output) and "multiplex mode" (shaper output, interval "X", multiplexed output from 36 channels).<br />
III. MEASURED RESULTS<br />
All measurements have been made using electrical test pulses to stimulate the input. An input charge of 4 fC is used to represent the ~25000 electrons produced by 1 MIP traversing a 300 µm thick fully depleted silicon sensor. The inherent parasitic capacitance of the measurement board has been measured to be Ci = 13.2 pF (± 0.8 pF). Additional capacitance (CAdd) was introduced to the inputs (up to 39 pF) to achieve a total input loading capacitance of ~52.2 pF.<br />
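The 4 fC to ~25 000 electron equivalence is a one-line check:

```python
# Electrons corresponding to the 4 fC test charge (1 MIP in 300 um Si).
E_CHARGE = 1.602e-19          # C, elementary charge
electrons = 4e-15 / E_CHARGE
print(f"{electrons:.0f} electrons")  # ~25000
```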
DeltaStream can be operated in two modes with respect to<br />
the multiplexer. These two modes are Single channel mode<br />
and Multiplex mode.<br />
Figure 5: The DeltaStream analog output in single channel mode (multiplexing to the desired channel, here channel 36, makes the dc offset of each channel visible; track mode and hold mode are indicated).<br />
In single channel mode, the multiplexer can be used to<br />
switch through to one particular channel and stay there<br />
indefinitely. Keeping the track & hold circuit in “track” mode<br />
enables the full pulse shape from the shaper to be observed.<br />
Multiplex mode samples first the signal on the peak and<br />
then multiplexes through all 36 channels.<br />
Figure 4 shows a timing diagram for the control signals in both modes of operation. Figure 5 shows the analog output in single channel mode. In this case the last channel (36) was selected and hence the dc values of channels 1-35 are evident during the channel selection. Channel 36 then remains connected and a 1 MIP signal is clearly seen during “track” mode. The insert shows the track & hold circuit maintaining the peak of the signal response.<br />
Figure 6 shows the analog output in multiplex mode. A signal of 10 MIPs was injected onto channel 18 and sampled on the peak before multiplexing at 20 MHz. A zoom of the channel containing the signal shows an initial overshoot of the signal value by the output buffer. The signal settles within the first half period of multiplexing; external sampling of the signal value should therefore be done towards the end of the second half period, when the output has settled.<br />
Figure 6: The DeltaStream analog output in multiplex mode.<br />
The channel to channel spread in dc values was measured<br />
as 91 mV peak to peak with a standard deviation around the<br />
mean of σ = 21 mV.
Figure 7: The signal response for 1 MIP in high gain.<br />
The signal response is best seen in Figure 7. The rise time<br />
(as measured from 10% to 90%) of the signal amplitude was<br />
independent of signal size within the dynamic range.<br />
Measurements showed mean rise times in LG of 14.5 ns with<br />
CAdd = 0 pF and 17.5 ns with CAdd = 39 pF. In HG the mean<br />
rise times were 18.9 ns with CAdd = 0 pF and 21.8 ns with CAdd<br />
= 39 pF. The channel-to-channel variation was ~ 1.5 %.<br />
The dynamic range for CAdd = 39 pF is 50 MIPs in HG and<br />
400 MIPs in LG as shown in Figure 8. The gain (measured by<br />
the mean of straight line fits) was 4.62 mV/MIP in LG with<br />
CAdd = 0 pF reducing to 3.45 mV/MIP with CAdd = 39 pF. In<br />
HG the mean gain was 33.12 mV/MIP with CAdd = 0 pF<br />
reducing to 24.9 mV/MIP with CAdd = 39 pF. The channel to<br />
channel variation of the gain calculated as the standard<br />
deviation (σ) around the mean was ~ 3.5 % (exact values<br />
given in Table 2).<br />
The linearity in both LG and HG is shown in Figure 9<br />
which plots the peak amplitude divided by the input signal in<br />
MIPs against the input signal in MIPs. A straight horizontal<br />
line would show perfect linearity. The integral non-linearity<br />
(INL) (measured as the standard deviation from a straight line<br />
fit of the data in Figure 8 and expressed as a percentage of the<br />
operating range) is 0.42% for LG and 0.21% for HG<br />
measured over the specified ranges of 400 MIPs (LG) and 50<br />
MIPs (HG).<br />
Figure 8: The measured peak amplitude for 6 channels in LG and HG for input signals up to 400 MIPs.<br />
Figure 10 shows the noise measured against total input<br />
capacitance for each channel in HG and the corresponding<br />
straight line fit. The mean fit for all 36 channels showed an<br />
ENC of 676 e + 28 e/pF.<br />
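Evaluating the fitted ENC at the largest capacitance used ties back to the abstract's S/N > 10 claim for 1 MIP (~25 000 e):

```python
# ENC from the measured fit, and the implied S/N for a 1 MIP (25000 e) signal.
def enc_electrons(c_pf, intercept=676.0, slope=28.0):
    return intercept + slope * c_pf

enc_52pf = enc_electrons(52.0)
snr = 25000.0 / enc_52pf
print(f"ENC(52 pF) = {enc_52pf:.0f} e, S/N = {snr:.1f}")  # S/N stays above 10
```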
Figure 9: The peak amplitude divided by the input signal in MIPs, plotted against the input signal in MIPs (Low Gain mean = 3.45 mV/MIP; High Gain mean = 24.9 mV/MIP).<br />
Figure 10: Measured noise as a function of the total input capacitance (including board) for all 36 channels (fit intercepts: mean 675.9 e; noise slopes: mean 28.10 e/pF).<br />
The power consumption for each module and the entire chip is given in Table 1. The gain-bandwidth required by the track & hold op-amp is lower in multiplex mode than in single channel mode. Increasing the time interval marked “X” in Figure 4 allows the track & hold power consumption to be reduced in multiplex mode.<br />
Delta pre-amp + LCC | 5.12 mW / ch.<br />
Shaper | 5.6 mW / ch.<br />
Track & hold (single chan. mode) a | 8 mW / ch.<br />
Track & hold (multiplex mode) b | 950 µW / ch.<br />
Output buffer | 10 mW<br />
Total power consumption | 684 mW a, 430 mW b<br />
Table 1: The DeltaStream power consumption.<br />
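Reading the per-channel figures as mW (the original table's "mV" is taken as a typo), the quoted totals are consistent:

```python
# Check of Table 1 totals: 36 channels plus one 10 mW output buffer.
per_ch_single = 5.12 + 5.6 + 8.0    # mW/ch, single channel mode (a)
per_ch_mux = 5.12 + 5.6 + 0.95      # mW/ch, multiplex mode (b)
total_single = 36 * per_ch_single + 10.0
total_mux = 36 * per_ch_mux + 10.0
print(round(total_single), round(total_mux))  # 684 430
```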
A summary of the principal results from the DeltaStream measurements is contained in Table 2.<br />
Measurement type & additional capacitance | Low Gain (mean, σ) | High Gain (mean, σ)<br />
Rise time (0 pF) | 14.5 ns, 0.21 ns | 18.9 ns, 0.27 ns<br />
Rise time (39 pF) | 17.5 ns, 0.23 ns | 21.8 ns, 0.38 ns<br />
Gain (0 pF) | 4.62 mV/MIP, 159 µV | 33.12 mV/MIP, 1.16 mV<br />
Gain (39 pF) | 3.45 mV/MIP, 112 µV | 24.90 mV/MIP, 925 µV<br />
DC baseline (0 pF), channel to channel | 91 mV pp, 21 mV | 69 mV pp, 18 mV<br />
Linear range & INL (39 pF) | 400 MIPs (1600 fC), INL = 0.42% | 50 MIPs (200 fC), INL = 0.21%<br />
Noise (HG) | ENC = 676 e + 28 e/pF<br />
Table 2: Summary of DC levels, gain over the full dynamic range, rise time and noise. The results of all 36 channels are included.<br />
Figure 11: Preshower silicon sensor laser measurement system using DeltaStream. A laser plus splitter illuminates (1) the Preshower silicon sensor and (2) a photo sensor; the sensor sits on a daughter board with passive surface mounted components providing biasing; the DeltaStream multiplexed analog output goes through an analog buffer to an ADC, with an FPGA and the DeltaStream control signals on the PCB, read out by a PC running LabVIEW with position and timing laser control.<br />
IV. APPLICATION EXAMPLE<br />
DeltaStream can be used for the analog signal<br />
processing of silicon strip/pad sensors that require low<br />
noise and large dynamic range. The application foreseen<br />
within the CMS Preshower development is a silicon sensor<br />
measurement system. The Preshower sensors are to be<br />
produced in a number of regional centres around the world.<br />
In order to maintain consistency between test methods and<br />
results during the production phase a common<br />
measurement system is required. Figure 11 shows a block<br />
diagram of the system. A laser is used to generate pulses of<br />
light with wavelength 1060 nm. A passive “splitter” is used<br />
to divide the light into two parts and fibre optic cables<br />
direct the light to well focused regions on the sensor strips and to a photo sensor. The charge from the strips is read out by DeltaStream and digitised by an ADC. The signal from the photo sensor is also digitised and the two results are compared on a PC.<br />
V. CONCLUSIONS<br />
The design and results of DeltaStream have been<br />
presented. DeltaStream contains 36 identical channels of<br />
pre-amplifier, 25 ns peaking time shaper and a track & hold<br />
circuit. The analog signal from each channel is multiplexed<br />
to a single analog output at frequencies up to 20 MHz.<br />
DeltaStream has a selectable gain (LG and HG) offering a linear response over two dynamic range settings of 0.1-50 MIPs (HG) and 1-400 MIPs (LG). DeltaStream has been designed to read out large silicon strip/pad sensors imposing up to 55 pF of input capacitance per channel. The chip can be dc connected to the sensor and is unaffected by sensor leakage current up to 150 µA per channel.<br />
VI. REFERENCES<br />
[1] The CMS Collaboration: The Calorimeter Project, Technical Proposal. CERN/LHCC 94-43, LHCC/P2, 1994.<br />
[2] E. Beuville, K. Borer, E. Chesi, E. Heijne, P. Jarron, B. Lisowski, S. Singh: AMPLEX, a low noise, low power analog CMOS signal processor for multi-element particle detectors. Nucl. Instr. and Meth. A 288 (1990) 157.<br />
[3] P. Aspell, D. Barney, P. Bloch, P. Jarron, B. Lofstedt, S. Reynaud, P. Tabbers: Delta: a charge sensitive front-end amplifier with switched gain for low-noise, large dynamic range silicon detector readout. Nucl. Instr. and Meth. A 461 (2001) 449-455.<br />
[4] A. Go, A. Peisert: The Preshower gain pre-calibration using infrared light. CERN CMS internal note, 2001.
The mixed analog digital shaper of the LHCb preshower<br />
J. Lecoq, G. Bohner, R. Cornat, P. Perret, C. Trouilleau (now at Thales)<br />
LPC Clermont-Ferrand, Université Blaise Pascal, AUBIERE cedex<br />
lecoq@clermont.in2p3.fr<br />
Abstract<br />
The LHCb preshower signals show so many fluctuations at low energy that a classical shaping is not usable at all. Thanks to the fact that the fraction of the collected energy during a whole LHC beam crossing time is 85%, we studied the special solution we presented at the Snowmass workshop. This solution consists of interleaved fast integrators, one being in integrate mode when the other is digitally reset. Two track and hold circuits and an analog multiplexer are used to have at the output 85% of the signal plus a part of the previous one. This part is digitally computed from the previous sample and subtracted. A completely new design of this solution had to be made (see figure 2). This new design is described, including new methods to decrease the supply voltage and the noise as well as to increase the quality of the reset and the linearity. An output stage consisting of a class AB push-pull using only NPN transistors is also described. Laboratory and beam test results are given.<br />
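The subtraction scheme described above can be sketched numerically; the 85/15 split and the recursive correction are illustrative assumptions (ideal integrators, perfect reset), not the chip's actual arithmetic.

```python
# Sketch: each integration window holds a fraction f of the current signal
# plus (1-f) of the previous one; the previous contribution is computed
# from the previous corrected sample and subtracted. f = 0.85 is illustrative.
def correct(raw, f=0.85):
    corrected, prev = [], 0.0
    for r in raw:
        e = (r - (1.0 - f) * prev) / f  # remove spill-over, rescale
        corrected.append(e)
        prev = e
    return corrected

true = [10.0, 0.0, 3.0, 0.0]
raw = [0.85 * true[i] + 0.15 * (true[i - 1] if i else 0.0) for i in range(4)]
print(correct(raw))  # recovers the true samples
```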
I. Introduction<br />
The LHCb preshower is used for the level 0 trigger, for which a threshold corresponding to a minimum ionization particle (MIP) must be applied accurately. This detector is also used to improve electron and photon measurement at higher energies. These two functions give us a dynamic range of 10 bits.<br />
The study of the signal [1], given by a scintillator cell and the 64 channel Hamamatsu PMT, with a good agreement with their simulation, shows us that at low energy the dominant effect is the statistical fluctuation of the photoelectron collection, while at high energy the dominant effect is the PMT saturation (see figure 1). These conditions and the fact that the signal length is always longer than 25 ns drive us to the solution described before. We don't change the main electronic choices we made [2]:<br />
• a fully differential design to minimize the noise;<br />
• bipolar transistors at the input stages to reduce the offsets;<br />
• CMOS transistors to save power and to design integrator switches;<br />
• AMS 0.8 µm BiCMOS technology and CADENCE tools.<br />
Figure 1: cosmic events (four panels, arbitrary scale vs. t (ns)).<br />
However we had to redesign the chip due to the following considerations:<br />
• because of the PMT saturation we had to increase the gain, and then we had to take more care of the noise and offset effects: for the noise the integrator input stage was changed, and for the offset and the operating point stability a special common mode feedback loop was added;<br />
• we need a very high quality reset to be able to compute the subtraction with a negligible error, even in the case of a maximum signal immediately followed by a "trigger level" one; the integrator itself was changed;<br />
• the supply voltage had to be decreased down to ±2.5 V to match the foundry specifications and to obtain a small power consumption per channel;<br />
• to carry the output analog signals we plan to use simple ethernet differential cables; in this case we must adapt this cable at both ends and then have to double the dynamic. To save power this doubling is done in the last stage by designing a differential analog multiplexer with a gain of two and a ±2 V dynamic range with a ±2.5 V supply. This required the design of a parallel linearity correction instead of the previous serial one;<br />
• we have to drive the cable efficiently without extra chips; an all-NPN class AB push-pull was decided.<br />
II. Design considerations<br />
Most blocks of the chip are designed around a simple bipolar differential pair; this scheme is stable, easy to use and economic in terms of silicon area. In addition, in a fully differential design each signal has its opposite, which is very useful to compensate parasitic effects. In our case we have to take care of the linearity: our calibration will be done essentially with the MIP values, which are only a few counts. For this reason we need a very small non-linearity, and we have five blocks, i.e. five non-linearity sources, in series (see figure 2).<br />
Figure 2: one channel design (input, two C2DM integrator branches, clock divisor with 40 MHz clock and reset, two T/H, multiplexer, output buffers).<br />
It is well known that the non-linearity of a differential pair, essentially due to the variation of the base-emitter voltage of the two transistors, is easily corrected by the addition of one diode or two diodes in the collector branches, to obtain the same voltage drop in the emitter and collector loads (see figure 3). However the consequence of this "serial correction" is the loss of one or two diode voltage drops, which is incompatible with a large dynamic range using a small voltage power supply. The idea we explored to overcome this problem is to replace this "serial correction" by a parallel one (see figure 4).<br />
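The underlying effect is the standard tanh transfer of a bipolar pair, ΔI = I_EE·tanh(V_in/2V_T); a quick numeric illustration (values not from the paper) shows how much the large-signal gain sags without any compensation:

```python
import math

VT = 0.025  # thermal voltage in V (room temperature)

def diff_pair_delta_i(vin, iee=1.0):
    # Collector current difference of an uncompensated bipolar diff pair.
    return iee * math.tanh(vin / (2.0 * VT))

small = diff_pair_delta_i(0.001) / 0.001  # small-signal slope
large = diff_pair_delta_i(0.050) / 0.050  # effective slope at 50 mV input
print(f"gain drop at 50 mV: {100 * (1 - large / small):.1f} %")
```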
Figure 3: serial compensation.<br />
Figure 4: parallel compensation.<br />
Figure 5a: without compensation; figure 5b: under compensation; figure 5c: over compensation; figure 5d: right compensation.<br />
In this new correction the loss of gain is compensated by decreasing the emitter load in the opposite branch. The gain obtained is doubled; we obtain a very good correction without loss of dynamic range, and over-compensation is possible. This is useful to compensate two stages with only one parallel correction.<br />
As an example (see figures 4 and 5), in this application a gain of two is needed in the multiplexer stage; using this correction a non-linearity error of only a few mV was obtained for a ±2 V dynamic range including the output stage, using a ±2.5 V power supply. Figures 5a, 5b, 5c and 5d show the error obtained in this example in the cases of no compensation (5a), under compensation (5b), over compensation (5c) and right compensation (5d).<br />
The solution described here is very sensitive to the clock jitter, which determines the integration precision. For this reason we decided to send the main 40 MHz clock to the chip and to make the … MHz clocks inside the chip. On the other hand, to protect the analog parts of the chip against clock cross-talk, the clock connections in the chip are all bipolar low-level (ECL), and the CMOS clocks of the integrators are generated inside each "integrator block".
III. BUILDING BLOCKS

Common to differential mode block: C2DM
The final scheme chosen is a voltage-input, current-output block instead of the voltage amplifier of the first version (see figure …). The C2DM outputs can be considered as grounded (virtual grounds of the integrator), so the output currents i1 and i2 are set directly by Vin and R. The value of the current I determines only the speed, and the values of r and V only the output operating point, provided that the value of r remains high with respect to the integrator input impedance. These values are optimized for the noise; the power supply is reduced by diodes, and Darlington transistors are used to reduce the input current and allow the output operating point to be set near ground.
Figure …: C2DM and integrator (Vin, R, r, I, V, Ro)
Switched integrator

Figure …: switched integrator (currents i1, i2; capacitors C; output Vout)
This block (see figure …) is built around a wide-band, high-gain amplifier. It has two gain stages: the first is optimized to minimize the offset, the second is a rail-to-rail CMOS output. The connection between the two stages is a simple follower for the PMOSs, and a fixed voltage drop (diodes and resistor at constant current) for the NMOSs. Two original features are implemented:

During the integrate phase, from the DC point of view, the integrator is an open-loop, high-gain amplifier; as a consequence the offset and the operating-point stability are critical. For the offset, greatest care was taken on the input stage (resistor load, and transistors doubled and designed as a cross), and an extra feedback loop acting only on the common-mode signal was added (see figure …): first the common-mode voltage is obtained by summation of the two complementary outputs with resistors. This voltage is then compared to ground, and the resulting error is amplified and applied with the same polarity on the two inputs. As a result the output common-mode voltage is maintained at ground and the offset is reduced.

During the reset phase, inputs and outputs are shorted; as a consequence the gain becomes very low and the virtual ground is not achieved. To overcome this problem, two extra switches are added at the inputs. The four switches are designed with complementary MOS, as usual, to minimise the induced charge effect.
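The common-mode feedback loop described above can be sketched as a simple discrete-time behavioral model. All names and gain values below are illustrative assumptions, not the actual circuit parameters; the point is only that a correction applied with the same polarity to both sides nulls the common mode while leaving the differential signal untouched.

```python
# Behavioral sketch (not the circuit itself) of the common-mode feedback:
# sense the common mode by summation, compare it to ground, amplify the
# error and apply it with the same polarity to both sides.

def cm_correction(v_outp, v_outn, loop_gain=0.5):
    """Sense the common mode (resistive summation), compare to ground (0 V)
    and return the amplified error."""
    v_cm = 0.5 * (v_outp + v_outn)
    return loop_gain * (0.0 - v_cm)

def settle(v_outp, v_outn, steps=50):
    """Iterate the loop: the common mode decays toward 0 V at each step."""
    for _ in range(steps):
        corr = cm_correction(v_outp, v_outn)
        v_outp += corr   # same polarity on both outputs ...
        v_outn += corr   # ... so v_outp - v_outn never changes
    return v_outp, v_outn
```

For instance, `settle(1.2, -0.8)` starts with a 0.2 V common-mode offset on a 2 V differential signal; after settling the sum of the outputs is essentially zero while their difference is still 2 V.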
Figure …: AOP with common-mode feedback (in, out, Vcc, Vee)
Results: the simulation results show a gain of … and a very high quality reset; the simulated linearity is perfect. An integrator was realized and tested alone. The test results are in very good agreement with the simulation: the offsets measured at the output vary from a few mV to a maximum of … mV, and the linearity is better than our measurement capability.
Track and Hold
The structure used is described by Pieter Vorenkamp [3] and Jean-Marie Bussat [4] and has already been used and described several times. The compensation of the base-emitter parasitic capacitance was unchanged, but the "serial" linearity compensation was replaced by our new parallel one, to be more comfortable on dynamics and also to obtain the output operating point as high as possible. Under these conditions the ±… V dynamic range was easily obtained with a linearity of a few per thousand. These results were confirmed by the tests of the prototypes (see the LHCb web site [5]).
Differential analog multiplexer
The circuit was interesting to design: in this final version it is the key block, where we had to go from one end of the ±… V dynamic range to the other in a 25 ns multiplexing operation without any crosstalk from one input to the other.

We start from the simple switch of two identical differential pairs of figure …. This scheme shows the following imperfections:

• it is a little too slow, specially to return to zero when the other input is added. The correction found is a simple pull-up resistor (see figure …);
• the non-linearity errors, as explained before, were corrected with the "parallel compensation";

Figure …: multiplexer design (in1, in2, clock+, clock−, out). Figure …: multiplexer detail.
• as on many other multiplexers, there is a little crosstalk between the two inputs, specially when the unused input is fast and high. Here this effect is due to the fact that the blocked transistors of the unused input act as parasitic capacitors and inject on the output a little part of their own signal. Thanks to the fact that this design is fully differential, we used two extra transistors, always blocked, to also inject the opposite parasitics (see figure …). Note that these transistors have exactly the same base and collector operating points; as a result the compensation is perfect and the crosstalk disappears fully.
Output buffer
The output buffer is also a challenge: the consumption of the very front-end board is very critical, as we have to handle … channels and drive them on … meter cables from a … × … m PC board including the …-channel phototubes. A true class A (or AB) push-pull using only NPN transistors was designed. It can be seen in figure … that transistors T… act as a usual complementary push-pull. The transistor T…, which replaces the classical PNP, is driven by a control loop for which the transistors T…, T… and T… are added. T…, T… and T…, T… form a current mirror: they have the same base and the same emitter voltage. A fraction of the output current, determined by the size
Figure …: buffer design (in, T1-T5, Iref, Vref, out)
ratio of T…/T… and T…/T…, is measured by T…, T… and subtracted from Iref. The part of Iref which is not taken by T…, T… is applied to the base of T… by the PMOS transistor. It is easy to verify that this feedback is stable: if a positive signal occurs, the current of T… increases, its Vbe increases, and then the Vbe of T… and T… increase. As a consequence, more current is taken from Iref by T… and T…, and the current given to T… by the PMOS decreases, to allow the output voltage to increase. The same analysis is easy to do with a negative-going signal. The simulation confirms that this circuit is a true push-pull. It is interesting to note that the current feedback parameters are not critical at all and very easy to adjust. It is also interesting to note that the maximum dynamic range we can reach is only limited by the power supply and the loss of two Vbe (T… and T…), exactly like in a complementary push-pull. The following table shows the results which could easily be reached with this output stage for different quiescent currents:
power supply   quiescent current   linearity error on … volts   rise time
±… V           … mA                … mV max                     … ns
±… V           … mA                … mV max                     … ns
±… V           … mA                … mV max                     … ns
For our application we chose to operate at … mA. Note that, as explained before, the … mV non-linearity error was corrected in the parallel compensation of the previous stage, to reach in simulation a ±… mV error with the two stages together.
IV. MEASUREMENTS
As usual, each block was individually realized and tested; then a complete one-channel prototype was realized and tested, both in laboratory and in test beam. Finally, after one iteration, a …-channel and then the final …-channel chips were produced. The results were in very good agreement with the simulation; some details can be found on the LHCb web site [5].
The functionalities of the chip were carefully verified; laboratory and test beam tests and measurements were done, and their results are summarized below.
Laboratory tests
Offset: as expected, the offset dispersion is lower than … mV.
Gains: the gain dispersion (a few per cent) is negligible compared with the channel gain dispersion of the phototube.
Switching and rise time: the results of the "multiplexer + buffer alone" test show total agreement with simulation, without any overshoot and with the predicted rise time.
Global laboratory tests are summarized in figures … and …, which show the simulated output next to the measured one, quantified by a global measure of the linearity; this is the most important test. As we didn't have any very precise pulse generator, in terms both of time jitter and amplitude, we tested the chip in its worst operating mode, i.e. with a very large input pulse given by the Tektronix …-bit arbitrary waveform generator (AWG). The output of the chip is then transmitted to the same ADC driver as used on the front-end board, and to the …-bit … MS/s ADC. The result, which represents the sum of the errors of the generator, the chip, the op-amp and the ADC, is shown in figure …. This measured error is less than one LSB for the low signals, and less than one per cent along the whole dynamic range. As this error is of the same order of magnitude as that of our test chain, the fact that the results are not exactly as good as the simulated ones is not significant. In any case they fit the requirements of the experiment.
Figure …: linearity. Figure …: chip in/out.
The noise was measured: we obtained a value of … µV, which fits our requirement, as it is less than one LSB.
Test beam results
During the last LHCb test beam (September …) we tested the final version for the first time. Figure … shows the output of the chip, after a … meter cable, for one MIP. The noise value (see figure …) was obtained by histogramming the chip output a few periods before the trigger. The value of … µV we obtained shows that, in the poor conditions of this test, the performances of the chip were not degraded.
Figure …: noise (histogram of the chip output, in ADC counts)
Figure …: the MIP (ADC counts, HT = 600 V, versus time in units of 25 ns)

V. CONCLUSION
The mixed analog-digital shaper of the LHCb preshower, based on switched integrators, track and hold, multiplexer and cable push-pull driver, was successfully designed, realized and tested. The results, in very good agreement with simulations, gave:

• a dynamic range bigger than ±… V with a ±… V power supply;
• a linearity better than …% or … mV over the whole dynamic range;
• a noise smaller than … µV;
• a consumption of … mW per channel;
• a silicon area of … mm² per channel (… channels per chip).
Thanks to the fully differential design, all the corrections were easily done, and the output noise was not affected at all by the 40 MHz clock.
REFERENCES
[1] Gérard Bohner et al., "LHCb Preshower Signal Characteristics", LHCb note ….
[2] Gérard Bohner et al., "A mixed analog-digital shaper for the LHCb preshower", Fifth Workshop on Electronics for LHC Experiments, September 1999, Snowmass.
[3] Pieter Vorenkamp, Johan P. M. Verdaasdonk, "Fully Bipolar, 120-Msample/s 10-b Track-and-Hold Circuit", IEEE Journal of Solid-State Circuits, July 1992.
[4] Jean-Marie Bussat, thèse de doctorat, "Un dispositif d'acquisition rapide de grande dynamique", juin ….
[5] LHCb web site, http://lhcb.web.cern.ch (calorimeter, electronics review pages).
Production and Test of the ATLAS Hadronic Calorimeter Digitizer<br />
S. Berglund, C. Bohm, K. Jon-And, J. Klereborn, M. Ramstedt and B. Selldén<br />
Abstract<br />
The pre-production stage of the full-scale production of<br />
the ATLAS TileCal digitizer started during the summer of 2001. To be able to ensure full functionality and quality, a
thorough test scheme was developed.<br />
All components are radiation tested before the start of production. After component mounting, all digitizer boards pass burn-in and tests in Stockholm. Custom-designed software ensures that full functionality is maintained. A
record of the test results is stored in a repository accessible<br />
via Internet for future reference. Similar test software is later used at the site of full electronics assembly in Clermont-Ferrand, cross-referencing its results with the test data entries.
I. INTRODUCTION

A. The Digitizer
Stockholm University is responsible for design,<br />
manufacture and quality control of the digitizing unit of the<br />
ATLAS Hadron Calorimeter [1]. The calorimeter is often<br />
called TileCal because of its interleaved iron and scintillating<br />
tiles. Heavy particles interact with the iron tiles and form<br />
showers of charged particles that produce light in the<br />
adjacent scintillating tiles. This light is transferred to an array<br />
of PMTs via wavelength-shifting fibers. The PMTs and all
the front-end electronics are contained in so called drawers at<br />
the base of the calorimeter modules. There are 32 or 45<br />
PMTs in a drawer depending on its position in the detector.
The PMTs are connected to 3-in-1 cards [2] that shape and<br />
amplify the pulses. The 3-in-1 cards are in turn connected to<br />
digitizers that sample the pulses.<br />
Each digitizer board serves six 3-in-1 channels. There are<br />
two TileDMUs, a specially designed controller and readout<br />
ASIC, on each digitizer board. Data from the TileDMUs are<br />
read out via a G-link based interface board (Fig. 1).

Stockholm University, Sweden, October 2001; mank@physto.se

The TTC system [3] delivers timing and slow control to the interface board via fibers. The TTC signal is then distributed to the
digitizer boards. A drawer contains eight or six digitizer<br />
boards depending on its placement in the detector with the<br />
interface board placed in the middle. When a digitizer board<br />
at the far end of the drawer is read out, data and TTC signals<br />
pass through lines with no active components on<br />
intermediate boards. Thus, any malfunctioning components<br />
on a digitizer board will only corrupt its own data.<br />
Fig. 1 The data flow along a chain of eight digitizer boards (TileDMUs and TTC-rx chips, with TTC distribution and read-out to the interface board via optical fibers)
The main components of the digitizer board are the<br />
ADCs, the TileDMUs and the TTCrx. The ADCs are ten-bit converters. To provide an effective dynamic range of 16 bits
there are two ADC channels per PMT channel digitizing the<br />
two signals from the 3-in-1 card. These are high and low gain<br />
signals with an amplification ratio of 64.<br />
Data from all channels are stored temporarily in pipeline<br />
memories in the TileDMUs. When the first level trigger<br />
validates an event the TileDMU will choose the appropriate<br />
gain, according to the amplitude of the signal, format the data<br />
and store it in a readout buffer. Functions for tests and<br />
calibration are also part of the TileDMU.<br />
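The dual-gain scheme can be illustrated with a small sketch. The saturation margin and the selection rule below are assumptions for illustration rather than the actual TileDMU algorithm; they only show how two 10-bit codes with a gain ratio of 64 yield an effective 16-bit range.

```python
# Illustrative sketch of the dual-gain readout idea: two 10-bit ADC codes per
# PMT channel, the high-gain branch being 64 times the low-gain one.

ADC_MAX = 1023      # ten-bit converters
GAIN_RATIO = 64     # high-gain / low-gain amplification ratio

def select_gain(high_code, low_code, saturation_margin=8):
    """Prefer the high-gain sample unless it is close to saturation."""
    if high_code <= ADC_MAX - saturation_margin:
        return ("high", high_code)
    return ("low", low_code)

def effective_amplitude(high_code, low_code):
    """Express either gain on a common scale: 1023 * 64 covers ~16 bits."""
    gain, code = select_gain(high_code, low_code)
    return code if gain == "high" else code * GAIN_RATIO
```

A small pulse keeps the high-gain code unchanged, while a saturated high-gain sample falls back to the low-gain code scaled by 64.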
B. Production of the digitizer<br />
In order to ensure the quality of the full production, the<br />
pre-production has been made as close to the real production<br />
as possible, including all steps: manufacture, mounting, burn-in, tests and logistics. The design is now almost complete,
and final manufacturer and assembly-company will soon be<br />
chosen. A few small design modifications will be made to<br />
take care of a yield problem that was discovered during pre-production when mounting a high-density surface connector.
The production and the test routines will also be improved to implement some additional features suggested by the pre-production experience.
In the pre-production 86 boards were manufactured and<br />
assembled, with a surprisingly high fault rate of about 35%. The sources of these faults have been investigated and
are to a large extent understood. About 20% failed due to<br />
badly soldered data connectors. To fix this the connector<br />
surface mount pads will be modified. About 5% failed due to<br />
malfunctioning TileDMUs. This number was expected since<br />
the TileDMUs of the pre-production were not fully tested<br />
before delivery. The remaining 10% of failures are being investigated in more detail. When this is done there will be a
final product readiness review (PRR) and the mass<br />
production will start, most likely in November this year.<br />
II. TESTS

A. Test procedure
To ensure the quality and functionality of the digitizer<br />
boards, tests are made at several checkpoints along the<br />
production process. Components used on the board are<br />
required to be radiation tolerant and are tested according to<br />
the ATLAS recommendations [4]. Unassembled boards are<br />
checked for breaks and shorts at the board manufacturer.<br />
The functionality of the TileDMUs is tested before and after packaging, i.e. just before they are sent for assembly. After
mounting the components the boards will be superficially<br />
tested before delivery to the burn-in and test facility at<br />
Stockholm University. The burnt-in boards are thoroughly
tested and then sent to the drawer assembly plant at<br />
Clermont-Ferrand for assembly and final tests.<br />
B. Test Bench Setup<br />
A test drawer has been set up for the purpose of<br />
production tests. This set-up is quite similar to a final<br />
ATLAS drawer. A RIOII VME processor is used as readout<br />
buffer and a TTCvi module as a source of clocks and for<br />
configuration of the digitizer boards (Fig. 2). These tests are<br />
all controlled by RIOII software. The main electronic parts in<br />
the test drawer are the 3-in-1 system, the digitizer boards and<br />
the interface link board.<br />
Fig. 2 Schematic picture of the test bench components and signal<br />
flow<br />
C. Radiation Tests
Most active components have been tested for radiation<br />
tolerance and the remaining tests will take place before the<br />
final PRR and start of production.<br />
According to ATLAS requirements the digitizer should,<br />
without damage, resist the following doses: 3.5 krad ionizing<br />
radiation and 2.3×10¹² 1 MeV-eq neutrons/cm². Corresponding numbers for components containing bipolars are 17.5 krad and 2.3×10¹² n/cm². These figures include the
appropriate safety factors. Tests for single event effects<br />
(SEE) [4] should also be performed. With one exception<br />
(TTCrx), none of the digitizer components have a formal<br />
specification on radiation tolerance from the manufacturer.<br />
Several tests have therefore been made to select components<br />
with the best radiation tolerance, which also are acceptable<br />
from the price/performance point of view. During the process<br />
several types of components have been rejected.<br />
The full installation will contain around 22000 ADCs.<br />
About 40 samples from various batches have been irradiated up to 50 krad, and a subsample even up to 100 krad. All samples withstand 30 krad of ionizing radiation, and some even 100 krad. 18 samples have been exposed to 7.5×10¹² 1 MeV-eq n/cm² with no malfunction detected.
Between 8 and 20 samples of each digitizer CMOS circuit have been exposed to 10 krad ionizing radiation and 5×10¹² 1 MeV-eq n/cm². No errors were
detected.<br />
Only one sample of the TileDMU (CMOS) has been<br />
tested with radiation levels as above. For that sample no<br />
malfunction was detected. Eight more will be tested shortly.<br />
A special test bench that tests the TileDMU has been<br />
developed for the study of transient errors.<br />
In the near future we plan to make SEE tests on system<br />
level using a 170 MeV proton beam. Also here we need to<br />
develop a dedicated test bench.<br />
D. Test of the PCB<br />
The vendor will test non-assembled boards for circuit<br />
breaks and short circuits. Faulty boards are rejected. After<br />
assembly the boards are visually inspected for obvious<br />
mistakes and tested for power short circuits. A superficial<br />
test using a dedicated test bench, similar to the one that will<br />
be used in the board SEE tests, is planned. A more thorough<br />
test is made after the burn in.<br />
E. Test of data and TTC connector<br />
In a drawer four digitizers are read out in a chain to the<br />
interface board. Data from the TileDMU, and data and the<br />
TTC signals from boards further away from the interface will<br />
pass through a high-density surface mounted connector on<br />
each digitizer board. A faulty connector can therefore<br />
generate errors when passing signals from another board.<br />
Before starting the full test procedure the connectors must be<br />
thoroughly tested. This will be done using three fully<br />
functional boards as reference boards. The board to be tested<br />
is placed close to the interface board so all connector pins are<br />
used. This is not the most efficient method since one would<br />
like to test more than one board at a time in the drawer. By<br />
testing the connector in a special connector test tool, much<br />
time can be saved. Such a tool is being developed.<br />
F. Functionality Test<br />
After burn-in and test of the data connector, the<br />
functionality of the entire board will be tested. The only way<br />
to do this is to first configure the digitizers and the 3-in-1 system and then read out data. This is done by simulating the
TileCal front-end electronics environment. It can be difficult<br />
to identify the error in faulty boards by analyzing the data<br />
since most components interact. However, if a board passes<br />
the test this will guarantee that ATLAS functionality is<br />
maintained [5].<br />
First of all the TTCrx is tested to verify that the proper<br />
clocks are generated and that commands can be transferred to<br />
the digitizers. The TTC single error and double error flags<br />
[3] are monitored to verify the TTC transmission quality.<br />
This test also verifies that the TileDMUs are able to read out.<br />
The next step is to exercise the TileDMU verifying that<br />
all programming parameters can be set. The memory is tested<br />
with different bit patterns. This is done using a test mode<br />
where all data are generated internally.<br />
Test pulses are produced by the 3-in-1 system to verify<br />
that none of the ADC bits are stuck at one or zero. The<br />
pedestal RMS is then calculated to estimate the noise level.<br />
The pedestal levels are examined to determine whether a<br />
component adjustment is needed. An on-board DAC is used<br />
to scan part of the dynamic range to determine the linearity.<br />
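Two of the checks described above can be sketched as follows; the function names and sample layout are illustrative, not the actual test software:

```python
import math

# Sketch of the ADC stuck-bit test and the pedestal RMS noise estimate.

def stuck_bits(samples, n_bits=10):
    """Return (stuck_at_0, stuck_at_1) bit masks over a list of ADC codes."""
    mask = (1 << n_bits) - 1
    seen_one, seen_zero = 0, 0
    for s in samples:
        seen_one |= s                # bits ever observed at 1
        seen_zero |= (~s) & mask     # bits ever observed at 0
    return (~seen_one) & mask, (~seen_zero) & mask

def pedestal_rms(samples):
    """RMS of the samples around their mean, used as the noise estimate."""
    mean = sum(samples) / len(samples)
    return math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
```

A bit reported in either mask never toggled over the test pulses, so it is stuck at zero or at one respectively.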
All readout is protected by data word parity and cyclic<br />
redundancy check. Errors are not accepted.<br />
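As an illustration of the two integrity mechanisms mentioned, a per-word parity bit and a block CRC might look as below. The CRC-16-CCITT polynomial is an assumption chosen for the sketch, not necessarily the polynomial used by the digitizer format.

```python
# Minimal sketch of word parity and a cyclic redundancy check.

def even_parity(word):
    """Parity bit that makes the total number of 1s even."""
    p = 0
    while word:
        p ^= word & 1
        word >>= 1
    return p

def crc16(data, poly=0x1021, crc=0xFFFF):
    """Bitwise CRC-16-CCITT (MSB first) over a byte sequence."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc
```

The receiver recomputes both quantities and rejects the event on any mismatch, which is the "errors are not accepted" policy above.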
G. History files and QC-Sheet<br />
For all digitizer boards the history of modifications, test<br />
data and some characteristics are stored in a history file.<br />
These are auto-generated from the software and are stored in<br />
HTML/ASCII format for accessibility. When a board has passed all tests, a quality control sheet is generated in HTML format for easy access by the collaboration.
H. Software<br />
The test software is split into two programs, the<br />
configuration and readout software (CRS) and the graphical<br />
user interface and analysis software (GUI). The CRS is<br />
running on the RIOII in the same crate as the TTCvi. It<br />
controls the configuration of the digitizers and the 3-in-1<br />
system via the TTCvi and receives data from the digitizers<br />
via an S-link PCI card attached to the RIOII. This part is<br />
called Coot-Boot. The GUI, called Baltazar, is running on a<br />
workstation (e.g. Windows NT PC) communicating with<br />
Coot-Boot via TCP/IP. The idea behind Baltazar is to<br />
implement a user-friendly interface making it easy to operate<br />
the test bench after a short introduction. The software<br />
automatically generates the history report files and makes<br />
them accessible from WWW.<br />
The GUI is coded in JAVA to achieve platform<br />
independence. JAVA is also easily available and well
documented on the web. Baltazar has also been used as<br />
reference software in TileCal test-beam. The data analysis is<br />
made by Baltazar requiring all data (~900kByte/run) to be<br />
transferred from Coot-Boot. This takes only a few seconds.<br />
The JAVA code makes the analysis sufficiently fast.<br />
III. CONCLUSIONS<br />
The TileCal digitizer is soon ready for full production. In
order to assure fully tested boards and a high production<br />
yield many test procedures have been developed and inserted<br />
in different places along the production chain. This has<br />
required the development of different hardware and software<br />
test benches.<br />
However, there are still a few tests that must be improved.
This will be done before the final production.<br />
IV. ACKNOWLEDGEMENTS<br />
We would like to thank J. Molnar and A. Fenyvesi from<br />
Atomki in Debrecen for their help making the neutron<br />
irradiation of our components.<br />
V. REFERENCES<br />
1 ATLAS Tile Calorimeter Technical Design Report,<br />
CERN/LHCC 96-42<br />
2 Front-end Electronics for the ATLAS Tile Calorimeter,<br />
K. Anderson, J. Pilcher, H. Sanders, F. Tang, S.<br />
Berglund, C. Bohm, S-O. Holmgren, K. Jon-And, G.<br />
Blanchot, M. Cavalli-Sforza, Proceedings of the Fourth<br />
Workshop on Electronics for LHC Experiments, Rome,<br />
1998, p.239<br />
3 http://www.cern.ch/TTC/intro.html<br />
4 Atlas policy on Radiation Tolerant Electronics, ATLAS<br />
document no ATC-TE-QA-0001 21-July 00<br />
5 The ATLAS Tile Calorimeter Digitizer, S. Berglund, C.<br />
Bohm, M. Engström, S.-O. Holmgren, K. Jon-And, J.<br />
Klereborn, B. Selldén, S. Silverstein, K. Andersson, A.<br />
Hocker, J. Pilcher, H. Sanders, F. Tang, H. Wu,<br />
Proceedings of the Fifth Workshop on Electronics for<br />
LHC Experiments, Snowmass, Colorado, 1999, p.255<br />
6 S-link: a Prototype of the ATLAS Read-out Link, E. van<br />
der Bij, O.Boyle, Z.Meggyesi, Fourth Workshop on<br />
Electronics for LHC Experiments, Rome 1998, p. 375.
Low Voltage Control for the Liquid Argon Hadronic End-Cap<br />
Calorimeter of ATLAS<br />
Abstract<br />
The strategy of the ATLAS collaboration foresees a<br />
SCADA system for the slow control and survey of all sub-detectors. As software, PVSS2 has been chosen, and for the hardware links a CanBus system is proposed.
For the Hadronic End-caps of the Liquid Argon<br />
Calorimeter the control system for the low voltage supplies is<br />
based on this concept. The 320 preamplifier and summing<br />
boards, containing the cold front-end chips, can be switched<br />
on and off individually or in groups. The voltages, currents<br />
and temperatures are measured and stored in a database. Error<br />
messages about over-current or wrong output voltages are<br />
delivered.<br />
H.Brettel * , W.D.Cwienk, J.Fent, H.Oberlack,<br />
P.Schacht<br />
Max-Planck-Institut für Physik, Werner-Heisenberg-Institut,<br />
Foehringer Ring 6, D-80805 Muenchen<br />
brettel@mppmu.mpg.de<br />
Figure 1: DCS detector control system (principle structure)<br />
I. DETECTOR CONTROL SYSTEM OF ATLAS<br />
The slow control of detectors, sub-detectors and components of sub-detectors is realized by so-called SCADA software, installed in a computer net. It is an industrial standard for Supervisory Control And Data Acquisition.
The product, installed at CERN, is “PVSS2” from the<br />
Austrian company “ETM”.<br />
Links between net nodes and hardware can be realized in<br />
different ways. Between the last node and the detector<br />
electronics a CanBus is recommended by the collaboration for<br />
the transfer of slow control signals and the survey of<br />
temperatures, supply voltages and currents (figure 1).
II. LOW VOLTAGE CONTROL OF HEC<br />
A. System Overview<br />
The supply voltages for the cold front-end electronics of<br />
the two Hadronic End-cap wheels are generated in 8 power<br />
boxes near the front-end crates and distributed to the 320<br />
preamplifier- and summing boards inside the cryostats (see<br />
figure 3).<br />
Graphical windows on a PC (figures 2 and 4), tailored to<br />
operators' needs, offer complete individual control and survey of the 320 supply channels.
The application software in the PC is called a "PVSS2 project". It establishes the link to the DCS net by an
appropriate protocol. The exchange of information with the<br />
low voltage channels takes place via a CanBus by a CanOpen<br />
protocol. CAN is a very reliable bus system widely used by<br />
industry, for example in cars for motor management, brake<br />
activation etc.<br />
At the ATLAS DCS the PC is bus master. The<br />
hardware interface board NICAN2 is controlled by the driver<br />
software OPC. CanBus slaves are offered by industry for<br />
different purposes. We use the ELMB from the CERN DCS<br />
group, which is tailored to our needs. It has 2 microprocessors<br />
inside, digital I/O ports, a 64-channel analog multiplexer and<br />
an ADC.<br />
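A survey pass in the spirit of this readout chain might be sketched as below; the channel map, nominal values, tolerance and the `read_adc` callback are all hypothetical, for illustration only:

```python
# Hypothetical monitoring sketch: scan the 64 multiplexer channels, compare
# each reading with its nominal value and collect error messages.

NOMINALS = {0: 8.0, 1: 4.0, 2: -2.0}   # e.g. the +8 V, +4 V and -2 V lines

def check_channel(chan, value, tolerance=0.1):
    """Return an error message for an out-of-range reading, else None."""
    nominal = NOMINALS.get(chan)
    if nominal is None:
        return None                 # channel not configured for survey
    if abs(value - nominal) > abs(nominal) * tolerance:
        return f"channel {chan}: read {value:.2f} V, expected {nominal:.2f} V"
    return None

def scan(read_adc):
    """One pass over all 64 analog multiplexer channels."""
    return [err for chan in range(64)
            if (err := check_channel(chan, read_adc(chan))) is not None]
```

In the real system the readings would arrive over CanBus via the CanOpen protocol, and the error messages would feed the database and the PVSS2 panels.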
Figure 4: PVSS2 main panel, “LV CONTROL”<br />
Figure 2: Liquid Argon Calorimeter<br />
Figure 3: HEC low voltage system (one end-cap)<br />
The panel displays the structure of the system. By mouse-click on items<br />
of this panel the operator can open other panels that show more details and offer full<br />
access to hardware components and the database (figures 5 and 6). To
distinguish between different types of daughter panels, a colour code is applied: red<br />
signifies an action panel and violet a display of hardware details (mechanics, circuit<br />
diagrams).
Figure 5: PVSS2 daughter panel “CHANNEL CONTROL”<br />
B. Hardware<br />
Colours are changed and animated (blinking) in case of<br />
fault conditions, like wrong voltage or excessive current.<br />
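The fault-driven colour animation just described can be summarized by a small decision function; the states, colour names and blink flag are illustrative assumptions, not the actual PVSS2 configuration:

```python
# Illustrative mapping from a channel's measured state to its panel colour.

def panel_colour(voltage_ok, current_ok):
    """Map a channel's measurements to (colour, blinking)."""
    if voltage_ok and current_ok:
        return ("green", False)   # normal operation: steady colour
    if not current_ok:
        return ("red", True)      # excessive current: blinking alarm
    return ("orange", True)       # wrong voltage: blinking warning
```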
Each of the two HEC-wheels consists of 4 quadrants<br />
served by a feed-through with a front-end crate on top of it<br />
(figure 2). Each quadrant is equipped with 40 PSBs (the preamplifier and summing boards, which contain the cold GaAs front-end chips). A related power box delivers the low supply voltages. For each wheel 4 boxes are needed. They are mounted between the fingers of the Tile Calorimeter, about half a meter away from the front-end crates.
The input for a power box – a DC voltage in the<br />
range of 200 to 300 V – is transformed into +8V, +4V and -2V<br />
on the three output lines by DC/DC converters and then split into<br />
40 channels at two control boards (figure 7). There is an<br />
individual ON/OFF control and a fine adjustment of the three<br />
supply voltages for each PSB.<br />
Figure 6: PVSS2 daughter panel “CALO QUADRANT”<br />
3-dimensional view with animated colours
1) Original design<br />
We intended to use the integrated low voltage<br />
regulators L4913 and L7913 from STm. They were to be<br />
mounted on the control boards inside the power boxes,<br />
together with FPGAs from QuickLogic, which contain the<br />
necessary digital control circuitry, and the ELMBs as<br />
interfaces to the CanBus. With this arrangement the cable<br />
connections to the power boxes could have been minimized.<br />
Figure 7: Low voltage control board (original design)<br />
2) Actual problems<br />
Meanwhile it has turned out that the ELMBs are not as<br />
radiation hard as we had expected and cannot be mounted at<br />
the foreseen position.<br />
The low voltage regulators from STm are confirmed<br />
to be radiation hard, but there still seem to be problems in the<br />
design or fabrication process.<br />
We therefore have to envisage alternative solutions, at least<br />
during the present phase of design work. Concerning the<br />
ELMBs there is no other way than to place them outside the<br />
Muon chambers, but we would still like to use the above-<br />
mentioned regulators for reasons of small size and low cost.<br />
3) Prototype designs<br />
During the assembly of the wheels at CERN and for<br />
combined tests, existing power supplies will be used. Control<br />
boards corresponding to the original plan, but in non-radiation-<br />
hard technology, are in preparation. ELMBs will be mounted<br />
on these boards. Instead of STm regulators, other products<br />
must be used; they can be replaced later by the types of final<br />
choice (STm or other radiation hard regulators).<br />
Ongoing considerations about final solutions will<br />
result in a second version of the control board with radiation hard<br />
components. The ELMBs, which will no longer be mounted on<br />
the boards but far outside the power box, are connected by a<br />
multi-wire cable. As a consequence of this arrangement, an<br />
array of analogue multiplexers is needed on the boards, as well<br />
as much more complex logic in the FPGA.<br />
4) Final Solutions<br />
A) The American company “Modular Devices” is<br />
developing power supply boxes for the EMEC under the direction<br />
of BNL. As the units will be mounted between the fingers<br />
of the tile calorimeter, radiation hardness is mandatory. As<br />
the primary choice we envisage adopting this solution. Only the<br />
output voltages would be adjusted to the values required by<br />
the HEC, and two control boards from MPI would additionally be mounted<br />
inside. The main disadvantages are the relatively high cost and the<br />
present uncertainty about the STm regulators.<br />
B) Therefore a second source is highly desirable. We<br />
are negotiating with the German company “GSG-Elektronik”<br />
near Munich, which is experienced in radiation hard<br />
electronics for space research. The company has offered a design<br />
study and would be able to build prototypes in an acceptable<br />
time. One could either apply big DC/DC converters (a certain<br />
number in parallel for redundancy), which deliver precisely<br />
the desired voltages, and then split the output into 40 channels<br />
with transistor switches in series, or use for each channel a<br />
small DC/DC converter with remote on/off control. In any<br />
case there would be no need for the problematic STm<br />
regulators. The negative aspect of these approaches is that<br />
the company would leave the responsibility and work for the<br />
radiation tests to MPI.<br />
5) Safety aspects<br />
Temperature sensors are foreseen in the power<br />
boxes on each board, as well as detectors for leaking cooling<br />
water. In case of a serious problem the mains is switched off<br />
automatically.<br />
The power supplies have a built-in over-voltage<br />
protection, and the low voltage regulators (or the small DC/DC<br />
converters, respectively) have a current limitation. The<br />
maximum current is set to a value low enough that the wires<br />
in the feed-through cannot be damaged in case of a steady<br />
short circuit inside the cryostat. In addition, in case of an<br />
over-current, an error signal is delivered and all 3 regulators<br />
that belong to the faulty channel are switched off immediately by<br />
the internal logic. Afterwards a detailed description of the<br />
problem is sent to the PC.<br />
Under normal operating conditions the temperatures<br />
of the boards and the supply voltages and currents of all<br />
channels are registered regularly.<br />
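The protection logic described here amounts to a simple per-channel rule; the sketch below illustrates it, with the current limit and the report format chosen arbitrarily for illustration:<br />

```python
# Sketch of the over-current protection: if any of the three
# regulators of a channel trips, all three are switched off and
# a fault report for the PC is produced. The current limit is an
# assumed value, not the actual HEC setting.

CURRENT_LIMIT_A = 1.5  # assumed per-regulator current limit

def check_channel(channel_id, currents):
    """currents: measured currents of the +8V, +4V and -2V regulators."""
    if any(abs(i) > CURRENT_LIMIT_A for i in currents):
        # switch off all three regulators of the faulty channel
        return {"channel": channel_id, "on": False,
                "report": f"over-current on channel {channel_id}: {currents}"}
    return {"channel": channel_id, "on": True, "report": None}

ok = check_channel(7, (0.8, 0.5, 0.1))     # within limits
fault = check_channel(12, (0.8, 1.9, 0.1)) # one regulator over limit
```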
Figure 8: PVSS2, a SCADA software for slow control<br />
C. Software<br />
As mentioned before, the control program is written<br />
in PVSS2, which is based on the ANSI C language. It has<br />
several graphics tools that help the programmer during the<br />
design phase.<br />
A data point structure and a list of data points have to<br />
be established first. The so-called “data points” are variables<br />
in the program where the information about all hardware<br />
items is stored. With the aid of a graphics editor, panels are<br />
designed for various purposes (displays, actions, diagrams).<br />
Symbols on panels are connected to control scripts (C<br />
language). At runtime the automatically generated main<br />
program uses the scripts as subroutines.<br />
We are gaining more and more experience with the<br />
PVSS2 software. Many examples of graphics panels and<br />
control scripts have been developed and are intended to be<br />
the basis for the low voltage control program.<br />
D. Status of Development<br />
Tests of substantial hardware and software components<br />
have been carried out. The work on the control boards is<br />
progressing. A link between a PVSS2 test program on a PC<br />
and an ELMB (via OPC, NICAN2 and CanBus) is<br />
operational.<br />
In case a computer or the bus itself should<br />
fail, an emergency control system is planned, independent of<br />
the CanBus. By remote switches in the measuring hut, the<br />
operator can switch all channels off or on simultaneously.<br />
A decision about a second source of power boxes<br />
should be taken in the near future.<br />
On the developments of the Read Out Driver for the ATLAS Tile Calorimeter.<br />
J. Castelo 1 ,V.González 2 ,E.Sanchis 2 , J. Torres 2 ,G.Torralba 2 ,J.Martos 2<br />
1 IFIC, Edificio Institutos de Investigación - Polígono la Coma S/N, Paterna, Valencia, Spain<br />
Jose.Castelo@ific.uv.es<br />
2 Dept. Electronic Engineering, Univ. Valencia, Avda. Dr. Moliner, 50, Burjassot (Valencia), Spain<br />
Vicente.Gonzalez@uv.es, Enrique.Sanchis@uv.es, Jose.Torres@uv.es, Gloria.Torralba@uv.es, Julio.Martos@uv.es<br />
Abstract<br />
This work describes the present status and future<br />
evolution of the Read Out Driver for the ATLAS Tile<br />
Calorimeter. The developments currently under way<br />
include the adaptation and test of the LiArg ROD to TileCal<br />
needs and the design and implementation of a PMC board<br />
for algorithm testing at ATLAS rates.<br />
The adaptation includes a new transition module with 4<br />
SLINK inputs and one output which match the initial TileCal<br />
segmentation for RODs. We also describe the work going on<br />
in the design of a DSP-based PMC with SLINK input for real<br />
time data processing to be used as a test environment for<br />
optimal filtering.<br />
I. INTRODUCTION<br />
At the European Laboratory for Particle Physics (CERN)<br />
in Geneva, a new particle accelerator, the Large Hadron<br />
Collider (LHC) is presently being constructed. In the year<br />
2006 beams of protons are expected to collide at a center of<br />
mass energy of 14 TeV. In parallel to the accelerator, two<br />
general purpose detectors, ATLAS and CMS, are being<br />
developed to investigate proton-proton collisions in the new<br />
energy domain and to study fundamental questions of particle<br />
physics.<br />
This new generation of detectors requires highly hardened<br />
electronics, able to deal with a huge amount of data in real or<br />
almost real time. The work we present here is included in the<br />
studies and development currently carried out at the<br />
University of Valencia for the Read Out Module (ROD) of the<br />
hadronic calorimeter TileCal of ATLAS.<br />
II. THE TILECAL ROD SYSTEM<br />
TileCal is the hadronic calorimeter of the ATLAS<br />
experiment. It consists, electronically speaking, of 10000<br />
channels to be read out every 25 ns. Data gathered from these<br />
channels are digitized and transmitted to the data acquisition<br />
system (DAQ) following the assertions of a three level trigger<br />
system [1].<br />
In the acquisition chain, room is left for a module which<br />
performs preprocessing and gathering of the data produced<br />
after a first level trigger accept, before sending them to the<br />
second level. This module is called the Read Out Module<br />
(ROD).<br />
For TileCal, the ROD system will most probably be built<br />
around custom VME boards which will have to treat around 2<br />
Gbytes/s of data. Intelligence will be provided to do some<br />
preprocessing on the data.<br />
For the reading of the channels we are working on a<br />
baseline of 64 ROD modules. Each one will process more<br />
than 300 channels. The studies currently going on at Valencia<br />
focus on the adaptation of the first prototype of the LiArg<br />
ROD to TileCal needs.<br />
The basic schema to use is based on the ROD crate<br />
concept in which ROD modules are grouped into VME crates<br />
jointly with a Trigger and Busy Module (TBM) and possibly<br />
other custom cards when needed. This ROD crate interfaces<br />
with the TileCal Run Control and the ATLAS DAQ Run<br />
control. Figure 1 shows this structure schematically [5].<br />
Figure 1: TileCal ROD System<br />
The basic functions and requirements of all ATLAS RODs<br />
can be found in [1] and may be summarized by saying that the<br />
ROD board receives data from the FEBs which, after some<br />
processing, are sent to the ROB. These data may be buffered<br />
to be able to work at the maximum LVL1 trigger rate (100<br />
kHz) without introducing extra dead time.<br />
For each particular detector, some preprocessing could be<br />
done at ROD level. For TileCal, RODs will calculate energy<br />
and time for each cell using optimal filtering algorithms<br />
besides evaluating a quality flag for the pulse shape (χ²).<br />
RODs will also do the data monitoring during physics runs<br />
and make a first pass in the analysis of the calibration data
leaving the complete analysis to the local CPU of the ROD<br />
crate.<br />
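The optimal filtering mentioned above reconstructs energy and time as weighted sums of the digitized samples and uses χ² to flag pulse-shape anomalies; the sketch below illustrates the arithmetic with invented weights and an invented pulse shape (the real weights are derived from the known pulse shape and the noise autocorrelation):<br />

```python
# Sketch of optimal filtering for one cell: energy and time are
# linear combinations of the pedestal-subtracted samples, and chi2
# measures the deviation from the nominal pulse shape. Weights and
# shape below are invented for illustration only.

def optimal_filter(samples, a, b, shape, pedestal=0.0):
    s = [x - pedestal for x in samples]
    energy = sum(ai * si for ai, si in zip(a, s))
    # the b-weights yield E*t; divide by the energy to get the time
    time = sum(bi * si for bi, si in zip(b, s)) / energy
    # chi2: deviation of the samples from the scaled nominal shape
    chi2 = sum((si - energy * gi) ** 2 for si, gi in zip(s, shape))
    return energy, time, chi2

shape = [0.0, 0.5, 1.0, 0.6, 0.2]       # invented nominal pulse shape
a = [0.0, 0.2, 0.7, 0.25, 0.25]         # invented a-weights (a.shape = 1)
b = [0.0, -0.4, 0.0, 0.3, 0.1]          # invented b-weights (b.shape = 0)
samples = [100 * g for g in shape]      # ideal pulse of amplitude 100
E, t, chi2 = optimal_filter(samples, a, b, shape)
```

For this ideal, in-time pulse the sketch returns the full amplitude, zero time offset and a vanishing χ²; a distorted pulse would show up as a large χ².<br />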
In some cases data will not be processed and will flow raw<br />
to LVL2. Such cases include interesting events, large energy<br />
depositions in a cell, or debugging stages.<br />
It will also be desirable to have the functionality to apply<br />
corrections to the energy or time estimators for example to<br />
correct the non-linearities in the shaper or in the ADC.<br />
Finally the RODs will monitor the Minimum Bias and pile-up<br />
noise and will have the possibility of working in special runs<br />
at reduced trigger rate.<br />
III. LIARG ROD AND TILECAL ROD<br />
As mentioned before, our work is now directed towards<br />
adapting the first LiArg ROD prototype [7] to the<br />
TileCal needs exposed above. The main reason for this is<br />
the great similarity of the two detectors and the great<br />
difference in the requirements, which make LiArg solutions<br />
suitable, with modifications, for TileCal.<br />
The basic differences in the ROD concept for LiArg and<br />
TileCal arise, to a first approximation, from the working baseline,<br />
which is summarized in table 1.<br />
Table 1: ROD baseline for LiArg and TileCal<br />
• Input links (32 bits @ 40 MHz): LiArg 2; TileCal 4<br />
• Number of channels per board: LiArg 256; TileCal 154 (2*64b + 2*31eb)<br />
• Number of DSP Processing Units: LiArg 4; TileCal 4<br />
• Number of channels/DSP PU: LiArg 64; TileCal 46b or 31eb<br />
• Output links (800 Mb/s): LiArg 1; TileCal 1 (1.14 Gb/s expected)<br />
• Motherboard: the LiArg 9U VME motherboard is kept for both<br />
Figure 3: LiArg ROD prototype (9U motherboard with P0-P3 backplane connectors; the transition module carries the inputs from the FEBs, optical or Cu, and the output to the ROB, S-LINK)<br />
The block diagram of the LiArg ROD prototype is shown<br />
in figure 3. It is based on a 9U VME motherboard which holds<br />
four DSP-based processing units (PU) as mezzanines. These<br />
mezzanines are based on TI C6202 DSPs at 250 MHz with<br />
some external logic: FIFOs, FPGAs and memory. Figure 2<br />
shows the block diagram of the PUs [8].<br />
The input and output of data take place on a 9U transition<br />
module. For the first prototype this module has only two<br />
inputs and one output.<br />
To adapt this solution to TileCal needs we need to<br />
reconsider the following aspects:<br />
• Data input/output format and rates: we need 4 inputs<br />
and one output at the transition module. This implies<br />
a new design of this transition board.<br />
• Processing power: because of LiArg's much greater number<br />
of channels, the LiArg DSP PUs have far more computing<br />
power than needed for TileCal. This raises the<br />
question of whether it is necessary to use exactly the<br />
same type of PUs or whether we could use cheaper ones,<br />
possibly based not on DSPs but on FPGAs.<br />
Because of the modularity of the LiArg solution, our work<br />
focuses on the design of a new transition module, postponing<br />
the decision about the PU.<br />
Figure 2: Block diagram of the DSP Processing Units.<br />
IV. THE TM4PLUS1 TRANSITION MODULE<br />
Let us now describe the new design carried<br />
out to adapt the TileCal inputs to the LiArg motherboard. This<br />
new transition module is called TM4Plus1.
This module has been developed and implemented at CERN<br />
by the EP/ATE group. Its block diagram is shown in figure 4.<br />
Figure 4: TM4Plus1 block diagram.<br />
The transition module is a modified version of the one<br />
used by LiArg that includes 4 input SLINK channels in PMC<br />
format and 1 GLINK output integrated in the PCB. The PMC<br />
input channels are capable of reading 4x32 bits at 40 MHz<br />
and allow us to test different input technologies. The output<br />
will also run at 40 MHz with a data width of 32 bits [4] [6].<br />
On the board there are also 4 input FIFOs, 4Kwords each,<br />
to accommodate the differences between input speed and<br />
processing on the FPGAs.<br />
The latter are implemented in two ALTERA devices.<br />
The tasks of each of the FPGAs are:<br />
• Reformatting Altera: data multiplexing and SLINK<br />
control. This device will reformat and merge data in<br />
a 4 to 2 manner to produce data similar to what the<br />
motherboard expects when used with the LiArg detector.<br />
• Auxiliary Altera: it holds the code for the integrated<br />
ODIN output. The free space left will be used in<br />
conjunction with the reformatting Altera.<br />
The data flow to these FPGAs is shown in figure 5.<br />
Following the tasks division the reformatting Altera receives<br />
the data from the 4 input channels to perform the 4 to 2<br />
multiplexing. Data is sent to the motherboard through P2<br />
connector on the VME backplane with the same format as<br />
LiArg.<br />
Figure 5: Data flow on the TM4Plus1. The diagram maps the S-LINK signals<br />
(LD[0..31], LCTRL, LCLK, LDERR, LDOWN, UTDO, UDW0/1, URL[3..0], UXOFF, URESET)<br />
from the four input links and their FIFOs A to D (with READEN, EMPTY, FULL and reset lines)<br />
to the Reformatting Altera (247 pins) and the Auxiliary Altera (243+10 pins), which drives the<br />
G-LINK corner and the J2A/J2B/J3A/J3B backplane connectors. LSC and LDC signals are<br />
defined in the S-LINK specification.<br />
The other Altera receives some lines from the first and<br />
holds the ODIN output code. Once processed on the<br />
motherboard, the data are sent through the P3 connector to this<br />
FPGA, which forwards them to the ROBs.<br />
The basic data processing on the reformatting Altera<br />
relates to the format conversion between TileCal and LiArg.<br />
Fortunately, the front-end data formats of both detectors are quite<br />
similar and we have to deal only with a data width problem.<br />
This problem is due to the fact that on the motherboard the<br />
32-bit path of the P2 connector is split into two 16-bit paths,<br />
each one going to a PU. For TileCal we have 32-bit paths<br />
already in the front-end data, so we have to divide these data<br />
into 16-bit blocks and send them consecutively to the<br />
motherboard.<br />
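The width conversion just described can be sketched as follows; sending the high half of each 32-bit word first is an assumption about the ordering:<br />

```python
# Sketch of the 32-bit to 16-bit width conversion for the
# motherboard's two 16-bit paths. The "high half first" ordering
# is an assumption for illustration.

def split_words(words32):
    """Split each 32-bit word into two consecutive 16-bit halves."""
    out = []
    for w in words32:
        out.append((w >> 16) & 0xFFFF)  # high half first (assumed)
        out.append(w & 0xFFFF)          # then low half
    return out

halves = split_words([0xDEADBEEF, 0x00010002])
```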
There is also a problem with data control due to the way<br />
the LiArg motherboard controls the data flow. Situations may<br />
arise where we have data on only one of the two inputs to be<br />
multiplexed, so a mechanism must be provided to first check if this occurs and second to solve it.<br />
Our proposal implements a time-out activated on the<br />
arrival of the first data on one of the input links to be<br />
multiplexed. If after the time-out no data are received, the<br />
space for that channel is filled with zeros and a flag is set in<br />
the header to let the PU treat these data as no-data instead of<br />
zeroes.<br />
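The time-out scheme can be sketched as follows; the header-flag encoding and the interface of the merge routine are assumptions for illustration:<br />

```python
# Sketch of the time-out driven merge: when only one input link
# delivers data, the partner slot is zero-filled and a header flag
# marks it as "no data" rather than genuine zeroes. Flag bit
# positions are assumed, not the actual TM4Plus1 header format.

NO_DATA_FLAG_A = 1 << 31  # assumed header bit: slot A zero-filled
NO_DATA_FLAG_B = 1 << 30  # assumed header bit: slot B zero-filled

def merge_pair(frag_a, frag_b, timeout_expired):
    """frag_a/frag_b: lists of data words, or None if nothing arrived."""
    header = 0
    if frag_a is None and timeout_expired:
        frag_a = [0] * len(frag_b)
        header |= NO_DATA_FLAG_A
    if frag_b is None and timeout_expired:
        frag_b = [0] * len(frag_a)
        header |= NO_DATA_FLAG_B
    return header, frag_a, frag_b

# Link B never delivered; after the time-out its slot is zero-filled:
hdr, a, b = merge_pair([0x1234, 0x5678], None, timeout_expired=True)
```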
These two processes are depicted schematically in figure<br />
6.<br />
Figure 6: Data multiplexing using FPGAs<br />
V. PRESENT DEVELOPMENTS<br />
At IFIC and the University of Valencia there are three<br />
development fronts under way.<br />
The main one relates to the tests and developments for the<br />
TM4Plus1 board and the final ROD prototype, the second<br />
goes towards the design and implementation of a PMC card<br />
for algorithm tests, and the third deals with the software issues<br />
of the ROD controller.<br />
Let us now review the current status of each of these<br />
directions.<br />
A. The TM4Plus1 board and final ROD<br />
prototype<br />
The tasks on the TM4Plus1 board currently going on are<br />
of two kinds:<br />
Hardware:<br />
• Test of motherboard and PUs: already finished.<br />
• Test of the TM4Plus1 transition module: not yet<br />
finished.<br />
• Design of a custom FPGA-based PU: starting.<br />
Software:<br />
• TM4Plus1 FPGAs: going on. This work refers to the<br />
implementation of the processing described before on the<br />
two Alteras.<br />
• PUs: we need to reprogram the DSPs to do optimal<br />
filtering, and also the input FPGA.<br />
For the final ROD prototype we are currently designing a<br />
new PU based on FPGAs instead of DSPs. This will imply a<br />
reduction in cost, because the DSP PUs are the most expensive<br />
components in the LiArg ROD, and an increase in parallelism,<br />
as we will not be limited by the DSP architecture but by the<br />
FPGA capacity. Our working block diagram for a final<br />
TileCal ROD prototype is shown in figure 7.<br />
Figure 7: Final TileCal ROD prototype. The ROD motherboard keeps the VME<br />
communication, the trigger-type and BCID checks, the TTC FPGA and TTCrx, and the<br />
output controller, with FPGA processing-unit slots (trigger-type/BCID FIFOs and<br />
fragment FIFOs; two slots not used). The transition module holds four S-LINK LDC<br />
inputs with FIFOs, the Reformatting Altera (APEX 20K: FIFO/S-LINK control, parity,<br />
CRC and data alignment checks, energy fragment builders towards J2A/J2B), the<br />
Auxiliary Altera (APEX 20K: S-LINK control, BCID and trigger-type extraction, energy<br />
and time calculation units) and the integrated G-LINK output for the LVL2 data.<br />
As can be seen, we keep the LiArg motherboard for<br />
VME access, and the TM4Plus1 board, but substitute the DSP<br />
PUs with new FPGA-based ones. A redistribution of tasks also<br />
occurs, placing almost all processing on the FPGAs of<br />
the transition module, where the energy and timing estimation will<br />
take place. On the motherboard only data integrity checking<br />
and TTC operation will be done.<br />
By processing data on the transition module we reduce<br />
data volume flowing to the motherboard. This opens the<br />
possibility of increasing the number of input channels on the<br />
transition module by integrating them directly on the PCB<br />
(no more PMCs).<br />
B. The SLink PMC card<br />
In parallel to these activities we are also involved in the<br />
design and development of a DSP-based PMC card with<br />
SLINK input for testing the optimal filtering algorithms on a<br />
commercial VME processor.<br />
The basic idea is to have a PMC with SLINK input<br />
capability and with some intelligence deployed in an FPGA<br />
and a TI C6x DSP [2]. We are currently working with the<br />
block diagram shown in figure 8.<br />
Figure 8: PMC block diagram. An S-LINK interface feeds the XILINX FPGA, which<br />
contains the input FIFO, the data reordering and BCID checking logic and a look-up<br />
table; the FPGA communicates over the EMIF with the TEXAS INSTRUMENTS DSP,<br />
which receives the BCID and L1 ID over the McBSP0/McBSP1 serial ports and connects<br />
through its PCI interface to the host CPU.<br />
For the DSP we are currently designing with the<br />
TMS320C6205, which includes a PCI interface that saves us<br />
the task of implementing this interface in the FPGA. For the<br />
FPGA we are designing with the XILINX XC2S100 device.<br />
The DSP will load TTC data (BCID, EventID and Trigger<br />
Type) using two serial channels to make the data integrity<br />
operation and output data formatting, while the FPGA will<br />
take care of the SLINK interface, data reordering, BCID<br />
sequence check and the EMIF communication with the DSP.<br />
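The BCID integrity operation reduces to comparing, event by event, the BCID carried in the data stream with the BCID delivered by the TTC; a minimal sketch:<br />

```python
# Sketch of the BCID data-integrity check: for every event the
# BCID extracted from the data must match the BCID delivered by
# the TTC over the serial channels.

def check_bcid(ttc_bcids, data_bcids):
    """Return the event indices where data and TTC BCIDs disagree."""
    return [i for i, (t, d) in enumerate(zip(ttc_bcids, data_bcids))
            if t != d]

# Event 1 carries a wrong BCID in the data stream:
mismatches = check_bcid([1024, 1025, 2048], [1024, 1026, 2048])
```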
C. The ROD Controller<br />
Activity in this field is focused on the adaptation of the<br />
LiArg ROD software libraries to the setup at Valencia, based<br />
on a BIT3 VME-PC interface as ROD controller and the<br />
TileCal ROD integration with DAQ-1.<br />
The adaptation of the LiArg libraries is already finished<br />
and has required some effort on the driver side. For the DAQ-<br />
1 integration the work foreseen is the development of the<br />
Local ROD VME software, the online software and ROS<br />
dataflow. We expect to start this work very soon.<br />
VI. REFERENCES<br />
[1] ATLAS Trigger and DAQ steering group, “Trigger and Daq Interfaces<br />
with FE systems: Requirement document. Version 2.0”, DAQ-NO-103,<br />
1998.<br />
[2] TEXAS INSTRUMENTS, “TMS320C6205 Data sheet,” Application<br />
Report SPRS106, October 1999<br />
[3] K. Castille, "TMS320C6000 EMIF to external FIFO interface,"<br />
Application Report SPRA543, May 1999<br />
[4] J. Dowell, M. Pearce, “ATLAS front-end read-out link requirements,”<br />
ATLAS internal note, ATLAS-ELEC---1, July 1998<br />
[5] C. Bee, O. Boyle, D. Francis, L. Mapelli, R. McLaren, G. Mornacchi,<br />
J. Petersen, “The event format in the ATLAS DAQ/EF prototype-1,”<br />
Note number 050, version 1.5, October 1998<br />
[6] O. Boyle, R. McLaren, E. van der Bij, “The S-LINK interface<br />
specification,” ECP division CERN, March 1997<br />
[7] The LArgon ROD working group, “The ROD Demonstrator Board for<br />
the LArgon Calorimeter”<br />
[8] S. Böttcher, J. Parsons, S. Simion, W. Sippach “The DSP 6202<br />
processor board for ATLAS calorimeter”
Abstract<br />
After reviewing the architecture and design of the CMS<br />
data acquisition system, the requirements on the front-end data<br />
links as well as the different possible topologies for merging<br />
data from the front-ends are presented. The DAQ link is a standard<br />
element for all CMS sub-detectors: its physical specification<br />
as well as the data format and transmission protocol are<br />
elaborated within the Readout Unit Working Group where all<br />
sub-detectors are represented. The current state of the link definition<br />
is described here. Finally, prototyping activities<br />
towards the final link as well as test/readout devices for Front-<br />
End designers and DAQ developers are described.<br />
I. INTRODUCTION<br />
In the case of CMS, there will be about 9 different detectors<br />
providing ~1 MB of data per trigger to the DAQ (see Fig.<br />
1). Interfacing these sources with the DAQ is a critical point<br />
given the overall size and complexity of the final system (on-detector<br />
electronics, counting room electronics and DAQ).<br />
[Fig. 1 elements: the trigger with its LV1A (binary) distribution; the throttle paths aTTS and sTTS carrying back-pressure signals; the Event Manager (EVM) exchanging requests, trigger data and commands; and the Front-End, Readout Column, Switch and Filter Column blocks.]<br />
Fig. 1 CMS DAQ block diagram<br />
The Front-End (FE) operation is synchronous with the<br />
machine clock and is located in the underground areas (detector<br />
cavern and counting rooms). The distribution of fast signals<br />
(LV1A, machine clock, resets, fast commands) is carried out<br />
by the Timing, Trigger and Control system (TTC) [1]. The TTC<br />
provides to the FE the signals needed to detect the presence of<br />
data on every bunch crossing and to send the trigger selected data<br />
to the Readout Column (RC). In turn, the FE can throttle back<br />
the trigger by providing fast binary status signals to the synchronous<br />
Trigger Throttling System (sTTS) [2].<br />
The FE modules are read out by the RC (see Fig. 2) which<br />
is running asynchronously w.r.t. the machine clock, the RC<br />
being “trigger driven”. For every event, the FE pushes its data<br />
as soon as possible through the data transportation devices<br />
towards the RC located at the surface. The event data are then<br />
buffered in the Readout Unit Input (RUI). The RC receives its<br />
control messages through the Event Manager (EVM). The<br />
EVM is sub-divided into a Readout Manager (RM) and a<br />
Builder Manager (BM). The RM enables the data integrity<br />
check in the RUI and the writing of the event fragment into the<br />
Readout Unit Memory (RUM). The BM enables the Readout<br />
Unit Output (RUO) to send an event fragment to a requesting<br />
Builder Unit (BU) sitting on the other side of the switch network.<br />
As for the FE, the DAQ can also throttle the trigger by<br />
means of messages provided to the asynchronous TTS (aTTS)<br />
through a control network.<br />
Front-End / DAQ Interfaces in CMS<br />
G. Antchev, E. Cano, S. Cittolin, S. Erhan, W. Funk, D. Gigi, F. Glege, P. Gras, J. Gutleber, C. Jacobs, F.<br />
Meijers, E. Meschi, L. Orsini, L. Pollet, A. Racz, D. Samyn, W. Schleifer, P. Sphicas, C. Schwick<br />
CERN, Div. EP, Meyrin CH-1211 Geneva 23, Switzerland<br />
[Fig. 1 data-flow figures: 40 Tbytes/sec @ 40 MHz; 512 Readout columns; 100 Gbyte/sec @ 100 kHz; 512x512 legs; > 50 Gbit/sec; 5.10^6 MIPS; 100 Mbyte/sec @ ~100 Hz; the DAQ link feeds the computing services.]<br />
[Fig. 2 elements: in the Readout Column (RC), the FED and DDU/FED/DCC modules feed the Readout Unit (RU) over the ~200 m detector data link; within the RU the RUI, RUI bus, RUM, RUO bus, RUO and RUS lead to the switch data link.]<br />
Fig. 2 CMS Readout column block diagram<br />
II. FRONT-END DATA SOURCES<br />
As mentioned in the introduction, 9 sub-detectors will<br />
provide a total of 1 MB of data for every trigger. The central<br />
DAQ is designed to acquire this 1 MB of data at a maximum<br />
trigger rate of 100 kHz.<br />
According to the most up-to-date information, the data are provided<br />
as follows:<br />
• Pixel: 32 sources @ [850..2100] bytes<br />
• Tracker: 442 sources @ [300..1500] bytes<br />
• Preshower: 50 sources @ 2 kByte<br />
• ECAL: 56 sources @ 2 kByte ± 10-20%<br />
• HCAL: 24 sources @ 2 kByte<br />
• Muon-DT: 60 sources @ ~170 bytes
• Muon-RPC: 5 sources @ ~300 bytes<br />
• Muon-CSC: 36 sources @ ~120 bytes<br />
• Trigger: 4 sources @ 1kByte<br />
This makes a total of 709 sources with individual data sizes<br />
ranging from 120 bytes to ~ 2kByte. In order to use efficiently<br />
the nominal bandwidth of the DAQ hardware, a minimum<br />
packet size must be achieved by the front-end data sources<br />
(see Fig. 3). Given the current situation, the Pixel detector, the<br />
Tracker detector and the Muon detectors may need an additional<br />
concentration layer to match this requirement.<br />
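The effect shown in Fig. 3 can be reproduced with a back-of-the-envelope model: with a fixed per-packet overhead, small fragments waste a growing share of the nominal bandwidth. The 64-byte overhead below is an illustrative assumption; the 200 MB/s corresponds to the DAQ requirement of 2 KB @ 100 kHz:<br />

```python
# Toy model behind Fig. 3: effective throughput versus packet size
# for a link with a fixed per-packet overhead. The 64-byte overhead
# is an illustrative assumption, not a measured S-LINK64 figure.

NOMINAL_MBPS = 200.0   # required throughput: 2 KB @ 100 kHz
OVERHEAD_BYTES = 64    # assumed fixed cost per packet

def effective_throughput(packet_bytes):
    """Payload share of the nominal bandwidth, in MB/s."""
    return NOMINAL_MBPS * packet_bytes / (packet_bytes + OVERHEAD_BYTES)

# A 120-byte Muon-CSC fragment uses the link far less efficiently
# than a 2 kB calorimeter fragment:
small = effective_throughput(120)
large = effective_throughput(2048)
```

Under this assumed overhead the small fragments reach only about two thirds of the nominal rate, which is why a concentration layer for the low-occupancy sources pays off.<br />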
Fig. 3 Effective data throughput versus packet size<br />
III. READOUT INTERFACE<br />
The DAQ is the natural convergence point of the data produced<br />
by the sub-detectors. Reducing the diversity in the electronic<br />
devices is highly desirable if not outright necessary, in<br />
order to facilitate the system integration (especially during the<br />
initial debug phase) and also the maintenance operations.<br />
Therefore, the decision to use a common interface for all subdetectors<br />
was made at a very early stage of the DAQ design.<br />
The interface is defined and elaborated within the Readout<br />
Unit Working Group (RUWG) [3] where all data providers and<br />
data consumers are represented. A common functional specification<br />
document [4] is adopted by all CMS data producers.<br />
A. Detector Dependent Unit<br />
The Detector Dependent Unit (also known as the Front End<br />
Driver) hosts the interface between the DAQ and the sub-detector<br />
readout systems. No sub-detector specific hardware is<br />
foreseen after the DDU in the readout chain. If the event size at<br />
the FED/DDU level is far from 2 KBytes, an intermediate Data<br />
Concentrator Card (DCC) merges several FEDs/DDUs in<br />
order to reach the 2 KB per event. This element is not needed<br />
for all sub-detectors. When a DCC is present in the sub-detector<br />
data flow, the DCC is seen by the DAQ as the interface<br />
between the DAQ and the sub-detector.<br />
The task of the DDU is to deal with the specificities of<br />
each sub-detector and make available the data to the DAQ<br />
transportation hardware according to the specifications [4].<br />
The specifications include the minimum functionalities to be<br />
performed by the DDU (header generation, alignment checking…)<br />
and the description of the DAQ slot which is located on<br />
the DDU where the DAQ transportation hardware is plugged.<br />
B. DAQ slot<br />
The DAQ slot is an S-LINK64 port [5]. S-LINK64 is based on S-LINK¹ [6], which has been extended to match CMS needs (64 bits @ 100 MHz). The extension is implemented through an additional connector, allowing the use of standard S-LINK products until the final DAQ hardware becomes available.<br />
Fig. 4 DAQ slot on the FED/DCC (block diagram: detector links into the detector-specific electronics of the FED/DCC, S-LINK64 port with storage area and link FPGA, VME host interface)<br />
S-LINK as well as S-LINK64 specifies a pair of connectors (a sending and a receiving one) but not the physical link in between. The design and implementation of the S-LINK64 port on the FED is the responsibility of the sub-detector.<br />
IV. DAQ DATA TRANSPORTATION<br />
A. Data transportation requirements<br />
(Figure: FED-DAQ interface signals — setup, control, messages (out-of-sync, failure), monitoring, and fast signals (busy/ready, overflow warning))<br />
The required data throughput is 200 MB/s (2 kB @ 100 kHz) over a distance of 200 m (cable path between the underground areas and the surface DAQ building). The data transportation hardware must be able to absorb stochastic fluctuations of the event size and provide enough contingency to cope with large uncertainties on the LHC luminosity and the detector occupancy/noise. However, the available bandwidth will clearly have an upper limit that cannot be exceeded. It is assumed that data sources in need of higher bandwidth will have some of their channels read out by an additional FED.<br />
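The headline requirement above is simple arithmetic; a one-line check (with 2 kB taken as 2000 bytes, which is how the round figure comes out):<br />

```python
# Required sustained throughput: average fragment size times the
# Level-1 accept rate (2 kB taken as 2000 bytes to match the text).
fragment_bytes = 2000
l1_rate_hz = 100_000                              # 100 kHz trigger rate
throughput_mb_s = fragment_bytes * l1_rate_hz / 1e6
print(throughput_mb_s)                            # 200.0 MB/s
```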
In order to achieve a good working efficiency, the event builder must receive balanced traffic through its input ports, and destination clashes must be avoided as much as possible. As shown in section II. on page 1, some detectors feature a large spread in data size at the output of their data sources. Therefore, the data transportation hardware must be able to average the traffic over several FEDs by appropriately grouping FEDs with low and high data volumes per event.<br />
Regarding the staging policy, at day 0 the trigger rate and the event size will be far below nominal: the full capacity of the DAQ will be needed only after the LHC luminosity ramp-up and once nominal CMS detector efficiency is reached. A capacity of 25% of nominal is planned to be available on day 0, doubling after 6 months of data-taking to reach 100% after one year of operation. This staging strategy is also required to match the expected funding profile. Therefore, the data transportation architecture must allow a progressive deployment of the DAQ.<br />
1. Generic FIFO interface featuring 32 bits @ 40 MHz, specified at CERN by R. McLaren and E. van der Bij.<br />
B. Data transportation architecture<br />
The data transportation architecture (see Fig. 5) is strongly driven by the event builder features as well as by the staging strategy.<br />
The constituting elements of the data transportation are:<br />
• DAQ short reach link: transfers the FED data to the Front-<br />
End Readout Link card (FRL)<br />
• Front-End Readout Link card: receives the FED data via<br />
the short link and houses the long reach DAQ link<br />
• DAQ pit-PC: hosts the FRL and performs its configuration/control<br />
• DAQ long reach link: moves the data from the FED/DCC<br />
into the intermediate data concentrator located at the surface<br />
(200m cable path).<br />
• Intermediate data concentrator (FED Builder): implemented with an N x N crossbar switch. Each input is connected to a data source and, depending on the deployment phase, up to N Readout Unit Inputs (RUI) are connected to the switch outputs. At LHC startup, only one RUI is connected and processes the event fragments of N sources. Hence, by connecting hot and cold data sources to the same switch, traffic balancing is performed de facto by the FED Builder (FEDB). Whatever the deployment scenario, the data transportation from the pit to the FEDB is never modified; later, when higher bandwidth needs to be deployed, more RUIs are simply connected to the system.<br />
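The hot/cold grouping can be sketched as a simple pairing heuristic (our illustration, not the actual CMS source-to-switch assignment): sort sources by average fragment size and pair the largest with the smallest, so each FED Builder sees a roughly balanced aggregate volume.<br />

```python
# Illustrative hot/cold pairing of data sources for FED Builder inputs
# (a sketch under assumed fragment sizes, not the real CMS assignment).
def balance(sizes):
    ordered = sorted(sizes)
    # Pair the i-th smallest with the i-th largest source.
    return [(ordered[i], ordered[-1 - i]) for i in range(len(ordered) // 2)]

avg_fragment_bytes = [120, 170, 300, 1000, 2000, 2200]   # hypothetical sources
pairs = balance(avg_fragment_bytes)
print(pairs, [a + b for a, b in pairs])
```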
Fig. 5 Data transportation architecture (FED/DCCs in the underground area connect through short links to FRL cards — PCI cards hosted by pit-PCs — and through long links to the FED Builder in the surface area)<br />
The technologies used to build the central DAQ system are clearly those used in the telecommunication world. Hence, both the performance and the cost of the system profit from the evolution of this dynamic market. By adopting popular telecom/computer standards (e.g. PCI or InfiniBand), custom developments can co-exist with commercial products. Custom developments are, at the time of this paper, the only way to achieve the required performance (200 MB/s sustained throughput through all the RCs). As the performance of commercial products approaches the requirements, they will be considered at procurement time or as replacements for the custom implementation. Therefore, the use of popular standards in custom designs is a necessity given the most likely evolution and upgrades.<br />
V. PROTOTYPES<br />
The prototyping phase will extend until Q4 2002 (DAQ<br />
Technical Design Report submission). At this time, implementation<br />
choices will be frozen and the pre-series production/procurement<br />
phase will start.<br />
A. Short reach link prototype<br />
The current prototype is based on LVDS technology:<br />
• S-LINK64 compliant<br />
• max. cable length: 10 m<br />
• max. throughput: 869 MB/s<br />
• BER < 10^-15<br />
The sender card plugs into a FED and the receiver card is<br />
hosted by a multi-purpose PCI card (called Generic III or<br />
GIII). This forms the Hardware Readout Kit (HRK) provided<br />
to FED developers for laboratory work and beam test activities.<br />
B. FRL prototype<br />
Fig. 8 Generic III block diagram (Altera APEX FPGA, 32 MB SDRAM, 1 MB flash, LVDS input, PCI 64 bits @ 66 MHz)<br />
Fig. 9 Generic III picture<br />
C. Long reach link prototype<br />
As one end of this link is connected to the FED builder, its<br />
technology will be identical to the switch technology. Currently,<br />
Myrinet [10] is considered as the baseline technology<br />
for the FED Builder. The possible options for implementing<br />
such a link are the following:<br />
• Off-the-shelf PMC hosted by the FRL card<br />
• Myrinet protocol/core implemented in FPGA<br />
• FRL with embedded Myrinet processor (Lanai-10)<br />
These options are currently being evaluated and discussed with Myricom. The final decision will be taken for the DAQ TDR submission (Q4 2002). Meanwhile, prototyping activities continue.<br />
VI. INFRASTRUCTURES<br />
A. FRL housing<br />
As presented above, there is one FRL per data source and the FRL is a PCI card. Therefore, PCI slots must be available in the underground counting rooms. Using PCs to host the FRLs would require much more space than PCI cages. Such cages have 13 PCI slots and an interface to the control PC. Rack-mounted PCs with 7 free PCI slots (4U) make it possible to control 91 data sources within a standard 42U computer rack. A total of eight racks is needed for the entire set of front-end data sources.<br />
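The rack count follows directly from these numbers; a small sketch of the arithmetic (our reading: one control PC drives 7 cages through its 7 free PCI slots, each cage holding 13 FRL cards):<br />

```python
import math

# Rack-count arithmetic for the FRL housing (interpretation assumed:
# 7 cage-interface cards per control PC, 13 PCI slots per cage).
cages_per_pc = 7
slots_per_cage = 13
sources_per_rack = cages_per_pc * slots_per_cage   # FRLs per 42U rack

total_sources = 709                                # from section II
racks_needed = math.ceil(total_sources / sources_per_rack)
print(sources_per_rack, racks_needed)              # 91 8
```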
Fig. 10 A PCI cage with 13 slots<br />
VII. CONCLUSION<br />
An important fraction of the DAQ system (~90%) will be based on standard commercial components from the telecom and computing industries. The breathtaking improvements in speed, capacity and cost delivered by these industries are well established and expected to continue. Clearly, the benefits of delaying design choices have to be balanced against the constraint of providing readout capability to the front-end electronics, which are already in production now. The plan described in this paper addresses both of these constraints, by providing hardware prototypes to the current developers while delaying the final technology choices upstream in the DAQ system.<br />
VIII. REFERENCES<br />
[1] http://ttc.web.cern.ch/TTC/intro.html<br />
[2] http://cmsdoc.cern.ch/cms/TRIDAS/horizontal/docs/<br />
tts.pdf<br />
[3] http://cmsdoc.cern.ch/cms/TRIDAS/horizontal/<br />
[4] A. Racz, "DDU design specifications", CMS Note 1999-010<br />
[5] A. Racz, R. McLaren, E. van der Bij, "The S-LINK 64 bit extension specification: S-LINK64"<br />
[6] O. Boyle, R. McLaren, E. van der Bij, "The S-LINK Interface Specification"<br />
[7] http://cmsdoc.cern.ch/~dgigi/uni_board.htm<br />
[8] http://cmsdoc.cern.ch/cms/TRIDAS/horizontal/DDU/content.html<br />
[9] G. Antchev et al., "Generic hardware for DAQ applications", LEB 1999 Proceedings<br />
[10] http://www.myri.com<br />
The Embedded Local Monitor Board (ELMB)<br />
in the LHC Front-end I/O Control System<br />
B. Hallgren 1, H. Boterenbrood 2, H. J. Burckhart 1, H. Kvedalen 1<br />
1 CERN, 1211 Geneva 23, Switzerland, 2 NIKHEF, NL-1009 DB Amsterdam, The Netherlands<br />
bjorn.hallgren@cern.ch, boterenbrood@nikhef.nl, helfried.burckhart@cern.ch, hallvard.kvedalen@cern.ch<br />
Abstract<br />
The Embedded Local Monitor Board is a plug-on board to be used in the LHC detectors for a range of different front-end control and monitoring tasks. It is based on the CAN serial bus, is radiation tolerant, and can be used in magnetic fields. The main features of the ELMB are described and results of several radiation tests are presented.<br />
I. INTRODUCTION<br />
A versatile, general-purpose, low-cost system for front-end control, the Local Monitor Box (LMB), was designed in 1998 and tested by ATLAS sub-detector groups in test-beam and other applications [1]. Based on this experience, and to match all the needs of the ATLAS sub-detector groups, a modified version, the Embedded Local Monitor Board (ELMB), was designed. The main differences with respect to the LMB are the plug-on feature and the small size (50 x 67 mm). It can either be put directly onto the sub-detector front-end electronics, or onto a general-purpose motherboard which adapts the I/O signals. In order to make the ELMB available for evaluation, a small-scale production of 300 boards has been made.<br />
A. Environmental Requirements<br />
The ELMB is intended to be installed in the underground<br />
cavern of LHC detectors. As an example of such radiation<br />
environments the simulated radiation levels [2] for 10 years of<br />
operation of the ATLAS Muon MDT detectors are given<br />
below:<br />
• Total Ionising Dose (TID): 6.4 Gy<br />
• Non-Ionising Energy Loss (NIEL): 3×10^11 neutrons/cm^2 (1 MeV Si equivalent)<br />
• Single Event Effect (SEE): 4.8×10^10 hadrons/cm^2 (>20 MeV)<br />
The magnetic field in which the Muon detectors operate is 1.5 T, which makes it difficult to use DC-DC converters and other components containing ferromagnetic parts, such as the transformers often used in commercial off-the-shelf systems. These components have been avoided in the design of the ELMB. Another requirement is remote operation over distances of up to 200 m.<br />
II. DESCRIPTION OF THE ELMB<br />
The ELMB has an on-board CAN-interface and is insystem<br />
programmable, either via an on-board connector or via<br />
CAN. There are 18 general purpose I/O lines, 8 digital inputs<br />
and 8 digital outputs. Optionally a 16-bit ADC and<br />
multiplexing for 64 analogue inputs is provided on-board as<br />
shown in Figure 1.<br />
Figure 1: Simplified block diagram of the ELMB module (analog section: ±5V voltage regulators, 64-channel MUX and CS5523 ADC, powered at 5.5 to 12 V / 10 mA; digital section: ATmega103L master and AT90S2313 slave microcontrollers, SAE81C91 CAN controller, DIP switches and digital I/O ports A, C and F, powered at 3.5 to 12 V / 15 mA; CAN section: transceiver powered at 5.5 to 12 V / 20 mA via the bus cable; the three sections are separated by optocouplers and have individual grounds)<br />
A. Power distribution<br />
As seen in Figure 1, the ELMB is divided into three sections: analog, digital and CAN. They are separated by optocouplers to prevent current loops. Each of the three parts is equipped with a Low Dropout (LDO) 80 mA voltage regulator from Micrel (MIC5203). These regulators provide current and thermal limiting, a useful feature for protection against Single Event Latch-up (SEL). The analog circuits need ±5V, which is generated by a separate CMOS switched-capacitor circuit. The total analog current consumption is 10 mA. The power supply of the digital section is 3.3V, 15 mA. The CAN part of the ELMB may be powered via the CAN cable and needs 20 mA at 5.5V.<br />
B. The Analog Circuits - ADC<br />
A 16-bit differential delta-sigma ADC with 7-bit gain control (Crystal CS5523) is used, placed on the back side of the printed circuit board. The CS5523 is a highly integrated CMOS circuit which contains an instrumentation chopper-stabilised amplifier, a digital filter, and calibration circuits. 16 CMOS analog differential multiplexers expand the number of inputs to 64. The AD680JR from ANALOG DEVICES supplies a stable voltage reference. The ADC input can handle a range between -4.5 and +4.5V. Figure 2 shows the backside of the printed circuit board with the ADC, the voltage reference and the 16 multiplexer circuits.<br />
Figure 2: The backside of the ELMB printed circuit board (50 x 67 mm)<br />
C. The Digital Circuits<br />
The local intelligence of the ELMB is provided by two microcontrollers of the AVR family of 8-bit processors, manufactured by ATMEL. This family of microcontrollers is based on a RISC processor developed by Nordic VLSI and is particularly efficient in power consumption and instruction speed. The ELMB's main processor is the ATmega103L running at 4 MHz. This CMOS integrated circuit contains on-chip 128 Kbytes of flash memory, 4 Kbytes of SRAM, 4 Kbytes of EEPROM and a range of peripherals including timers/counters and general-purpose I/O pins. The main monitoring and control applications run on this processor.<br />
The second on-board microcontroller is a much smaller<br />
member of the same AVR family, the AT90S2313 with 2<br />
Kbytes flash-memory, 128 bytes of SRAM and 128 bytes of<br />
EEPROM. The main purpose of this processor is to provide<br />
In-System-Programming (ISP) via CAN for the ATmega103L<br />
processor. In addition it monitors the operation of the<br />
ATmega103L and takes control of the ELMB if necessary.<br />
This feature is one of the protections against SEE. In turn the<br />
ATmega103L monitors the operation of the AT90S2313 and<br />
provides ISP for it. Figure 3 shows the front-side of the<br />
ELMB printed circuit board with the two microcontrollers and<br />
the CAN circuit.<br />
Figure 3: The front side of the ELMB<br />
D. CAN circuits<br />
CAN is one of the three CERN recommended fieldbuses<br />
[3]. It is especially suited for sensor readout and control<br />
functions in the implementation of a distributed control<br />
system because of reliability, availability of inexpensive<br />
controller chips from different suppliers, ease of use and wide<br />
acceptance by industry. The error checking mechanism of<br />
CAN is of particular interest in the LHC environment where<br />
bit errors due to SEE will occur. The CAN controller registers<br />
a node's error and evaluates it statistically in order to take<br />
appropriate measures. These may extend to disconnecting the<br />
CAN node producing too many errors. Unlike other bus<br />
systems, the CAN protocol does not use acknowledgement<br />
messages but instead signals any error that occurs.<br />
For error detection the CAN protocol implements three<br />
mechanisms at the message level:<br />
• Cyclic Redundancy Check (CRC)<br />
• Message frame check<br />
• Acknowledgement errors<br />
The CAN protocol also implements two mechanisms for<br />
error detection at the bit level:<br />
• Monitoring<br />
• Bit stuffing<br />
If one or more errors are discovered by at least one station using the above mechanisms, the current transmission is aborted by sending an error message. This prevents other stations from accepting the faulty message and thus ensures the consistency of data throughout the network. When the transmission of an erroneous message has been aborted, the sender automatically re-attempts transmission (automatic repeat request).<br />
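The bit-stuffing rule mentioned above is easy to state in code. The sketch below (an illustration of the CAN rule, not ELMB firmware) inserts a complementary bit after every run of five identical bits; a receiver that sees six equal consecutive bits therefore knows a stuffing error occurred.<br />

```python
def stuff_bits(bits):
    """Insert a complementary stuff bit after every run of five
    identical consecutive bits, as the CAN protocol requires."""
    out, prev, run = [], None, 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == prev else 1
        prev = b
        if run == 5:
            out.append(1 - b)        # stuff bit, opposite polarity
            prev, run = 1 - b, 1
    return out

print(stuff_bits([1, 1, 1, 1, 1, 1]))   # [1, 1, 1, 1, 1, 0, 1]
```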
The on-board CAN controller is the Infineon SAE81C91, a so-called 'Full-CAN controller' with buffers for 16 messages. It is connected via high-speed optocouplers to a transceiver circuit (Philips PCA82C250) which translates the logic levels to CAN bus levels. This bipolar integrated circuit has an operating temperature range of -40 to 125 °C and contains several protection features. The microcontrollers communicate with the CAN controller via a serial interface.<br />
E. Software<br />
CANopen [4] has been chosen as the higher-layer protocol. CANopen standardises the way data is structured and communicated. Of particular relevance for LHC applications is the network management: a master watches all the nodes to see if they are operating within their specifications. The most recent version of CANopen recommends using heartbeat messages for the supervision of the nodes. A general-purpose CANopen embedded software program (ELMBio) for the ELMB master processor has been developed [5]. 64 analog input channels, up to 16 digital inputs (PORTF and PORTA) and up to 16 digital outputs (PORTC and PORTA) are supported. ELMBio conforms to the CANopen DS-401 Device Profile for I/O modules [7] and provides sufficient flexibility to make it suitable for a wide range of applications.<br />
The ELMBio source code is available as a framework for<br />
further developments and additions by users, who want to add<br />
or extend functionality, e.g. support for specific devices [6].<br />
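The heartbeat supervision described above amounts to a per-node timeout on the master side. A minimal sketch (our illustration, not the ELMBio or SCADA implementation; node ids and timings are made up):<br />

```python
# Minimal sketch of CANopen-style heartbeat supervision: the master
# declares a node lost when no heartbeat arrives within the expected
# producer period plus a safety margin.
class HeartbeatMonitor:
    def __init__(self, period_s, margin=1.5):
        self.timeout = period_s * margin
        self.last_seen = {}                  # node id -> last heartbeat time

    def beat(self, node_id, now):
        self.last_seen[node_id] = now

    def lost_nodes(self, now):
        return sorted(n for n, t in self.last_seen.items()
                      if now - t > self.timeout)

mon = HeartbeatMonitor(period_s=1.0)         # hypothetical 1 Hz heartbeats
mon.beat(0x3A, now=0.0)
print(mon.lost_nodes(now=0.5))               # [] -- still alive
print(mon.lost_nodes(now=2.0))               # [58] -- heartbeat missed
```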
Figure 4: The ELMB motherboard (front and back)<br />
III. ELMB MOTHERBOARD<br />
A motherboard is available in order to evaluate the ELMB and for non-embedded applications, see Figure 4. It contains two 100-pin SMD connectors for the ELMB and sockets for adapters for the 64-channel ADC. The purpose of the adapters is to convert the input signals to levels suitable for the ADC. Adapters are available for voltage measurements and for resistive sensors in 2- and 4-wire connections. The motherboard may be mounted in a DIN-rail housing of size 80 x 190 mm^2. On the front side are connectors for the ADC inputs, the digital ports, an SPI interface, the CAN interface, and power.<br />
IV. RADIATION TESTS<br />
Several radiation tests for TID, SEE effects and NIEL<br />
have been performed.<br />
A. TID tests<br />
Three pre-selection TID tests have been made on 4 different ELMBs. Three of the ELMBs were from the first prototype series powered with 5V, while the fourth was from the 3.3V series.<br />
1) The Pagure test<br />
Two ELMBs were exposed to a Co-60 γ-source [1 MeV] at the PAGURE facility [8]. They worked without problems up to 30 Gy. At this point the power supply current started to increase, by up to a factor of 10. Apart from this increase, the ELMBs kept working essentially up to about 80 Gy, when the measurements were stopped. The cause of the increase in the current was found to be the three CMOS components ATmega103L, AT90S2313 and SAE81C91. The dose rate in this test was 77 Gy/h, which is 10^5 times higher than the ELMB is expected to receive at the LHC. It was therefore decided to repeat the tests at lower rates at CERN.<br />
2) The first GIF test<br />
The CERN Gamma Irradiation Facility (GIF) has a Cs-137 γ-source [0.6 MeV]. The dose rate can be chosen in a wide range, from 0.5 Gy/h down to 0.02 Gy/h. A test was done with one ELMB (given the identifier ELMB3) at a dose rate of 0.48 Gy/h [9]. The result was similar to the PAGURE test, with the current increase starting at about 35 Gy. The test was stopped at 43 Gy, when the current had increased by 20%. Both microcontrollers were still functional; however, the in-system programming function of the master failed. The slave processor was found to be working without any faults.<br />
3) Accelerated ageing test<br />
After the radiation test, ELMB3 was tested for 12 days in a climate chamber [9]. At the same time a non-irradiated ELMB (ELMB4) was tested for comparison. The total number of equivalent device hours reached was about 40000 h at 25 °C. Figure 5 shows how the current varied during the test. The current of the irradiated ELMB increased after each temperature increase but then decreased exponentially; the current of the non-irradiated ELMB did not show this behaviour. Both ELMBs were still operating at 85 °C, but stopped working at 100 °C, which is outside the specifications of the components. After the test the current of both ELMBs returned to the original value. The master processor had fully recovered and could be reprogrammed.<br />
Figure 5: Current variations during the temperature test<br />
4) GIF test 2<br />
In order to determine when the reprogramming function of the ELMB microcontrollers ceases to work, an additional test of a 3.3V ELMB (ELMB5) was performed [10]. The irradiation was done in periods of approximately 10 hours, each followed by a 14-hour break. After each step the reprogramming function was checked. This function failed after 35 Gy; at that moment a small current decrease could be observed. From then on the ELMB received a continuous dose and the current increased. Figure 6 shows how the digital currents changed in all the TID tests.<br />
Figure 6: Comparison of the digital current for all TID tests<br />
5) Conclusions from the TID tests of the ELMBs<br />
It was observed that the re-programming function of the flash memory and EEPROM in the master microcontroller ceases to work at a total received dose of around 35 Gy. The digital current also increases substantially at a total dose of about 40 Gy. However, this did not affect the operation of the ELMB up to about 80 Gy.<br />
B. SEE tests<br />
The ELMB was irradiated with 60 MeV protons at the CYClotron of LOuvain-la-NEuve (CYCLONE) of the Université Catholique de Louvain, in Belgium [11]. The main purpose was to study SEE effects on the ELMB, but some TID measurements were made as well. A total fluence of 3.28×10^11 protons/cm^2 was divided among 11 ELMBs. Each ELMB received an ionising dose of 39 Gy. Two types of tests were performed: a systematic test of memories and registers, and a functional test. They are described in detail in [11].<br />
1) Result of the systematic memory and register tests<br />
Special software was run in the ELMB which, in addition to the normal program, also performed systematic bit tests of the different memories and registers of the ELMB. Figure 7 shows the addresses of the ATmega103L SRAM where the bit errors were located versus fluence (the total fluence reached was 3.28×10^11 protons/cm^2).<br />
Figure 7: Addresses of the SRAM where SEE occurred versus<br />
fluence<br />
A summary of the memory and register errors is shown in<br />
Table 1. No error was found in the flash memory or in the<br />
EEPROM. Many errors were found in the SRAM as expected.<br />
The SRAM is twice as sensitive as the registers in the CAN<br />
controller SAE81C91 and ADC CS5523.<br />
Table 1: Results of the systematic SEU test<br />
          No of bits tested | No of errors | Cross-section (cm^2/bit)<br />
SRAM      16384             | 2320         | 4.3×10^-13<br />
EEPROM    28672             | 0            | -<br />
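The SRAM cross-section in Table 1 follows from the error count and the fluence; a sketch of the arithmetic, using the numbers quoted in the text:<br />

```python
# Per-bit SEU cross-section: sigma = errors / (bits_tested * fluence).
bits_tested = 16384          # ATmega103L SRAM bits
errors = 2320                # SEUs observed
fluence = 3.28e11            # protons/cm^2 (60 MeV)

sigma = errors / (bits_tested * fluence)
print(f"{sigma:.1e} cm^2/bit")   # ~4.3e-13, matching Table 1
```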
2) Results of the functional SEE test<br />
There will be on the order of 3000 ELMBs installed in ATLAS. For topological and operational reasons, at most 64 ELMBs will form a CAN branch. As shown in this paper, errors due to radiation will occur. Table 2 lists the different types of errors with their symptoms, the method to recover from them and their maximally allowed rate.<br />
Table 2: Maximum allowed SEE rates in the DCS system<br />
SEE category / Symptoms        | Error recovery     | Maximum allowed rate<br />
Soft SEE / Data readout errors | Automatic recovery | 1 every 10 minutes per CAN branch<br />
Soft SEE / CAN node hangs      | Software reset     | 1 every 24 hours per CAN node<br />
Soft SEE / CAN branch hangs    | Power cycling      | 1 every 24 hours per CAN branch<br />
Hard SEE / Permanent error     | Replace ELMB       | 1 every 2 months for 3000 ELMBs<br />
Destructive SEE / Damage       | Power limitation   | Not allowed<br />
In total, 29 abnormal situations were detected in 131157 CAN messages recorded for 3.28×10^11 protons/cm^2. These events are divided into categories according to how the normal behaviour was restored, see Table 3.<br />
Table 3: Results of the SEE test compared with requirements<br />
SEE category / Recovery       | Result of the SEE test | Requirement<br />
Soft SEE / Automatic recovery | 20                     | 2604<br />
Soft SEE / Software reset     | 5                      | 1157<br />
Soft SEE / Power cycling      | 4                      | 18<br />
Hard SEE                      | 0                      | 0.006<br />
Of the SEEs that required power cycling, one was due to an increase in the digital current and is therefore believed to be an SEL. All other SEEs were soft SEEs. No hard or destructive SEEs were found. All ELMBs were working perfectly after the test.<br />
4) TID effects<br />
The dose amounted to 39 Gy for 10 of the ELMBs and to 44.5 Gy for one of them. The TID is estimated using a conversion factor for 60 MeV protons of 1.0×10^10 protons/cm^2 per 13 Gy of ionising dose. The average fluence per ELMB was 3.0×10^10 protons/cm^2. The power supply currents were measured on-line; the change measured was negligible (< 0.3%). All voltages of the LDO regulators and the ADC voltage reference were also found to be unchanged. Finally, all ELMBs were checked to see whether the reprogramming function of the microcontrollers was still working: all proved to work perfectly.<br />
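The fluence-to-dose conversion quoted above works out as follows (numbers taken from the text):<br />

```python
# Proton-fluence to TID conversion for 60 MeV protons:
# 1.0e10 protons/cm^2 corresponds to 13 Gy of ionising dose.
gy_per_fluence = 13.0 / 1.0e10      # Gy per (proton/cm^2)

total_fluence = 3.28e11             # protons/cm^2, shared by 11 ELMBs
per_elmb = total_fluence / 11       # ~3.0e10 protons/cm^2 each
dose = per_elmb * gy_per_fluence    # ~39 Gy, as stated in the text
print(round(per_elmb / 1e10, 1), round(dose))
```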
C. NIEL<br />
Tests on 10 ELMBs with 1 MeV neutrons were performed at the PROSPERO reactor in order to test the bipolar components of the ELMB. 5 of the ELMBs were irradiated to 6×10^11 n/cm^2 (equiv. 1 MeV Si) and the other 5 to 3×10^12 n/cm^2 (equiv. 1 MeV Si). All 10 were found to be working perfectly after the test. Measurements on the bipolar LDO voltage regulators and the AD680JR voltage references showed that they were all within specifications.<br />
V. CONCLUSIONS<br />
The ELMB has proven to be a versatile general-purpose I/O device, very well matched to the needs of the LHC experiments. All ATLAS sub-detectors have decided to use it on a large scale, the biggest system comprising 1200 ELMBs. CAN is an excellent choice for the readout due to its robustness and error-handling facilities. It has also been shown that a certain level of radiation tolerance can be achieved using COTS components. For example, the requirements of the ATLAS Muon MDT detector are fulfilled concerning SEE and NIEL. The required TID figures, including a safety factor of 7, vary from 9.3 Gy to 44.7 Gy for the different MDT chambers; for more than 97% of them the requirements are fully satisfied. More investigations, and possibly some special measures, may be required to use the ELMB for the remaining chambers.<br />
VI. ACKNOWLEDGEMENTS<br />
We would like to thank M.Dentan for helping us with the<br />
definition and execution of the radiation tests. We are grateful<br />
to the EP-ESS group for collaborating in the production of the<br />
ELMB and to the EP division for the support as a Common<br />
Project.<br />
VII. REFERENCES<br />
[1] B. Hallgren et al., "A Low-Cost I/O Concentrator using the CAN Fieldbus", ICALEPCS'99 conference, Trieste, Italy, 4-8 October 1999<br />
[2]_http://atlas.web.cern.ch/Atlas/GROUPS/FRONTEND/W<br />
WW/RAD/RadWebPage/RadConstraint/Radiation_Tables<br />
_031000.pdf<br />
[3] G.Baribaud et al, "RECOMMENDATIONS FOR THE<br />
USE OF FIELDBUSES AT CERN", CERN ECP 96-11,<br />
June 1996. http://itcowww.cern.ch/fieldbus/report1.html.
[4] CAN in Automation (CiA), D-91058 Erlangen<br />
(Germany). http://www.can-cia.de/<br />
[5] H.Boterenbrood, “Software for the ELMB (Embedded<br />
Local Monitor Board) CANopen module”, NIKHEF,<br />
Amsterdam, 25 July 2001.<br />
[6] http://www.nikhef.nl/pub/departments/ct/po/html/<br />
ELMB/ELMBresources.html<br />
[7] CAN in Automation (CiA), “CANopen Device Profile for<br />
Generic I/O Modules”, CiA DS-401, Version 2.0, 20<br />
December 1999.<br />
[8] H. Burckhart, B. Hallgren and H. Kvedalen, "Irradiation Measurements of the ATLAS ELMB", CERN ATLAS Internal Working Note, DCS-IWN9, 8 March 2001<br />
[9] J. Cook, B. Hallgren and H. Kvedalen, "Radiation test at GIF and accelerated ageing of the ELMB", CERN ATLAS Internal Working Note, DCS-IWN10, 2 May 2001<br />
[10] B. Hallgren and H. Kvedalen, "Radiation test of the 3.3V version ELMB at GIF", CERN ATLAS Internal Working Note, DCS-IWN11, 31 August 2001<br />
[11] H. Boterenbrood, H.J. Burckhart, B. Hallgren, H. Kvedalen and N. Roussel, "Single Event Effect Test of the Embedded Local Monitor Board", CERN ATLAS Internal Working Note, DCS-IWN12, 20 September 2001<br />
Design of a Data Concentrator Card for the CMS Electromagnetic Calorimeter Readout<br />
J. C. Silva (1), N. Almeida (1), V. Antonio (2), A. Correia (2), P. Machado (2), I. Teixeira (2), J. Varela (1)(3)<br />
Abstract<br />
The Data Concentrator Card (DCC) is a module that in the<br />
CMS Electromagnetic Calorimeter Readout System is<br />
responsible for data collection in a readout crate, verification<br />
of data integrity and data transfer to the central DAQ. The<br />
DCC should sustain an average data flow of 200 Mbyte/s. In<br />
the first part of the paper we summarize the physics<br />
requirements for the ECAL readout and give results on the<br />
expected data volumes obtained with the CMS detector<br />
simulation (ORCA software package). In the second part we<br />
present the module's design architecture and the adopted<br />
engineering solutions. Finally we give results on the expected<br />
performance derived from a detailed simulation of the<br />
module's hardware.<br />
1. INTRODUCTION<br />
The CMS Electromagnetic Calorimeter comprises<br />
approximately 77,000 crystals, organized in supermodules in<br />
the barrel and in D-shaped modules in the endcaps (Figure 1).<br />
After amplification, the crystal signals are digitized (on the<br />
detector) and then transmitted to the counting room by high-speed<br />
optical links (800 Mbit/s). The upper-level readout and<br />
trigger boards (ROSE boards) are designed to receive 100<br />
optical links, corresponding to the same number of crystals.<br />
Within each ROSE, data is stored in pipeline memories<br />
waiting for the first-level trigger decision (L1A), and in<br />
parallel it is used to compute trigger primitives that are sent to<br />
the regional calorimeter trigger system. Seventeen ROSE<br />
boards, filling a single VME-9U crate, handle the 1700<br />
crystals in a barrel supermodule. Endcap crates are equipped<br />
with 14 readout boards. In each crate, the DCC is the common<br />
collection point of data from the ROSE boards, performing<br />
local event building and data transmission to the DAQ<br />
system.<br />
If every crystal (ten data samples each) were read out,<br />
about 2.4 Mbytes would need to be handled for each<br />
triggered event. Due to several constraints this data volume is<br />
unacceptable. By agreement, the ECAL fraction of the DAQ<br />
bandwidth is constrained to be approximately 10% of the<br />
CMS total, bringing the average data rate per DCC to<br />
200 Mbyte/s, for the maximum first-level trigger rate<br />
(100kHz). Techniques for obtaining optimal, efficient use of<br />
the allocated bandwidth, such as zero suppression and<br />
selective readout were studied using data generated with the<br />
CMS physics simulation and reconstruction software (ORCA-<br />
Object Oriented Reconstruction for CMS Analysis) [1].<br />
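The per-DCC event-size target follows directly from the figures above. A minimal check of that arithmetic, using only the numbers quoted in the text (100 kHz L1A rate, 200 Mbyte/s per DCC):<br />

```python
# Back-of-envelope check of the DCC bandwidth budget quoted above.
# Both input figures are taken from the text.

L1A_RATE_HZ = 100e3       # maximum first-level trigger rate
DCC_BANDWIDTH = 200e6     # allocated average data rate per DCC (bytes/s)

# Average event size each DCC may ship to the DAQ per trigger:
event_size_per_dcc = DCC_BANDWIDTH / L1A_RATE_HZ
print(event_size_per_dcc)  # 2000.0 bytes, i.e. the ~2 kByte target
```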
(1) LIP-Lisbon, (2) INESC, Lisbon, (3) CERN<br />
Figure 1: ECAL Endcap and Barrel geometry.<br />
This paper presents the DCC conceptual design, validated by<br />
a detailed modeling and simulation of the hardware using<br />
ECAL physics data as input. In section 2 we describe the<br />
ECAL data generation process, section 3 presents the DCC<br />
conceptual design, section 4 describes the hardware modeling<br />
and finally in section 5 we present the hardware simulation<br />
results.<br />
2. ECAL RAW DATA GENERATION<br />
To motivate the best DCC design, the DCC hardware<br />
simulation was done using as input the expected ECAL data,<br />
for both Endcap and Barrel, as derived from a detailed<br />
detector simulation with ORCA version 4_4_0 optimized.<br />
The simulation was performed for jet events, generated with<br />
transverse energy between 50 and 100 GeV, which are<br />
representative of the CMS triggered events. High-luminosity<br />
running (L ~ 10^34 cm^-2 s^-1) was assumed, corresponding to<br />
approximately 17 pileup events per crossing. The pileup<br />
simulation is particularly important since a large fraction of<br />
the ECAL data volume results from the pileup minimum-bias<br />
events. Various scenarios of zero suppression and tower<br />
selective readout were applied to the data, allowing the<br />
evaluation of the DCC performance under different data rate<br />
conditions.<br />
The target data volume for the ECAL is approximately<br />
2 kByte per DCC, which implies a reduction factor of ~20 on<br />
the total ECAL data volume. This data reduction can be<br />
achieved, without hurting the ECAL physics potential, by<br />
applying selective readout and zero suppression techniques to<br />
the data. Zero suppression removes crystals with energy lower<br />
than a programmable threshold (typically set between zero and<br />
two r.m.s. of the electronics noise). Zero suppression is<br />
complemented by a Selective Readout scheme based on the<br />
Trigger Tower transverse energy sum. The tower transverse<br />
energy is compared to two programmable thresholds, typically<br />
set at 1.0 and 2.5 GeV. Crystals in towers with ET above the<br />
lower threshold, as well as crystals in towers surrounding a<br />
central tower with ET larger than the higher threshold, are<br />
selected for readout. As shown in figure 2, the average data<br />
volume per DCC is reduced from ~3.3 kB to ~1.3 kB when<br />
Selective Readout is used together with a milder Zero<br />
Suppression. Moreover, the chosen organization of the readout<br />
channels guarantees that the DCC event size is constant over<br />
the whole detector, as also shown in figure 2.<br />
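The two reduction techniques can be sketched as follows. This is an illustrative toy, not the CMS implementation: the thresholds follow the values quoted in the text, but the tower layout and the `neighbours()` adjacency rule are hypothetical stand-ins for the real trigger-tower geometry.<br />

```python
NOISE_RMS = 1.0            # electronics noise r.m.s. (arbitrary units)
ZS_CUT = 2.0 * NOISE_RMS   # zero-suppression threshold (0-2 sigma typical)
SR_LOW, SR_HIGH = 1.0, 2.5 # tower ET thresholds in GeV (from the text)

def neighbours(tower, towers):
    """Hypothetical adjacency: towers whose index differs by 1."""
    return {t for t in towers if abs(t - tower) == 1}

def select_crystals(tower_et, crystals):
    """tower_et: {tower_id: ET}; crystals: {tower_id: [energies]}.
    Returns the list of crystal energies selected for readout."""
    # Towers read out in full: ET above the low threshold, or
    # neighbouring a central tower above the high threshold.
    full = {t for t, et in tower_et.items() if et > SR_LOW}
    for t, et in tower_et.items():
        if et > SR_HIGH:
            full |= neighbours(t, tower_et)
    out = []
    for t, energies in crystals.items():
        if t in full:
            out.extend(energies)  # full readout, no suppression
        else:                     # elsewhere, zero-suppress
            out.extend(e for e in energies if e > ZS_CUT)
    return out
```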
Figure 2: Event size per DCC in the barrel, without SR (ZS at 2σ) and with SR(2.5, 1.0) plus ZS at 0σ.<br />
Figure 3 shows the total ECAL event size for various<br />
combinations of the Zero Suppression and Selective Readout<br />
thresholds. The target value of 100 kBytes is easily achieved<br />
using very low thresholds and consequently without losing<br />
significant physics data.<br />
Figure 3: Zero Suppression and Selective Readout Scenarios<br />
3. DCC CONCEPTUAL DESIGN<br />
The DCC is partitioned into two printed-circuit boards: a<br />
main 9U board (the DCC Mother Board) and a 6U transition<br />
board (the DCC Input Board). In this way the interconnection at<br />
the crate level is simplified. The DCC Input Board contains<br />
17 input links (point to point links), input memories and<br />
respective handlers. The DCC Mother Board houses the Event<br />
Builder, the output memories and the output links. Two<br />
output links are included, one transporting ECAL data to the<br />
main DAQ, and another transporting only trigger data to the<br />
trigger monitoring system. The DCC can be accessed via the<br />
VME bus, for initialization, board monitoring and data<br />
readout in spy mode.<br />
The Event Builder is the most important and complex part of<br />
the DCC. The main purpose of this block is to assemble the<br />
data fragments arriving from 17 ROSE boards in a single data<br />
packet (DCC event), and to store it in the output memories<br />
ready for transmission. A number of checks and<br />
complementary operations are performed by the Event<br />
Builder. It verifies the integrity of the input data fragments,<br />
checks the synchronization of the event fragments, monitors the<br />
occupancy of the input and output memories and generates<br />
“empty events” on special error conditions. A complete<br />
description of the Event Builder design is given in [2].<br />
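The local event building described above can be illustrated schematically. The fragment record layout and field names below are invented for the sketch; the only facts carried over from the text are the 17 inputs, the synchronization check, and the "empty event" fallback on error.<br />

```python
N_INPUTS = 17  # one fragment per ROSE board in the crate

def build_event(fragments):
    """fragments: list of dicts {'event': n, 'data': bytes}, one per input.
    Returns (event_number, payload, ok)."""
    if len(fragments) != N_INPUTS:          # missing fragment
        return None, b"", False
    numbers = {f["event"] for f in fragments}
    if len(numbers) != 1:                   # fragments out of sync:
        return numbers, b"", False          # -> "empty event" condition
    payload = b"".join(f["data"] for f in fragments)
    return numbers.pop(), payload, True     # one DCC event, ready to ship
```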
Several technological choices have been made to meet the DCC<br />
performance requirements. All inputs use LVDS Channel<br />
Link, which ensures quality and reliability while reducing the<br />
number of interconnection pins. The Input Handlers, Event<br />
Builder and Output Handlers are implemented in<br />
re-programmable logic circuits (ALTERA). The output ports are<br />
S-Link-64 compatible, the protocol adopted for<br />
transmission to the central DAQ. The DCC Internal Data Bus,<br />
bridging the input memories to the output link, is now being<br />
designed for a throughput of 528 MB/s.<br />
4. DCC MODELING ...<br />
The modeling of the DCC hardware was done using the<br />
Rational Rose RealTime software.<br />
From the DCC conceptual design three main classes emerge:<br />
the Input Handler, the Event Builder and the Output Handler.<br />
These classes were modeled on so called “capsules” that<br />
reproduce the behavior of the three classes using a real time<br />
simulation. To complete the modeling, two more classes are<br />
needed: a Clock capsule that emulates the clock and the L1A<br />
trigger, and a Data capsule that uses the ECAL raw data as input. Each of<br />
these classes allows us to model, configure and modify the<br />
main parameters of the DCC design.<br />
The Input Handler capsule controls the occupancy of each<br />
input memory, allows the segmentation of each input memory<br />
(the iFIFOs) to be modified, and sets up the handshake<br />
timing.<br />
The Event Builder performs all the data checks, builds the<br />
DCC event from the different selected inputs and updates the<br />
status and error registers.
Figure 4: DCC modeling overview<br />
The Output Handler controls the communication with the<br />
output link and sends each entire event to the DAQ.<br />
The Data capsule reads from a file the input data. The data<br />
used on the modeling and simulation is the physics data<br />
generated by the ORCA software, as referred before.<br />
The Clock capsule controls the clock frequency and the L1A<br />
trigger generation.<br />
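The capsule structure described above can be rendered as a toy pipeline. This is an assumption-laden sketch, not the Rose RealTime model: the fragment counts, FIFO depth and build cadence are illustrative values, and only the roles (Data source feeding bounded iFIFOs, an Event Builder draining them, an implicit Output Handler) follow the text.<br />

```python
from collections import deque

class InputHandler:
    """Bounded input FIFO (the iFIFO); overflow is the failure mode
    that the hardware simulation monitors."""
    def __init__(self, depth):
        self.fifo, self.depth = deque(), depth
    def push(self, frag):
        if len(self.fifo) >= self.depth:
            raise OverflowError("iFIFO overflow")
        self.fifo.append(frag)
    def pop(self):
        return self.fifo.popleft()

def simulate(events, n_inputs=3, depth=2, build_every=2):
    """Push one fragment per input per trigger; the Event Builder
    drains the iFIFOs every 'build_every' triggers. Returns the
    number of events built."""
    handlers = [InputHandler(depth) for _ in range(n_inputs)]
    built = 0
    for i, ev in enumerate(events, 1):
        for h in handlers:
            h.push(ev)               # Data capsule -> Input Handlers
        if i % build_every == 0:     # Event Builder pass
            while handlers[0].fifo:
                _ = [h.pop() for h in handlers]
                built += 1           # Output Handler would ship this
    return built
```

With a too-shallow FIFO relative to the build cadence, the sketch reproduces the input-memory overflow discussed in the simulation results.<br />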
5. ...AND SIMULATION RESULTS<br />
In this section we present the first (preliminary) results<br />
from the simulation of the DCC conceptual design.<br />
Two simulated data sets were applied to the Data capsule.<br />
Both data sets were obtained with the selective readout high<br />
threshold set to 2.5 GeV and the low threshold set to 1 GeV.<br />
Zero suppression thresholds of 1σ and 0σ were used. The<br />
corresponding ECAL event sizes were 53 kBytes and<br />
65 kBytes per triggered event, respectively.<br />
Figure 5: An example of input memory overflow. The plot shows<br />
the event time within the system (total time, and time until entering<br />
the Event Builder) as a function of the event number.<br />
Various simulations for different options of the DCC Internal<br />
Bus bandwidth, Event Builder processing speed and Output<br />
Link speed were performed. The maximum simulated DCC<br />
bandwidth was 320 MB/s. For each condition, the occupancy<br />
of the input and output memories and the event latency inside<br />
the DCC were investigated. Figure 5 shows an example of a<br />
simulation where an overflow of the input memories is<br />
observed (simulated bandwidth was 160 MB/s).<br />
Simulation results showed that, relative to the 320 MB/s<br />
modelled design, either the input memory or the processing speed<br />
needs to be increased. For 65 kBytes per triggered event we<br />
estimated an overflow probability of 5×10^-8 (figure 6). This<br />
corresponds to an overflow condition every 3 min, which is far<br />
from acceptable (even more so if we aim at ~100 kBytes per event).<br />
Note that no trigger rules have been applied to the<br />
simulation inputs, and therefore these results are worst-case<br />
figures. Further results are expected in the near future for the<br />
target bandwidth of 528 MB/s.<br />
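The "every 3 min" figure follows from the quoted overflow probability and the L1A rate. A minimal check of that conversion, using only numbers from the text:<br />

```python
# Convert the per-trigger overflow probability into a mean time
# between overflows, using the 100 kHz L1A rate from section 1.

L1A_RATE_HZ = 100e3
P_OVERFLOW = 5e-8        # probability of 32 events in the iFIFO

overflow_rate = P_OVERFLOW * L1A_RATE_HZ  # overflows per second
mean_time_s = 1.0 / overflow_rate
print(mean_time_s / 60.0)  # ~3.3 minutes between overflows
```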
6. CONCLUSIONS<br />
Detailed ORCA simulations showed that a combination of<br />
Selective Readout and Zero Suppression techniques reduces<br />
the CMS-ECAL average data volume to the target value<br />
(100 kB/event) without losing significant physics data.<br />
A conceptual design of the ECAL data concentrator was<br />
developed, aiming at a data throughput of 528 MB/s.<br />
Simulations of the DCC hardware, using physics data as<br />
input, were used to guide the design. Further simulations are<br />
under way to validate the final design choices.<br />
7. REFERENCES<br />
Figure 6: iFIFO occupancy distribution, for Selective Readout with<br />
Zero Suppression at 0σ. The probability of 32 events in the iFIFO<br />
is 5×10^-8.<br />
[1] T. Monteiro, Ph. Busson, W. Lustermann, J. C. Silva, C. Tully,<br />
J. Varela, “Selective Readout in the CMS ECAL”, in<br />
Proceedings of the Fifth Workshop on Electronics for LHC<br />
Experiments, Snowmass, Colorado, USA, 1999.<br />
[2] J. C. Da Silva, J. Varela, “Description of the Data Concentrator<br />
Card for the CMS-ECAL”, CMS Note in preparation.<br />
Vertical Slice of the ATLAS Detector Control System<br />
H.Boterenbrood 1 , H.J. Burckhart 2 , J.Cook 2 , V. Filimonov 3 , B. Hallgren 2 , F.Varela 2a<br />
1 NIKHEF, Amsterdam, The Netherlands, 2 CERN, Geneva, Switzerland, 3 PNPI, St.Petersburg, Russia,<br />
Abstract<br />
The ATLAS Detector Control System consists of two<br />
main components: a distributed supervisor system, running on<br />
PCs, called Back-End system, and the different Front-End<br />
systems. For the former the commercial Supervisory Control<br />
And Data Acquisition system PVSS-II has been selected. As<br />
one solution for the latter, a general purpose I/O concentrator<br />
called Embedded Local Monitor Board has been developed.<br />
This paper describes a full vertical slice of the detector control<br />
system, including the interplay between the Embedded Local<br />
Monitor Board and PVSS-II. Examples of typical control<br />
applications will be given as well.<br />
I. SCOPE OF DCS<br />
The ATLAS Detector Control System (DCS) [1] must<br />
enable a coherent and safe operation of the ATLAS detector.<br />
It has also to provide interaction with the LHC accelerator and<br />
the external services such as cooling, ventilation, electricity<br />
distribution, and safety systems. Although the DCS will<br />
operate independently from the DAQ system, efficient bidirectional<br />
communication between both systems must be<br />
ensured. ATLAS consists of several subdetectors, which are<br />
operationally quite independent. DCS must be able to operate<br />
them both in stand-alone mode and in an integrated fashion as<br />
a homogeneous experiment.<br />
DCS is not responsible for safety, neither of personnel nor<br />
of equipment. It also does not deal with the data of the<br />
physics events.<br />
II. ARCHITECTURE OF DCS<br />
The ATLAS detector is hierarchically organised, starting<br />
with the subdetectors (e.g. Transition Radiation Tracker, Tile<br />
Calorimeter, etc.) and continuing down through their<br />
respective subsystems (e.g. barrel and end-cap parts, or<br />
High Voltage, Low Voltage and gas systems). This<br />
organisation has to be accommodated in the DCS architecture.<br />
The DCS equipment is geographically distributed in three<br />
different areas as shown in figure 1. The main control room is<br />
situated at the surface, in SCX1 and houses the supervisory<br />
stations for the operation of the detector. This equipment is<br />
connected via a LAN to the Local Control Stations (LCS)<br />
placed in the underground electronics room USA15, which is<br />
accessible during running of the experiment. The Front-End<br />
(FE) electronics in UX15 is exposed to radiation and a strong<br />
a Also at University of Santiago de Compostela, Spain<br />
magnetic field. This equipment is distributed over the whole<br />
volume of the detector with cable distances up to 200 m. The<br />
communication with the equipment in USA15 is done via<br />
fieldbuses.<br />
Figure 1: Architecture of ATLAS DCS<br />
A. Back-End System<br />
The highest level is the overall supervision as performed<br />
from the control room by the operator. Apart from the Human<br />
Interface it includes analysis and archiving of monitor data,<br />
‘automatic’ execution of pre-defined procedures and<br />
corrective actions, and exchange of data with systems outside<br />
of DCS. The middle level consists of LCSs, which operate a<br />
sub-detector or a part of it quite independently. These two<br />
levels form the Back-End (BE) system. The commercial<br />
Supervisory Control And Data Acquisition (SCADA) package<br />
PVSS-II [2] has been chosen, in the framework of the Joint<br />
COntrols Project (JCOP) [3] at CERN, to implement the BE<br />
systems of the 4 LHC experiments. PVSS-II gathers<br />
information from the FE equipment and offers supervisory<br />
control functions such as data processing, execution of control<br />
procedures, alert handling, trending, archiving and web<br />
interface. It has a modular architecture based on functional<br />
units called managers, which perform these individual tasks.<br />
PVSS-II is a device-oriented product where devices are<br />
modelled by structures called data-points. Applications can be<br />
distributed over many stations on the network running on both<br />
Linux and WNT/2000. These features of modelling and<br />
distribution facilitate the mapping of the control system onto<br />
the different subdetectors. Due to the large number of<br />
channels to be handled in ATLAS, the event-driven<br />
architecture of the product was a crucial criterion during the
selection process. PVSS-II also provides a wide set of<br />
standards to interface hardware (OPC, fieldbus drivers) and<br />
software (ODBC, DDE, DLL, API).<br />
B. Front-End System<br />
The responsibility for the FE systems is with the subdetector<br />
groups. In order to minimise development effort and<br />
to ease maintenance load, a general purpose I/O system,<br />
called Embedded Local Monitor Board (ELMB) has been<br />
developed, which is described in detail in another contribution<br />
to this workshop [4]. It comprises ADC and digital I/O<br />
functions, is radiation tolerant for use outside of the<br />
calorimeters of the LHC detectors and can operate in a strong<br />
magnetic field. Further functions such as DAC and interlock<br />
capability can be added. The readout is done via the fieldbus<br />
CAN [5], which is an industry standard with well-supported<br />
commercial hardware down to the chip level. Due to its very<br />
performant error detection and recovery and its flexibility,<br />
CAN is particularly suited for distributed I/O as needed by the<br />
LHC detectors. CANopen is used as high-level<br />
communication protocol on the top of the physical and data<br />
link layers defined by CAN. It comprises features such as<br />
network management and supervision, a wide range of<br />
communication objects for different purposes (e.g. real-time<br />
data transfer, configuration) and special functions for network<br />
synchronisation, time stamping, error handling, etc.<br />
C. Connection FE-BE<br />
The interface PVSS-CANopen is based on the industry<br />
standard OPC (OLE for Process Control) [6]. OPC is<br />
middleware based on Microsoft DCOM (Distributed<br />
Component Object Model), which comprises a set of interfaces<br />
designed to facilitate the integration of control equipment into<br />
Windows applications. OPC is supported by practically all<br />
SCADA products. This standard implements a multi-client/multi-server<br />
architecture where a server holds the<br />
process data or OPC items in the so-called address space and<br />
a client may read, write or subscribe to them using different<br />
data access mechanisms (synchronous, asynchronous, refresh,<br />
or subscribe). An OPC server may organise the items in<br />
groups on behalf of the client assigning some common<br />
properties (update rate, active, call-back, dead-band, etc.).<br />
Another important aspect of OPC is that it transmits data only<br />
on change, which results in a substantial reduction of the data<br />
traffic.<br />
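The on-change, deadband-filtered transmission described above can be illustrated with a toy item. The class and callback shapes are assumptions for the sketch, not the OPC API; only the behaviour (no traffic for unchanged or sub-deadband values) follows the text.<br />

```python
class OpcItem:
    """Toy OPC-style item: notify subscribers only on change."""
    def __init__(self, deadband, callback):
        self.deadband = deadband  # minimum change worth reporting
        self.callback = callback  # client subscription callback
        self.last = None
    def update(self, value):
        """Write a new process value; notify the client only when the
        change exceeds the deadband -- unchanged values cause no traffic."""
        if self.last is None or abs(value - self.last) > self.deadband:
            self.last = value
            self.callback(value)

sent = []
item = OpcItem(deadband=0.5, callback=sent.append)
for v in (10.0, 10.1, 10.2, 11.0, 11.0):
    item.update(v)
# Only 10.0 and 11.0 cross the deadband: three of five updates suppressed.
```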
Several firms offer CANopen OPC servers, but those<br />
investigated are based on their own special hardware interface<br />
and they support only limited subsets of the CANopen<br />
protocol. Although these subsets fulfil most of the industrial<br />
requirements, they do not provide all functionality required in<br />
high energy physics. Therefore we have developed a<br />
CANopen OPC Server supporting the CANopen device<br />
profiles required. This package is organised in a part which<br />
acts like a driver for CANopen and is specific to the PCI-<br />
CAN interface card chosen, and a hardware-independent part<br />
which implements all OPC interfaces and main loops<br />
handling communication with external applications. This<br />
CANopen-OPC server imports from a configuration file all<br />
information needed to define its address space, the bus<br />
topology and the communication parameters.<br />
III. IMPLEMENTATION OF VERTICAL SLICE<br />
A full “vertical slice” of the ATLAS DCS has been<br />
implemented, which ranges from the I/O point (sensor or<br />
actuator) up to the operator interface comprising all elements<br />
described above, like ELMB, CAN, OPC Server and PVSS-II.<br />
The software architecture of the vertical slice is shown in<br />
figure 2.<br />
Figure 2: Software Architecture<br />
The system topology in terms of CANbus, ELMBs and<br />
sensors is modelled in the PVSS-II database using data-points.<br />
These data-points are connected to the items in the CANopen-<br />
OPC server address space. Setting a data-point in PVSS-II<br />
will trigger the OPC server to send the appropriate CANopen<br />
message to the bus. In turn, when an ELMB sends a<br />
CANopen message to the bus, the OPC server will decode it,<br />
set the respective item in its address space and hence transmit<br />
the information to a data-point in PVSS-II. The SCADA<br />
application carries out the predefined calculations to convert<br />
the raw data to physical units, possibly trigger control<br />
procedures, and trend and archive the data. The vertical slice<br />
also comprises PVSS-II panels to manage the configuration,<br />
settings and status of the bus.<br />
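The round trip just described, from a PVSS-II data-point write down to a CANopen message and back up again, can be sketched as a pair of lookup tables. All class and field names here are illustrative assumptions; only the bidirectional data-point ↔ OPC-item ↔ COB-ID mapping follows the text.<br />

```python
class CanBus:
    """Stub bus that records transmitted frames."""
    def __init__(self):
        self.frames = []
    def send(self, cob_id, payload):
        self.frames.append((cob_id, payload))

class OpcServer:
    """Maps OPC item names to CANopen COB-IDs, in both directions."""
    def __init__(self, bus, item_to_cobid):
        self.bus = bus
        self.item_to_cobid = item_to_cobid
        self.cobid_to_item = {v: k for k, v in item_to_cobid.items()}
    def write_item(self, item, value):
        # Setting a data-point in PVSS -> CANopen message on the bus
        self.bus.send(self.item_to_cobid[item], value)
    def on_frame(self, cob_id, value, datapoints):
        # ELMB message on the bus -> data-point update in PVSS
        datapoints[self.cobid_to_item[cob_id]] = value
```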
This vertical slice has been the basis for several control<br />
applications of ATLAS subdetectors like the alignment<br />
systems of the Muon Spectrometer, the cooling system of the<br />
Pixel subdetector, the temperature monitoring system of the<br />
Liquid Argon subdetector and the calibration of the Tile<br />
Calorimeter at a test beam. As an example the latter will be<br />
discussed in the next paragraph.<br />
A subset of the Tile Calorimeter modules needs to be<br />
calibrated with particles in a test beam. The task of DCS is to<br />
monitor and control the three different subsystems, the high<br />
voltage, the low voltage and the cooling system. A total of<br />
seven ELMBs were connected to the CAN bus. For the low<br />
voltage system, the standard functionality of the vertical slice
was easily extended in order to drive analogue output<br />
channels by means of off-board DAC chips. This application<br />
also interfaced to the CERN SPS accelerator in order to<br />
retrieve the beam information for the H8 beam line. Data<br />
coming from all subsystems were archived in the PVSS-II<br />
historical database and then passed to the Data Acquisition<br />
system for event data by means of the DCS-DAQ<br />
Communication software (DDC) [7].<br />
Figure 3 shows the PVSS graphical user interface of this<br />
application. The states of the devices composing the DCS are<br />
colour-coded and the current readings of the operational<br />
parameters are also shown in the panel. Dedicated panels for<br />
each subsystem and graphical interfaces to the historical<br />
database and for alert handling are also provided. The system<br />
has proven to work very reliably.<br />
Figure 3: Control Panel for Tile Calorimeter Calibration<br />
IV. CAN BRANCH TEST<br />
Several thousand ELMB nodes will be used in ATLAS,<br />
the largest sub-detector alone comprising 1200 nodes. When<br />
organising them in CANbuses, conflicting requirements like<br />
performance, cost, and operational aspects have to be taken<br />
into account. For example, a higher number of ELMBs per<br />
branch – the maximum number possible is 127 – reduces the<br />
cost, but also reduces performance and increases the<br />
operational risk, i.e. in case of failure a bigger fraction of the<br />
detector may become inoperative. Additionally, several<br />
CAN messages having different priorities may be transferred<br />
at the same time. This calls for an efficient design of the bus<br />
to minimise the collisions of the frames. The priority is<br />
defined by the so-called Communication Object Identifier<br />
(COB-ID) in CANopen, which is built from the node<br />
identifier and the type of message.<br />
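The COB-ID construction just described, and why it sets priority, can be shown in a few lines. The function-code values follow the CANopen pre-defined connection set (matching the 0x180/0x200/0x280 bases in table 1); lower identifiers win bus arbitration, so the function code fixes the priority class.<br />

```python
# 11-bit CAN identifier = 4-bit function code | 7-bit node identifier.
FUNCTION_CODES = {
    "emergency": 0x1,  # base 0x080
    "tpdo1":     0x3,  # base 0x180 (analogue inputs in table 1)
    "rpdo1":     0x4,  # base 0x200
    "tpdo2":     0x5,  # base 0x280
}

def cob_id(function, node_id):
    assert 1 <= node_id <= 127, "CANopen node-ids run from 1 to 127"
    return (FUNCTION_CODES[function] << 7) | node_id
```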
A 200m long CAN branch with 16 ELMBs has been set up in<br />
order to measure its performance in terms of data volume and<br />
readout speed, and to identify possible limiting elements in<br />
the readout chain. The set-up used is shown in figure 4. The<br />
ELMBs were powered via the bus using a 9 V power supply.<br />
The total current consumption was about 0.4 A. The total<br />
number of channels and their transmission types are given in<br />
table 1.<br />
Figure 4: Set-up of CAN branch test<br />
COB-ID Type Channels Mode<br />
0x180 + NodeId Analogue Input 1024 Sync<br />
0x200 + NodeId Digital Input 128 Async + Sync<br />
0x280 + NodeId Digital Output 256 Async<br />
Table 1: I/O points of the CAN branch test<br />
Due to the large number of channels, the analogue inputs of<br />
the ELMBs were not connected to the sensors. A special<br />
version of the ELMB software was used to generate random<br />
ADC data ensuring new values at each reading and therefore<br />
maximising the traffic through the OPC server. The digital<br />
output and input ports were interconnected to check the<br />
transmission of CAN messages with different priorities on the<br />
bus, i.e. output lines can be set while inputs are being read.<br />
Figure 5 shows the bus activity after a CANopen SYNC<br />
command is sent to the bus. All ELMBs try to reply to this<br />
message at the same time causing collisions of the frames on<br />
the bus. The CAN collision arbitration mechanism handles<br />
them according to the priority of the messages. In this figure,<br />
the Bus Period is defined as the time taken for all synchronous<br />
messages to be received from all nodes after the SYNC<br />
command has been sent to the bus. δ defines the time between<br />
consecutive CAN frames on the bus and is a function of the<br />
bus speed, which is limited by the CANbus length (typically<br />
0.7 ms at 125 kbaud). The time between successive analogue<br />
channels from a single ELMB, which is dependent upon the<br />
ADC conversion rate, is given by ∆. The OPC Server<br />
generates the SYNC command at predefined time intervals<br />
and this defines the readout rate.<br />
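From the quantities just defined, a rough lower bound on the bus period can be worked out. The inputs are the δ of ~0.7 ms per frame at 125 kbaud and the 1024 synchronous analogue channels of table 1 (16 ELMBs × 64 channels, an assumed even split); in practice the ADC conversion time ∆ can dominate instead, so this is only the bus-limited bound.<br />

```python
# Bus-limited lower bound on the Bus Period after a SYNC.
FRAME_TIME_S = 0.7e-3   # delta: time per CAN frame at 125 kbaud
N_ELMB = 16
CH_PER_ELMB = 64        # assumed: 1024 sync channels / 16 nodes

messages_per_sync = N_ELMB * CH_PER_ELMB        # 1024 frames
bus_period_s = messages_per_sync * FRAME_TIME_S # ~0.72 s if bus-limited
print(bus_period_s)
```

The SYNC interval must stay above this period (or the larger ADC-limited one) to avoid the pile-up effects reported in the results below.<br />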
The SCADA application was distributed over two PCs<br />
running WinNT (128 MB of RAM and 800 MHz clock<br />
frequency). The hardware was connected to a PC acting as<br />
Local Control Station (LCS), where the OPC server and<br />
control procedures were running. All values were archived to<br />
the PVSS-II historical database. The second PC, acting as<br />
operator station, was used to access the database of the LCS
Figure 5: Bus activity after a SYNC<br />
and to perform data analysis and visualisation. The<br />
communication between the two systems was internally<br />
handled by PVSS-II. A third PC, running as a CAN analyser,<br />
was used to log all CAN frames on the bus to a file for later<br />
comparison with the values stored in the SCADA database.<br />
This CAN analyser is a powerful diagnostic tool. It allows for<br />
debugging of the bus enabling visualisation of the traffic and<br />
sending of messages directly to the nodes.<br />
The test was performed for different settings of the ADC<br />
conversion rate and of the update rate of the OPC server. This<br />
parameter defines the polling rate of the internal cache of the<br />
OPC server for new data to be transferred to the OPC client.<br />
The readout rate was also varied from values much greater<br />
than the bus period down to a value close to it. The CPU<br />
behaviour was monitored under these sets of conditions.<br />
We have observed excellent performance of the ATLAS DCS<br />
vertical slice at low conversion rates (1.88 and 3.71 Hz). All<br />
messages transmitted to the bus have been logged in the<br />
PVSS historical database. This result is independent of the<br />
SYNC interval as long as this parameter is kept above the bus<br />
period. However, some ATLAS applications call for a faster<br />
readout. Results at 15.1 and 32.5 Hz show a good behaviour<br />
when the SYNC interval is higher than the bus period.<br />
Performance deteriorates when the SYNC interval tends to the<br />
bus period. Crosscheck with the CAN analyser files showed<br />
that many messages were not in the PVSS-II database. Two<br />
major problems were identified: overflows in the read buffer<br />
of the NI-CAN interface card, and the PVSS-II archiving<br />
taking close to 100% of the CPU time while the avalanche of<br />
analogue channels is on the bus. It was also found that these<br />
results are very sensitive to the OPC update rate. The faster<br />
the update takes place, the more CPU time is required, limiting<br />
its availability for other processes such as the PVSS archiving.<br />
This suggests splitting the PVSS application in such a manner<br />
that only the OPC interface runs on the LCS while all<br />
archiving is handled higher up in the hierarchy shown in<br />
figure 4. However, further tests must be performed to<br />
address the limitation of each of the individual elements<br />
quantitatively.<br />
V. CONCLUSIONS AND OUTLOOK<br />
PVSS-II has been found to be a good solution to implement<br />
the BE system of the ATLAS DCS. It is device oriented and<br />
allows for system distribution to aid the direct mapping of the<br />
DCS hierarchy. The ELMB I/O module has been shown to<br />
fulfil the requirements of the majority of the sub-detectors.<br />
Both PVSS-II and the ELMB are well accepted by the<br />
ATLAS sub-detector groups. The vertical slice comprises<br />
these two components interconnected via the CANopen OPC<br />
Server. Many applications have been developed using this<br />
vertical slice and they have shown that it offers high<br />
flexibility, good balance of the tasks, reliability and<br />
robustness. A full branch test has been performed with the<br />
aim of estimating its performance. Good results were obtained<br />
for low ADC conversion rates. Tests at higher ADC<br />
conversion rates allowed the identification of several<br />
problems, such as the read buffer size of the PCI-CAN card<br />
causing overflows of CAN messages. For this and other<br />
reasons, such as cost and architecture, this interface card will<br />
be replaced. CPU usage increases to unacceptable levels with<br />
high data flow when the OPC Server and archiving are both<br />
run on a single processor. The vertical slice tests have helped<br />
to better define the load distribution amongst different PVSS-<br />
II systems.<br />
Further tests are required to define the CAN topology to be<br />
used in ATLAS. The main issues to be addressed are: the<br />
system granularity in terms of number of buses per PC, the<br />
number of ELMBs per bus (between 16 and 32 seems to fulfil<br />
most requirements) and powering. In addition, bus behaviour<br />
needs to be investigated further, e.g. the ELMB may only<br />
send data on change. In response to a sync, a status message<br />
would be sent giving a bit flag for each channel of the ELMB<br />
indicating whether an error had occurred. If values exceed<br />
pre-defined acceptable limits, then this could also be signaled<br />
by the ELMB. Bus supervision and automatic recovery must<br />
also be investigated. It must be possible to reset individual<br />
nodes, reset the bus or perform power cycling, depending<br />
upon the severity of any error encountered.<br />
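The status message proposed above, with one bit flag per ELMB channel, can be sketched as follows. The payload layout (little-endian bitmask, one error bit per channel) is an assumption for illustration only, not the actual CANopen object definition:

```python
# Hypothetical sketch of decoding a per-channel error bitmask, as in the
# sync-response status message discussed above. The 8-bit-per-byte,
# little-endian layout is an illustrative assumption.

def channels_in_error(payload: bytes) -> list[int]:
    """Return the channel numbers whose error bit is set.

    payload -- bitmask where bit i of byte b flags channel 8*b + i.
    """
    flagged = []
    for byte_index, byte in enumerate(payload):
        for bit in range(8):
            if byte & (1 << bit):
                flagged.append(8 * byte_index + bit)
    return flagged

# Example: channels 0 and 9 flagged -> bytes 0x01, 0x02
assert channels_in_error(bytes([0x01, 0x02])) == [0, 9]
```

A supervisory task could then decide, per flagged channel, whether a node reset, bus reset or power cycle is warranted.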
VI. REFERENCES<br />
[1] H.J. Burckhart, “Detector Control System”, Fourth<br />
Workshop on Electronics for LHC Experiments, Rome<br />
(Italy), September 1998, p. 19-23.<br />
[2] PVSS-II, http://www.pvss.com/<br />
[3] JCOP, http://itcowww.cern.ch/jcop/<br />
[4] B.Hallgren et al., “The Embedded Local Monitor Board<br />
(ELMB) in the LHC Front-End I/O Control System”,<br />
contribution to this conference.<br />
[5] CAN in Automation (CiA), D-91058 Erlangen<br />
(Germany). http://www.can-cia.de/<br />
[6] OLE for Process Control, http://www.opcfoundation.org/<br />
[7] H.J. Burckhart et al., “Communication between<br />
Trigger/DAQ and DCS”, International Conference on<br />
Computing in High Energy and Nuclear Physics, Beijing<br />
(China) September 2001, p. 109-112.
A rad-hard 8-channel 12-bit resolution ADC<br />
for slow control applications in the LHC environment<br />
G. Magazzù 1 ,A.Marchioro 2 ,P.Moreira 2<br />
1 INFN-PISA, Via Livornese 1291 – 56018 S.Piero a Grado (Pisa), Italy (Guido.Magazzu@pi.infn.it)<br />
2 CERN, 1211 Geneva 23, Switzerland<br />
Abstract<br />
The damage induced by radiation in detector sensors<br />
and electronics requires that critical environmental<br />
parameters such as leakage currents of the silicon<br />
detectors, local temperatures and supply voltages are<br />
carefully monitored. For the CMS central tracker, an<br />
ASIC, the Detector Control Unit (DCU) has been<br />
developed to monitor these quantities in a commercial<br />
sub-micron technology. A set of layout design rules<br />
guarantees for this device the radiation hardness that is<br />
requested for the LHC environment. The key circuit of<br />
the DCU is an 8-channel 12-bit resolution ADC. The<br />
structure and performance of this ADC are described<br />
in this work.<br />
I. INTRODUCTION<br />
The CMS tracker silicon micro-strip detectors, when<br />
exposed to the LHC high levels of radiation, are subject<br />
to a number of damaging phenomena. The main effects<br />
are an increase of the detector leakage current and a<br />
change in the detector depletion voltage. Maintaining the<br />
detector integrity and efficiency over their expected 10-year<br />
lifetime requires careful monitoring of the detector<br />
environmental conditions. A VLSI circuit, the Detector<br />
Control Unit (DCU), has been developed for that purpose<br />
in the CMS tracker.<br />
The detector hybrid block diagram is shown in Figure<br />
1. The figure represents the global relations between the<br />
DCU, the Si micro-strip detectors and the readout chips<br />
(the APVs) [1]. The detector leakage currents are<br />
monitored via the voltage drop across a sensing resistor.<br />
This voltage is measured by the DCU Analogue-To-Digital<br />
Converter (ADC) and can then be read by the slow control<br />
system using the DCU I2C interface [2]. The sensor temperature<br />
is measured at two different points using two Negative<br />
Temperature Coefficient (NTC) thermistors in parallel. A<br />
third NTC thermistor is used to monitor the temperature<br />
near the APV ICs. Two temperature and power supply<br />
independent currents (20µA and 10µA) are generated<br />
inside the DCU and used to drive the thermistors. The<br />
APV power supply voltages (2.5V and 1.25V) are<br />
monitored by the DCU through two external resistive<br />
dividers.<br />
Figure 1: CMS Tracker monitoring system<br />
A single ADC is used inside the DCU to convert the<br />
several input voltages using an analogue 8-to-1<br />
multiplexer.<br />
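The thermistor readout principle described above can be illustrated with a short sketch: the DCU drives the NTC with a known current and the ADC measures the resulting voltage, from which the temperature follows via the NTC beta model. The thermistor parameters (R0, beta) below are assumed values for illustration, not those of the CMS tracker hardware:

```python
import math

# Illustrative sketch of converting a DCU ADC reading of an NTC
# thermistor, driven by a known current, into a temperature.
# r0, t0 and beta are assumed example values; the 10 uA drive
# current is one of the two currents quoted in the text.

def ntc_temperature_c(v_measured, i_drive=10e-6, r0=10e3,
                      t0=298.15, beta=3435.0):
    """Invert the NTC beta model R(T) = R0 * exp(beta*(1/T - 1/T0))."""
    r = v_measured / i_drive                      # thermistor resistance
    inv_t = 1.0 / t0 + math.log(r / r0) / beta    # beta-model inversion
    return 1.0 / inv_t - 273.15                   # kelvin -> Celsius

# With these assumed values, a 10 kOhm NTC at 10 uA drops 0.1 V at 25 C:
assert abs(ntc_temperature_c(0.1) - 25.0) < 1e-6
```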
II. DCU DESCRIPTION<br />
The global architecture of the DCU is shown in Figure 2.<br />
Figure 2: DCU architecture<br />
The DCU is composed of the following blocks: an 8-to-1<br />
analogue multiplexer, a 12-bit ADC, an I2C interface,<br />
a band-gap voltage reference and two temperature and<br />
power supply independent current sources. In the<br />
final version of the ASIC a diode based integrated<br />
temperature sensor has been added. Due to lack of<br />
experimental results this last feature will not be described<br />
here.<br />
This work will be focused on the operation and<br />
performance of the analogue multiplexer and the ADC.<br />
The design specifications are summarised in Table 1.
Table 1: DCU Specifications<br />
# of channels 8<br />
Resolution 12 bits<br />
Input Range GND→1.25V<br />
INL 1LSB<br />
DNL 1LSB<br />
Power Dissipation
Figure 5: DCU test-board block diagram<br />
Figure 6: DCU test setup<br />
All the digital functions of the IC have been<br />
successfully tested.<br />
The A/D converter parameters like gain, Integral Non-<br />
Linearity (INL), Differential Non-Linearity (DNL) and<br />
Transition Noise (TN) RMS, have been measured for the<br />
two operating modes on all of the input channels. The<br />
evaluation and characterisation tests were automated<br />
using specific test programs running in the microcontroller.<br />
The test results were read from the microcontroller after<br />
completion via the RS232 interface.<br />
A sequence of 128 input voltages has been applied in<br />
the two input ranges GND→1.25V in the LIR mode and<br />
1.25V→VDD in the HIR mode. In both operating modes<br />
the gain is between 2.18 and 2.20 LSB/mV, corresponding<br />
to a resolution of about 500 µV/LSB.<br />
Figure 7 shows the A/D INL for input voltages in the<br />
0 to 1.25V range (LIR). The periodic “saw-tooth” shape<br />
observed in the picture reveals no intrinsic ADC nonlinearity<br />
over this range, where the INL is less than 1 LSB<br />
according to the specifications. If the extended range, 0 to<br />
2.5 V, is taken, the non-linearity due to the limited common<br />
mode range of the comparator is revealed, as can be seen<br />
in Figure 8.<br />
Figure 7: ADC INL in the LIR mode (input range =<br />
GND→1.25V)<br />
Figure 8: ADC INL in the LIR mode (input range =<br />
GND→VDD); the valid working range in LIR mode is marked<br />
The differential non-linearity (Figure 9) has been<br />
evaluated and reveals a monotonic A/D converter<br />
characteristic and no missing codes.
Figure 9: ADC DNL in the LIR mode (input range =<br />
GND→1.25V)<br />
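The INL and DNL figures quoted above can be derived from measured code transition levels; a minimal sketch follows. The synthetic transition voltages are for illustration only, a real characterisation would use the recorded ramp data:

```python
# Sketch of deriving INL and DNL (in LSB) from measured code
# transition levels, as in the characterisation described above.

def inl_dnl(transitions, lsb):
    """transitions[k] = input voltage at which code k-1 -> k occurs."""
    n = len(transitions)
    # DNL: deviation of each code width from one ideal LSB
    dnl = [(transitions[k + 1] - transitions[k]) / lsb - 1.0
           for k in range(n - 1)]
    # INL: deviation from an endpoint-fit line through the
    # first and last transitions
    ideal_lsb = (transitions[-1] - transitions[0]) / (n - 1)
    inl = [(transitions[k] - transitions[0] - k * ideal_lsb) / lsb
           for k in range(n)]
    return inl, dnl

# Perfectly uniform transitions give zero INL and DNL:
inl, dnl = inl_dnl([k * 0.5e-3 for k in range(8)], lsb=0.5e-3)
assert all(abs(x) < 1e-9 for x in inl + dnl)
```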
Finally, 1024 samples of a fixed voltage were taken to<br />
evaluate the internal A/D noise (transition noise). From<br />
Figure 10 it can be concluded that the transition noise has<br />
an RMS value smaller than one LSB. The RMS of the<br />
transition noise can be evaluated applying a sequence of<br />
very small voltage steps to the ADC input and evaluating<br />
the steepness of the transition between two adjacent ADC<br />
outputs. This analysis leads to a noise RMS of around 0.25<br />
LSB.<br />
Figure 10: ADC output distribution for a fixed input<br />
voltage<br />
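The transition-steepness method mentioned above can be sketched briefly: near a code edge, the probability of reading the higher code follows a Gaussian CDF in the input voltage, so the noise sigma can be read off the slope at the 50% point. The numerical values below are synthetic, chosen only to check the method against the ~0.25 LSB figure quoted in the text:

```python
import math

# Sketch of estimating transition noise from the steepness of a code
# transition: dP/dV at the 50% point equals 1 / (sigma * sqrt(2*pi)).

def noise_sigma_lsb(v_step_lsb, p_below, p_above):
    """Estimate sigma (in LSB) from two probabilities measured one small
    voltage step apart, straddling the 50% point of a code transition."""
    slope = (p_above - p_below) / v_step_lsb          # dP/dV in 1/LSB
    return 1.0 / (slope * math.sqrt(2.0 * math.pi))

# Synthetic check with sigma = 0.25 LSB: P(v) = Phi(v / sigma)
sigma = 0.25
phi = lambda v: 0.5 * (1.0 + math.erf(v / (sigma * math.sqrt(2.0))))
dv = 1e-4
est = noise_sigma_lsb(dv, phi(-dv / 2), phi(dv / 2))
assert abs(est - sigma) < 1e-3
```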
Power dissipation has been evaluated: with the ADC<br />
working at the maximum acquisition rate, the absorbed<br />
power is less than 40 mW. A preliminary ADC<br />
temperature characterisation shows no significant changes<br />
in ADC performance with temperature.<br />
Several samples of the A/D converter have been<br />
irradiated with X-rays up to 10 Mrad (dose rate 25<br />
krad/min). No changes in the INL and in the transition<br />
noise have been observed. A gain decrease of<br />
0.4% / Mrad with the irradiation dose has been measured<br />
(see Figure 11).<br />
Figure 11 : A/D converter gain as a function of the dose<br />
IV. SUMMARY<br />
As part of the CMS tracker slow control system, a<br />
mixed-mode ASIC has been developed to monitor the<br />
detector leakage currents, temperatures and power supply<br />
voltages. The IC has been implemented in a commercial<br />
sub-micron technology using a special set of layout<br />
design rules to guarantee the level of radiation tolerance<br />
required in the LHC environment. The main building<br />
block of this IC is a 12-bit ADC whose characteristics<br />
have been described in this work. The ASIC has been<br />
irradiated with X-rays up to 10 Mrad with only minor<br />
changes in the circuit performance.<br />
A second version of the IC including an integrated<br />
temperature sensor has now been submitted for<br />
fabrication.<br />
V. REFERENCES<br />
[1] CMS Technical Design Report, CERN/LHCC/98-6<br />
(1998)<br />
[2] I2C Bus Specification, Signetics (1992)<br />
[3] B. Razavi, Principles of Data Conversion System<br />
Design, IEEE Press (1995)
Design specifications and test of the HMPID’s control system in the ALICE experiment.<br />
Abstract<br />
The HMPID (High Momentum Particle Identification<br />
Detector) is one of the ALICE subdetectors planned to take<br />
data at LHC, starting in 2006. Since ALICE will be located<br />
underground, the HMPID will be remotely controlled by a<br />
Detector Control System (DCS).<br />
In this paper we will present the DCS design,<br />
accomplished via GRAFCET (GRAphe Fonctionnel de<br />
Commande Etape/Transition), the algorithm to translate it into<br />
code readable by the PLC (the control device), and the first<br />
results of a prototype of the Low Voltage Control System.<br />
The results achieved so far prove that this way of proceeding<br />
is effective and time saving, since every step of the work is<br />
autonomous, making the debugging and updating phases<br />
simpler.<br />
I. INTRODUCTION<br />
The HMPID DCS can be considered as made of five main<br />
subsystems: High Voltage, Low Voltage, Liquid Circulation,<br />
Physical Parameters and Gas. Each of them requires a specific<br />
control and all of the controls have to be integrated into the<br />
ALICE DCS mainframe. The HMPID DCS will be<br />
represented via a single interface which will include the<br />
above-mentioned systems and will be part of the whole<br />
ALICE DCS.<br />
We will deal with three main subjects:<br />
1. Providing a common way to represent and design the<br />
control system<br />
2. Designing the Low Voltage control system<br />
3. Presenting the first results of tests performed on the<br />
Low Voltage System.<br />
A possible software architecture of the HMPID’s control<br />
is shown in Fig.1. It actually mirrors the hardware<br />
architecture, since one can distinguish the three main layers:<br />
Physical, Control and Supervisor, each characterised by a<br />
specific functionality [1].<br />
In fact, the lowest layer [2] will deal with PLC<br />
programming (by mean of Instruction List language) in order<br />
to read data from the physical devices (pressure and<br />
temperature sensors) and to send commands to actuators<br />
(switches, motors, valves).<br />
E. Carrone<br />
For the ALICE collaboration<br />
CERN, CH 1211 Geneva 23, Switzerland,<br />
Enzo.Carrone@cern.ch<br />
The Control layer permits the communication between the<br />
other two layers: indeed, it translates data from the bottom<br />
into a language understandable by the SCADA (Supervisory<br />
Control and Data Acquisition system) software and also<br />
translates commands coming from the top into a language<br />
understandable by the PLC. The communications among the<br />
layers are accomplished via an OPC (OLE for Process<br />
Control) server. In addition, the PVSS DBASE (a module of<br />
the SCADA software) stores data for subsequent retrieval.<br />
The supervisory level represents the highest control, since<br />
it runs control programs by means of Man Machine Interfaces<br />
remotely located.<br />
The three layers communicate over the Ethernet via the<br />
TCP/IP protocol.<br />
Figure 1: DCS software architecture
II. A SYSTEMATICAL APPROACH TO DCS DESIGN<br />
Since we have to program the whole DCS (meaning that<br />
we have to deal with all of the three layers, and program PLC<br />
as well as SCADA systems) it is compulsory to establish a<br />
very well defined way of designing the system. This becomes<br />
necessary as many people are going to intervene on<br />
the system itself; these people, in most cases, will not be<br />
control specialists. Clarity and portability are the two main<br />
concerns.<br />
In order to satisfy these needs, we have defined six<br />
fundamental steps required for the DCS design:<br />
1. Definition of the Operations List.<br />
The Operation List is the first tool we use to understand<br />
how the detector works. It contains as much detail<br />
as possible about the specifications of the system.<br />
The list has to be written in close collaboration with the<br />
designers of the system, who are the most valuable<br />
source for understanding the actions to be<br />
performed via the automatic control.<br />
2. Description of the process as a Finite State machine<br />
(FSM).<br />
This step represents the first attempt to interpret the<br />
system into a fashion closer to the control design: the<br />
Transitions Diagram describes the evolution of the system<br />
yet without going deep into the controls aspects, but<br />
giving a general idea.<br />
3. GRAFCET modelling.<br />
The GRAFCET language [3] is a further step towards the<br />
definition of the control system: not only is it a visual tool<br />
close to the FSM representation, but it is also a powerful<br />
language for describing any system. It<br />
does not matter whether one is going to program<br />
PLCs or SCADA: GRAFCET describes the system in a<br />
fashion which is completely independent of the<br />
hardware one will use. Furthermore, it is also simple and<br />
clear to non-control specialists. Among the other<br />
possibilities (Petri Nets above all), GRAFCET remains<br />
for us the best choice.<br />
4. Coding of GRAFCET into Instruction List.<br />
The PLCs adopted hereby belong to the family of Siemens<br />
S-300. However, the procedures are applicable to any<br />
PLC. Moreover, since GRAFCET allows the design of<br />
very complex systems, the PLC language which best suits<br />
the needs of managing complex instructions and<br />
execution speed is Instruction List (IL), included in<br />
the IEC 1131-3 standard [4]. In order to accomplish this task<br />
we developed an original algorithm to translate the<br />
GRAFCET unambiguously into IL. This step corresponds to the<br />
programming of the PLC.<br />
5. Check of the parameters read by the PLC<br />
Once the PLC runs its program, one needs to check how<br />
the program is running and the values read by, e.g., the<br />
ADC (Analog-to-Digital Converter) modules.<br />
Siemens PLCs are supplied with the Step7 programming<br />
environment, which comprises the Variable Table (VAT)<br />
reading utility. It means that one can display the variables<br />
read by the ADC modules directly on the workstation used<br />
for programming.<br />
6. Coding of the Man-Machine Interfaces into the<br />
SCADA PVSS environment.<br />
At this step the PLC is autonomously running the control<br />
program, but the operations have to be performed by the<br />
operator manually (e.g. pushing buttons). To operate the<br />
system remotely one needs to program an interface at high<br />
level, by means of synoptic panels where each<br />
functionality of the system is represented and the user can<br />
send commands, read values, generate historical trends<br />
and so on. These panels are programmed into the PVSS<br />
environment, which is the SCADA adopted by CERN for<br />
all the LHC experiments’ DCS.<br />
All the subdetectors’ DCS will merge into the most<br />
general control system, the Experiment Control System<br />
(ECS).<br />
III. THE LOW VOLTAGE SYSTEM<br />
The HMPID detector consists of seven modules, each<br />
measuring about 142 × 146 × 15 cm³ and including three radiator<br />
vessels, a Multi Wire Proportional Chamber (MWPC), the<br />
Front End Electronics (FEE) and the Read-Out Electronics.<br />
In [5] we have already reported some results from the<br />
Liquid Circulation sub-System, when the design phase was<br />
accomplished, along with some preliminary considerations on<br />
the High Voltage (HV) and Low Voltage (LV) subsystems.<br />
In the following we will focus on the Low Voltage control<br />
system, starting from the control of the Power Supply units up<br />
to the Man-Machine Interface.<br />
The system we will deal with represents a “custom”<br />
solution to provide the Low Voltage supply to the HMPID<br />
front-end and read-out electronics; as a result of the tests and<br />
the evaluations subsequently performed (costs, reliability,<br />
maintenance) we will be able to decide on the implementation<br />
of this solution for the whole detector.<br />
In order to guarantee continuity of operations, even in case<br />
of faults, the “custom” layout is intended to split the available<br />
power into different channels via a PLC. The power supply of<br />
each module has been divided into six Low Voltage and High<br />
Voltage segments, plus four further segments for the electronics<br />
circuits. In this layout, a fault of a single chip will not<br />
compromise the functioning of the entire module.<br />
A. The apparatus set up<br />
We set up a test bench station in order to carry out some<br />
tests on a single Low Voltage power supply segment. A<br />
schematic representation of the test bench is shown in Fig. 2.
Figure 2: DCS software architecture<br />
The power supply is an Eutron BVD720S, 0-8V, 0-25 A,<br />
0.1±1 dgt. The PLC belongs to the S-300 Siemens family,<br />
equipped with two ADC 12 bit modules. The dummy load is<br />
made of resistors which represent the LV segment, while the<br />
“sensing board” is a resistors network needed for the current<br />
detection and the signal conditioning.<br />
In fact, we measure the current drained by the load by<br />
means of the voltage drop on a “sensing resistor”; but, in<br />
order to overcome the common mode voltage UCM=2.5 V,<br />
characteristic of the ADC input preamplifier, a resistor<br />
network has been designed and assembled. Both the<br />
sensing resistor and the network providing the signal conditioning<br />
have been placed on the sensing board.<br />
Afterwards, the sensing board and the dummy load have<br />
been connected to the ADC module of the PLC, to get voltage<br />
and current values.<br />
Fig.3 shows the electrical diagram of one bipolar channel,<br />
including the sensing wires.<br />
Figure 3: Test bench wirings<br />
The scheme of the sensing board is shown in details in<br />
Fig.4.<br />
Figure 4: Sensing board scheme<br />
The new voltage values are evaluated according to the<br />
following equation:<br />
Vs+ − Vs− = Vin · (R2/(R1+R2) − R4/(R3+R4)) + Vsensing · R4/(R3+R4)<br />
The calibration of the sensing board lets us provide the<br />
correct algorithm to the PLC program in order to present<br />
the correct values of voltage and current in the VAT.<br />
The sensitivity obtained in this way amounts<br />
to 2.8 mA, which is enough to detect even a single FEE chip<br />
failure.<br />
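The current measurement principle described above reduces to Ohm's law on the sensing resistor. The 0.05 Ω resistance and the 0.14 mV resolvable drop used below are illustrative assumptions: the text only states the resulting 2.8 mA sensitivity.

```python
# Minimal sketch of the sensing-resistor current measurement: the load
# current is the voltage drop across the sensing resistor divided by its
# resistance. r_sense = 0.05 ohm is an assumed value for illustration.

def load_current(v_drop: float, r_sense: float = 0.05) -> float:
    """Ohm's law on the sensing resistor: I = V / R."""
    return v_drop / r_sense

# With these assumed values, a 0.14 mV resolvable drop maps to 2.8 mA:
assert abs(load_current(0.14e-3) - 2.8e-3) < 1e-9
```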
B. The LV control system<br />
According to the 6-steps list introduced above, first we<br />
study the system and write the Operations List; the most<br />
important constraint is given by the relationship with the High<br />
Voltage system: the ON/OFF switching sequence is the most<br />
critical aspect, along with the current and voltage values.<br />
When the LV chain has to be switched ON, since the FEE<br />
requires ±2.8 V, both these polarities must be supplied<br />
simultaneously.<br />
When the LV is switched OFF, the facing HV segment<br />
must be checked: it must be turned OFF before the LV. This<br />
sequence is mandatory to prevent FEE breakdowns due to<br />
charge accumulation on the MWPC cathode pads. (In fact the<br />
ground reference to the MWPC sense wires is ensured<br />
through the FE electronics, then the low voltage at the
corresponding FE electronics segment must be applied before<br />
the HV segment is switched ON).<br />
Current and voltage must be within ranges:<br />
Vmin &lt; Vload &lt; Vmax, Imin &lt; Iload &lt; Imax.<br />
If Iload &gt; Imax, then the corresponding HV-LV segments<br />
must be automatically switched OFF, according to the LV<br />
switching OFF sequence.<br />
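The interlock rules above can be sketched in a few lines. The function names and boolean inputs are illustrative, not the actual PLC symbols:

```python
# Sketch of the LV/HV interlock logic described above. Out-of-range
# current or voltage must trip the HV-LV pair, and the shutdown order
# is fixed: the facing HV segment goes OFF before the LV.

def must_trip(i_load, i_min, i_max, v_load, v_min, v_max) -> bool:
    """True when current or voltage is outside its allowed range."""
    return not (i_min < i_load < i_max and v_min < v_load < v_max)

def switch_off_sequence(hv_off: bool, lv_off: bool) -> list[str]:
    """Ordered actions for a safe shutdown: HV first, then LV."""
    actions = []
    if not hv_off:
        actions.append("HV_OFF")
    if not lv_off:
        actions.append("LV_OFF")
    return actions

assert must_trip(3.0, 0.0, 2.5, 2.8, 2.7, 2.9)       # overcurrent -> trip
assert switch_off_sequence(False, False) == ["HV_OFF", "LV_OFF"]
```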
The subsequent step is the design of the transitions<br />
diagram, as shown in Fig. 5.<br />
Figure 5: LV transitions diagram<br />
After the OFF state, the first state encountered is<br />
CALIBRATE, which is intended to set voltages and currents<br />
out of the power supply; it means that no power is yet given to<br />
the FEE. Then, the CONFIGURE state allows the user to choose<br />
how many (and which) segments to power. In STBY<br />
the HV power is checked: this state is indispensable for a<br />
correct shut down procedure of the LV.<br />
When the ON status is active, voltages and currents are<br />
monitored over all the FEE segments active at that moment.<br />
Whenever one of these values is out of range, the system<br />
goes into the ALARM state, the related segment goes OFF<br />
and a notification is sent to the HV system in order to set OFF<br />
the facing HV segment also.<br />
The GRAFCET design follows the states just described.<br />
Actually, we have three Master grafcet which are needed to<br />
manage alarm and stop conditions, and a Normal grafcet to<br />
describe the normal evolution of the system, as in Fig. 6.<br />
What has to be pointed out is that states 2 and 3 are<br />
actually Macro-States, meaning that they contain some other<br />
grafcet to manage the calibration and configuration of each<br />
segment. This way, the grafcet shown is the most general one,<br />
while the deeper control is demanded to the other sub-grafcet.<br />
This is a very useful facility to simplify the view of the<br />
system and concentrate on the general functioning.<br />
Figure 6: Normal grafcet<br />
The algorithm we designed operates the conversion from<br />
grafcet (sequential and parallel processes) to Instruction List<br />
(a strictly sequential language), as in Fig. 7.<br />
[Figure 7 flowchart nodes: Start Main Cycle; Create the Process<br />
Image; Copy Commands from HW Inputs buffer or from OPC<br />
buffer (Remote?); Interlock?; Analyze Input Values (signal<br />
conditioning); Calculate the Transitions; Activate States; Do<br />
Actions for Active States; Output the Process Image; End]<br />
Figure 7: Grafcet → Instruction list conversion algorithm<br />
The initialisation reads the input variables and decides<br />
whether to put them into a local or remote buffer,<br />
depending on the local/remote operation. Then, the<br />
transitions are evaluated: each of them will be considered<br />
crossed if the related condition is true and the preceding state
is active. If the transition is crossed, the next state is activated,<br />
while the preceding state is deactivated.<br />
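The transition-evaluation loop just described can be sketched as follows. The data structures are illustrative; the actual implementation is the generated Instruction List running on the S-300 PLC:

```python
# Sketch of one scan cycle of the GRAFCET evaluation described above:
# a transition is crossed when its condition is true AND its preceding
# state is active; crossing activates the next state and deactivates
# the preceding one.

def grafcet_step(active, transitions, inputs):
    """active      -- set of active state names
    transitions -- list of (prev_state, condition(inputs)->bool, next_state)
    Returns the new set of active states after one scan cycle."""
    crossed = [(p, n) for (p, cond, n) in transitions
               if p in active and cond(inputs)]
    new_active = set(active)
    for prev, nxt in crossed:
        new_active.discard(prev)   # deactivate the preceding state
        new_active.add(nxt)        # activate the next state
    return new_active

# OFF -> CALIBRATE when the (hypothetical) start command is present:
trans = [("OFF", lambda i: i["start"], "CALIBRATE")]
assert grafcet_step({"OFF"}, trans, {"start": True}) == {"CALIBRATE"}
assert grafcet_step({"OFF"}, trans, {"start": False}) == {"OFF"}
```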
The VAT confirms the correctness of our calculations, as in<br />
Fig. 8.<br />
The first elements (PIW) represent the raw data read by<br />
the ADC module: each is a decimal number in the range [-27648,<br />
+27648]. In order to read currents and voltages, we applied<br />
the algorithms for the offset correction. The final results are<br />
the “Iload” and “Vload” values. The last two elements are useful<br />
to check the real voltage going into the ADC module from the<br />
sensing board.<br />
PIW 288 “V sensing + ADC” --- DEC 8872<br />
PIW 290 “V sensing – ADC” --- DEC -14440<br />
PIW 292 “V load + ADC” --- DEC 15496<br />
PIW 294 “V load – ADC” --- DEC -15496<br />
MD 100 "I load +“ --- REAL 3.737275<br />
MD 108 "I load -“ --- REAL -4.101968<br />
MD 132 "V load +“ --- REAL 2.802372<br />
MD 124 "V load -“ --- REAL -2.802372<br />
MD 20 "V sensing + input ADC“ --- REAL 25.67129<br />
MD 28 "V sensing - input ADC“ --- REAL -41.7824<br />
Figure 8: LV VAT<br />
Although not shown above, the VAT can also read the<br />
states of the system; we can check whether it is in OFF or ON<br />
or CALIBRATE, or whatsoever. Moreover, we can simulate<br />
alarm conditions via some switches that let us produce short<br />
circuits, or wiring interruptions.<br />
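The raw-to-engineering conversion behind the VAT values can be sketched briefly: the Siemens S7 analogue modules deliver a nominal range of -27648 to +27648, which is scaled linearly to the physical full scale. The 10 V full scale and zero offset below are assumptions for illustration, not the actual sensing-board calibration constants:

```python
# Sketch of scaling a raw S7 analogue input word (PIW) to a voltage.
# full_scale and offset are assumed example values; the real PLC program
# applies the calibration constants derived from the sensing board.

def piw_to_volts(raw: int, full_scale: float = 10.0,
                 offset: float = 0.0) -> float:
    """Linear scaling with offset correction: raw 27648 <-> full_scale."""
    return raw / 27648.0 * full_scale - offset

assert piw_to_volts(27648) == 10.0
assert piw_to_volts(-13824) == -5.0
```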
The last point of our six-step method consists of<br />
programming the Man-Machine Interfaces into the PVSS<br />
environment; these interfaces let the user operate the system,<br />
monitor parameters, perform actions, acknowledge alarms.<br />
For instance, we monitored the values of current and<br />
voltage; the trend is shown in Fig. 9. It confirms<br />
the reading of the VAT, but presents the same data in a<br />
more readable fashion.<br />
Figure 9: LV variables trend<br />
In order to avoid a proliferation of interfaces different<br />
from each other, the JCOP (Joint COntrol Project) at CERN is<br />
releasing layouts written into PVSS and named “framework”,<br />
in which dimensions, colours, positions of all the elements of<br />
the panels are defined, giving a coherent look to every control<br />
interface of whatever detector or experiment.<br />
Our efforts are now directed towards the programming of<br />
all the panels according to the JCOP’s framework guidelines.<br />
The first step will consist of the integration of both the<br />
Liquid Circulation and the Low Voltage systems into a single<br />
panel. The other control systems will follow and find place<br />
into the same framework, which will represent the whole<br />
HMPID DCS.<br />
IV. CONCLUSIONS<br />
The methodology introduced and adopted here has<br />
proven to be effective and time saving; in fact, it allows an<br />
easy interaction between control engineers and physicists in<br />
charge of the design and operation of the systems. The<br />
GRAFCET language has proved to be powerful and useful for<br />
the programming of the system at every level of hierarchy.<br />
Moreover, the measurements displayed on the VAT are<br />
also directly readable on a man-machine interface in the form<br />
of a diagram, making it easy to monitor over long periods in<br />
order to check the stability and performance of the power supply<br />
system.<br />
V. ACKNOWLEDGEMENTS<br />
The author would like to thank A. Franco for the<br />
fundamental contributions given during the design stage and<br />
the hardware set up as well.<br />
Not only did he give precious help in programming the<br />
SCADA systems, but he also helped with the whole PLC<br />
environment.<br />
VI. REFERENCES<br />
[1] Swoboda D., The Detector Control System for ALICE :<br />
Architecture and Implementation, 5th Conference on<br />
Electronics for LHC Experiments, Snowmass, 1999<br />
[2] Lecoeur G., Määtta E., Milcent H., Swoboda D., A<br />
control system for the ALICE-HMPID liquid distribution<br />
prototype using off the shelf components, CERN Internal note,<br />
Geneva, 1998<br />
[3] David, R. Grafcet: A Powerful Tool for Specification<br />
of Logic Controllers. IEEE transactions on control systems<br />
technology, Vol. 3, N.3, September 1995<br />
[4] Seok Kim H., Young Lee J., Hyun Kown W., A<br />
compiler design for IEC 1131-3 standard languages of<br />
programmable logic controllers, Proceedings of SICE 99,<br />
Morioka, 28-30 July 1999<br />
[5] De Cataldo G., The detector control system for the<br />
HMPID in the ALICE experiment at LHC, 6th Conference on<br />
Electronics for LHC Experiments, Krakow, 2000
THE CMS HCAL DATA CONCENTRATOR: A MODULAR,<br />
STANDARDS-BASED IMPLEMENTATION<br />
Abstract<br />
The CMS HCAL Upper Level Readout system processes<br />
data from 9300 detector channels in a system of about 26<br />
VME Crates. Each crate contains about 18 readout cards,<br />
whose outputs are combined on a Data Concentrator<br />
Card, with real-time synchronization and error-checking<br />
and a throughput of 200 Mbytes/s. The implementation is<br />
modular and based on industry and CERN standards: PCI<br />
bus, PCI-MIP and PMC carrier boards, S-Link and LVDS<br />
serial links. A prototype system including front-end emulator,<br />
HTR cards and Data Concentrator has been built<br />
and tested. A VME motherboard provides a standard platform<br />
for the data concentrator. Implementation details and<br />
current status are described.<br />
E. Hazen, J. Rohlf, S. Wu, Boston University, USA<br />
A. Baden, T. Grassi, University of Maryland, USA<br />
Figure 1: HCAL DAQ Crate (on-detector front end: QIE ADC,<br />
Gigabit Optical Link Tx; 3 channels/fiber at 1.6 Gbit/s; 18 HTR<br />
cards per VME crate; LVDS serial at 80 Mbyte/s; local TTC<br />
fanout of BCR, ECR, CLK; HRC, TTC and DCC cards; Vitesse<br />
800 Mbit/s copper to the L1 CMS Calorimeter Trigger; 200<br />
Mbytes/s average to DAQ)<br />
1 OVERVIEW<br />
The CMS HCAL trigger/DAQ system consists of about<br />
(26) 9U VME64xP crates (Fig. 1) with up to 18 HCAL<br />
Trigger Readout (HTR) modules, one Data Concentrator<br />
Card (DCC), and one HCAL Readout Controller (HRC).<br />
Front-end data is carried from the on-detector front-end<br />
electronics to the crate by 100m optical fibers, each carrying<br />
3 front-end channels. LVDS data links are used to<br />
transfer data from the HTR modules to the DCC and for local<br />
fanout of TTC (Trigger, Timing, Control) signals. The<br />
primary DAQ output is via an S-Link/64[1] carrying an average<br />
data volume of 200 Mbytes/s from each crate.<br />
2 HCAL TRIGGER READOUT CARD<br />
The HTR module is a 9U VME module (Figure 2) equipped<br />
with optical receivers, TTCrx circuitry, outputs on serial<br />
LVDS (Channel Link) and a custom mezzanine card.<br />
Figure 2: HCAL Trigger Readout Card (FE data on 8 or 16<br />
fibers, 24 or 48 channels; TTCRx; local BC0 and clock fanout;<br />
Level 1 and Level 2 pipelines; trigger primitives to the Level 1<br />
trigger; Level 1+2 data to the DCC via a serial link board)<br />
The<br />
optical inputs receive data from the HCAL front-end electronics,<br />
with one charge sample per bunch crossing (BX).<br />
The high-speed serial inputs require special board layout<br />
techniques. The CMS HCAL is a trigger detector, thus<br />
the HTR includes two data pipelines: the trigger pipeline,<br />
which assigns Front-End data to a BX and sends them to<br />
the CMS regional trigger, and the DAQ pipeline where the<br />
FE-data are pipelined, triggered and sent to the Data Concentrator<br />
Card.<br />
Figure 3: HTR Input and Level 1 Pipeline (GOL deserializer with recovered clock; synchronization to the local clock; fiber-to-fiber alignment; test RAM; linearizing LUT; L1-TPG filter; E_T compression LUT; muon window and MIP bit; SLB-PMC of ECAL design)<br />
The HTR input processing and Level 1 Pipeline is shown<br />
in Figure 3. The raw fiber data stream is deserialized, then<br />
synchronized to the local clock. A programmable delay of<br />
up to a few clocks is used to align data from different input<br />
fibers. A test RAM can substitute for the input data stream.<br />
Finally, the 3 channels carried on one fiber are demulti-
plexed. Each channel is then fed to a linearizing look-up<br />
table which converts raw input data to a 16-bit linear energy<br />
value. Next a finite-impulse response (FIR) filter is<br />
used to subtract the pedestal and assign all the energy to<br />
a single bunch crossing. This performs the same function<br />
as a traditional analog shaper, but has the advantage of being<br />
easily reprogrammable. Finally, the energy is converted<br />
to E_T and compressed to 8 bits according to a non-linear<br />
transformation specified by the CMS level 1 calorimeter<br />
trigger, and a comparison is done to see if the signal may<br />
represent a muon. This compressed output plus a muon ID<br />
bit is sent to level 1. The final synchronization and serial<br />
transmission is performed by a Synchronization and Link<br />
Board (SLB) described in detail elsewhere[2]. The latency<br />
of the level 1 pipeline is critical; it must be less than about<br />
23 BX periods. Currently the theoretical minimum for the<br />
HTR implementation is 16 BX periods.<br />
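The LUT-plus-FIR step described above can be sketched in a few lines of Python. This is a purely illustrative model, not the HTR firmware: the LUT contents, pedestal and filter weights are invented for the example.<br />

```python
# Illustrative sketch of the HTR linearize-then-filter step
# (invented LUT, pedestal and weights; not the real firmware).

def linearize(raw, lut):
    """Map a raw ADC code to a linear energy value via a look-up table."""
    return lut[raw]

def fir_filter(samples, weights):
    """Apply FIR weights to a window of consecutive BX samples.
    Weights summing to zero cancel a flat pedestal."""
    return sum(w * s for w, s in zip(weights, samples))

# Toy LUT: a simple x2 scaling standing in for the non-linear ADC transfer
lut = [2 * code for code in range(128)]

# A pulse spread over a few BX on top of a pedestal of 10 counts
raw_samples = [10, 10, 40, 25, 10]
energies = [linearize(s, lut) for s in raw_samples]

# (-0.5, 1.0, -0.5) cancels the pedestal and peaks on the central BX
weights = [-0.5, 1.0, -0.5]
peak = fir_filter(energies[1:4], weights)
```

On a constant (pedestal-only) input the filter output is zero; on the pulse it assigns a single energy value to the central bunch crossing, which is the role the text attributes to the analog shaper it replaces.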
Figure 4: HTR Level 2 (DAQ) Pipeline (Level 1 pipeline and derandomizer; linearizing LUT and filter; zero suppression; trigger primitives path; output formatting to the DCC)<br />
The HTR Level 2 Pipeline is shown in Figure 4. First<br />
is a pipeline of programmable depth which stores data during<br />
the CMS level 1 latency period (a fixed value). Then<br />
comes a “derandomizer” buffer into which data is copied<br />
at each level 1 accept. The derandomizer can hold up to<br />
10 charge samples (one per BX) per event although currently<br />
we anticipate only processing 5 samples. Note that a<br />
given charge sample can in principle participate in multiple<br />
events, so the pipeline-to-derandomizer copy logic must<br />
handle overlapping events. From the derandomizer, data<br />
is linearized by a LUT, filtered by an FIR filter similar to<br />
that in the level 1 pipeline, and a threshold is applied for<br />
zero-suppression. At this point either the output of the filter,<br />
the raw data or both may be inserted into the output data<br />
stream.<br />
A similar pipeline is used to store the level 1 trigger<br />
primitives, synchronized with the corresponding level 2<br />
data. Finally the data is packaged in a variable-length block<br />
format along with any error information from the input<br />
links and transmitted using an LVDS serializer to the data<br />
concentrator.<br />
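The pipeline-to-derandomizer copy logic, including its handling of overlapping events, can be modelled as below. This is a hypothetical Python model: the pipeline depth and the 5-sample copy follow the text, everything else is invented.<br />

```python
# Hypothetical model of the level 1 pipeline + derandomizer copy logic.
# Samples are copied (not popped) so one charge sample can belong to
# two overlapping events, as the text requires.

from collections import deque

PIPELINE_DEPTH = 128   # programmable depth covering the CMS L1 latency
SAMPLES_PER_EVT = 5    # samples copied per L1 accept (text: 5 of up to 10)

pipeline = deque(maxlen=PIPELINE_DEPTH)  # ring buffer of (bx, sample)

def store(bx, sample):
    """Store one charge sample per bunch crossing."""
    pipeline.append((bx, sample))

def l1_accept(trigger_bx):
    """Copy SAMPLES_PER_EVT consecutive samples starting at trigger_bx."""
    wanted = set(range(trigger_bx, trigger_bx + SAMPLES_PER_EVT))
    return [s for (bx, s) in pipeline if bx in wanted]

for bx in range(20):
    store(bx, 100 + bx)

evt_a = l1_accept(3)   # BX 3..7
evt_b = l1_accept(6)   # BX 6..10, overlapping evt_a at BX 6 and 7
```

Because accepts copy by BX index rather than consuming samples, the two events above legitimately share the samples at BX 6 and 7.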
3 DATA CONCENTRATOR CARD<br />
The Data Concentrator Card is composed of a VME motherboard,<br />
six LVDS link receiver boards and a PMC-type<br />
logic board. The motherboard is a VME64x 9Ux400mm<br />
single-slot module. The motherboard[3] (Fig. 5) supports<br />
VME access up to A64/D32, and contains three bridged<br />
PCI busses. Six PC-MIP[4] mezzanine sites are arranged<br />
in groups of three on two 33MHz 32-bit PCI busses. A third<br />
33MHz 64-bit PCI bus is bridged to the VME bus using a<br />
Tundra Universe II VME-to-PCI bridge.<br />
Figure 5: VME 9U Motherboard (six PC-MIP sites in groups of three on two bridged 33MHz 32-bit PCI busses; triple-wide and standard PMC sites; local control FPGA with JTAG; Universe II VME-PCI bridge to a 33MHz 64-bit PCI bus; VME64x interface)<br />
A single large logic mezzanine board has access to all<br />
three PCI busses for high-speed application-specific processing,<br />
and an additional standard PMC site is available.<br />
A local control FPGA on the motherboard provides access<br />
to on-board flash configuration memory, a programmable<br />
multi-frequency clock generator, and JTAG.<br />
The LVDS link receiver boards[5] (Fig. 6) use Channel<br />
Link[6] technology from National Semiconductor. Each<br />
board contains three independent link receivers which can<br />
operate at 20–66MHz (16-bit words). Buffering for 128K<br />
32-bit words is provided for each link with provision to<br />
discard data if buffer occupancy exceeds a programmable<br />
threshold. Event building, protocol checking, event number<br />
checking and bit error correction are performed independently<br />
for each link. A PCI target interface provides<br />
single-word and burst access to the data stream, plus<br />
numerous monitoring registers. A single PCI burst read<br />
serves to build an event from fragments found in each of<br />
the three input buffers. The expected event number (low<br />
eight bits) is provided as part of the PCI address, and a<br />
mis-match causes an error bit to be set in the link trailer.<br />
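The event-number cross-check can be sketched as follows. This is assumed behaviour reconstructed from the description above; the function and constant names are invented.<br />

```python
# Sketch of the LRB event-number check: the reader encodes the expected
# event number (low eight bits) in the PCI address; a mismatch sets an
# error bit in the link trailer. Names are invented for illustration.

ERR_EVN_MISMATCH = 0x1

def burst_read(addr, fragment):
    """fragment = (event_number, payload); returns (payload, trailer_flags)."""
    expected_evn = addr & 0xFF
    evn, payload = fragment
    flags = 0 if (evn & 0xFF) == expected_evn else ERR_EVN_MISMATCH
    return payload, flags

# Address low byte 0x42 matches the fragment's event number: no error
payload, flags = burst_read(0x12345642, (0x42, [1, 2, 3]))
```

A fragment arriving with the wrong event number is still delivered, but the error flag in the trailer lets downstream logic spot the loss of synchronization.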
Figure 6: PC-MIP 3-Channel Link Receiver Board (three DS90CR285A LVDS receivers, Altera ACEX 1K100 FPGA, 256k x 36 synchronous ZBT SRAM, 33MHz 32-bit PCI interface)<br />
The logic mezzanine board (Fig. 8) contains the core<br />
data concentrator logic. The prototype was implemented<br />
using a Xilinx XC2V1000 for the logic, plus three Altera<br />
EP1K30 for three PCI bus interfaces.<br />
The event builder logic merges two data streams from<br />
the two PCI busses, and re-orders the incoming data so<br />
that the various sub-types (Level 1, Level 2...) are in contiguous<br />
blocks in the output stream. An on-board TTCrx<br />
stores level 1 accepts (L1A) into a FIFO which drives the<br />
event builder. For each L1A, the data decoder triggers a<br />
PCI burst read on the PCI-1 and PCI-2 interfaces simultaneously.<br />
As data is transferred it is sorted into various<br />
sub-types and summary and monitoring information is collected.<br />
Each sub-type is pushed into a unique FIFO. After<br />
the end of the event has been processed (block trailer received<br />
from LRB) an end-of-event marker is pushed into<br />
each of the FIFOs. The event builder reads data from each<br />
of the sub-type FIFOs in turn, inserting protocol words as<br />
needed. The DCC logic is designed to operate continuously<br />
at the full speed of the two input PCI busses, namely<br />
33MHz x 32 bits x 2. The event builder and output logic must<br />
thus run at an average rate of at least 66MHz (32-bit words)<br />
or 264MBytes/sec.<br />
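The sub-type sorting and end-of-event handling described above can be modelled as below. This is an illustrative Python model, not the FPGA implementation; the sub-type names and protocol words are invented.<br />

```python
# Simplified model of the DCC event builder: decoded words land in
# per-sub-type FIFOs, an end-of-event marker closes each FIFO, and the
# builder drains the FIFOs in turn so sub-types come out in contiguous
# blocks. Names and protocol words are invented.

from collections import deque

EOE = "end-of-event"
SUBTYPES = ("level1", "level2", "summary")

def build_event(fifos):
    """Read each sub-type FIFO up to its end-of-event marker."""
    out = []
    for name in SUBTYPES:
        out.append(("header", name))        # protocol word opening the block
        while True:
            word = fifos[name].popleft()
            if word == EOE:
                break
            out.append(word)
    return out

fifos = {name: deque() for name in SUBTYPES}
fifos["level1"].extend(["tp0", "tp1", EOE])
fifos["level2"].extend(["d0", "d1", "d2", EOE])
fifos["summary"].extend(["sum", EOE])

event = build_event(fifos)
```

Draining one FIFO completely before moving to the next is what re-orders interleaved arrivals into contiguous per-sub-type blocks in the output stream.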
The event builder output is sent in parallel to several destinations.<br />
Each output path contains a filter which can be<br />
programmed to select specific portions of events or a specific<br />
subset of events (prescaled, specially marked, etc.).<br />
1. The DAQ Output. Every event is sent via SLINK-64<br />
to the CMS DAQ. The detailed contents of each event<br />
may be controlled by configuration registers.<br />
2. The Trigger Data Output. The trigger primitives sent<br />
to the CMS L1 trigger are also sent via SLINK-64<br />
to a special “trigger DAQ” system for monitoring of<br />
the trigger performance.<br />
3. The Spy Output. A selected subset of events is sent to<br />
a VME-accessible memory for monitoring and diagnostics.<br />
Figure 7: HCAL DAQ Buffering (HTR derandomizer buffers, protected against overflow by trigger-rules logic; LVDS links at the same speed as HTR processing; DCC 256kb link-receiver FIFOs holding >500 events; two 33MHz 32-bit PCI busses into the ~100MHz event builder; 8Mb buffer holding >4000 events; SLINK LSC output of Level 2 data to the Readout Unit; Busy/Ready and overflow warning)<br />
Figure 8: DCC Logic PMC (L1A FIFO fed by the on-board TTCrx, driving two data decoders on PCI 1 and PCI 2; per-bus Level 1, Level 2 and summary FIFOs; event builder feeding DAQ, trigger and spy FIFOs; monitoring via VME to CPU)<br />
Error detection and recovery are a primary consideration<br />
in a large synchronous system and the DCC contains<br />
logic dedicated to this purpose. Figure 7 shows the main<br />
DAQ data pipeline and buffering in the HCAL readout system.<br />
Hamming error correction is used for the LVDS links<br />
between the HTR and DCC. All single-bit errors are corrected<br />
and all double-bit errors are detected by this technique.<br />
Event synchronization is checked by means of an<br />
event number in the header and trailer of each event, which<br />
are checked by the LRB logic against the TTC event number.<br />
Buffer overflow is avoided by the expedient of discarding<br />
the data payload and retaining only header and<br />
trailer words when the LRB buffer occupancy exceeds a<br />
programmable level. Additionally, an “overflow warning”<br />
output is provided which is delivered to the CMS trigger<br />
throttling system to request a reduction in the rate of L1A.<br />
Data transfers from the LRB to DCC logic are protected by
parity checks on the PCI busses. The event builder operates<br />
at a processing speed sufficient to handle 100% occupancy<br />
of the two PCI busses. After the event builder is a large<br />
memory, which can contain several thousand average-size<br />
events.<br />
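The single-error-correct, double-error-detect behaviour quoted for the LVDS links can be illustrated with a toy Hamming(8,4) code: a Hamming(7,4) code plus an overall parity bit. The real link uses a wider code; this sketch only shows the mechanism.<br />

```python
# Toy Hamming(8,4) SEC-DED code: corrects any single-bit error and
# flags any double-bit error, as claimed for the HTR-DCC links.
# Bit order: [p1, p2, d1, p4, d2, d3, d4, p0] (p0 = overall parity).

def encode(d):
    d1, d2, d3, d4 = (d >> 3) & 1, (d >> 2) & 1, (d >> 1) & 1, d & 1
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p4 = d2 ^ d3 ^ d4
    bits = [p1, p2, d1, p4, d2, d3, d4]
    p0 = 0
    for b in bits:
        p0 ^= b
    return bits + [p0]

def decode(bits):
    """Return (data, status) with status 'ok', 'corrected' or 'double'."""
    p1, p2, d1, p4, d2, d3, d4, p0 = bits
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s4 = p4 ^ d2 ^ d3 ^ d4
    syndrome = s1 + 2 * s2 + 4 * s4     # 1-based position of a flipped bit
    overall = 0
    for b in bits:
        overall ^= b
    if syndrome and overall:            # single-bit error: correct it
        fixed = bits[:]
        fixed[syndrome - 1] ^= 1
        _, _, d1, _, d2, d3, d4, _ = fixed
        status = "corrected"
    elif syndrome:                      # syndrome set but parity even: 2 errors
        return None, "double"
    elif overall:                       # only the overall parity bit flipped
        status = "corrected"
    else:
        status = "ok"
    return (d1 << 3) | (d2 << 2) | (d3 << 1) | d4, status
```

Flipping one codeword bit yields the original data with status "corrected"; flipping two yields status "double" with no data, which is exactly the header/trailer-level error reporting the text describes.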
The main bottleneck (speed limitation) in the DCC is<br />
the two 32/33 PCI busses through which all data must<br />
flow. The theoretical maximum bandwidth for one of these<br />
busses is 33MHz x 4 or 132 Mbytes/s per bus. In practice<br />
we expect to achieve about 100Mbytes/s, for a total of<br />
200Mbytes/s throughput on the two busses. This is exactly<br />
the maximum average data volume permitted on one input<br />
of the CMS DAQ switch.<br />
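The bandwidth figures quoted above follow from simple arithmetic, which can be checked directly:<br />

```python
# Back-of-envelope check of the DCC throughput numbers quoted in the text.

BUS_CLOCK_HZ = 33_000_000      # 33MHz PCI clock
BUS_WIDTH_BYTES = 4            # 32-bit bus
N_BUSSES = 2

theoretical_per_bus = BUS_CLOCK_HZ * BUS_WIDTH_BYTES   # 132 Mbytes/s
practical_per_bus = 100_000_000                        # ~100 Mbytes/s expected
total_throughput = N_BUSSES * practical_per_bus        # 200 Mbytes/s
```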
Figure 9: HCAL Readout Demonstrator (front-end emulator FEE with L1A and LHC emulation; TTCvi/TTCvx fanout of TTC over fiber to TTCrx chips on the HRC, HTRs and DCC; G-Link data from the FEE to two HTRs; LVDS links from the HTRs to the DCC; S-Link output; all modules on a VME bus)<br />
4 PROTOTYPE TESTING<br />
A “demonstrator” (first prototype) of the entire system<br />
is being built (see Figure 9). The HTR demonstrator is<br />
a 6U VME module with 4 G-Link receivers running at<br />
800Mbit/s and an Altera APEX family FPGA for the processing<br />
logic. The (second) prototype and production HTR<br />
modules will be 9Ux400mm VME modules using CERN<br />
GOL links. The DCC demonstrator is built on the 9U<br />
VME motherboard as described above, and is quite close<br />
in hardware configuration to the anticipated production design.<br />
A custom front-end emulator (FEE) which simulates<br />
LHC timing and produces dummy front-end data is used<br />
to provide simulated input data to the HTR for testing. A<br />
G-Link based optical S-Link is used to transport data from<br />
the DCC demonstrator to a VME CPU for verification.<br />
As of this writing, a simplified demonstrator using one<br />
FEE, one HTR, one DCC and S-Link to CPU has been successfully<br />
tested for use in a high-rate radioactive source test<br />
at Fermilab. Data was transferred through the entire chain<br />
without error at a continuous rate of 80 Mbytes/s. The S-<br />
Link data is received on the CPU in a large DMA buffer<br />
(400+ Mb) and, when full, written to disk for off-line analysis.<br />
We expect to complete the full demonstrator shortly,<br />
though only highly simplified FPGA code will be implemented<br />
in the HTR and DCC.<br />
5 SUMMARY<br />
A demonstrator of the CMS HCAL DAQ has been assembled<br />
and testing has begun. The data concentrator makes<br />
extensive use of standard interfaces and busses, and was assembled<br />
from “multifunction” components developed separately.<br />
This resulted in significant savings by sharing development<br />
costs between multiple projects. The design of<br />
the full-function prototypes will continue through the remainder<br />
of 2001, with a working prototype system expected<br />
in 2002.<br />
6 REFERENCES<br />
[1] “The S-LINK 64 bit extension specification: S-LINK64”,<br />
A. Racz, R. McLaren, E. van der Bij, EP Division, CERN.<br />
see http://hsi.web.cern.ch/HSI/s-link/<br />
[2] See http://cmsdoc.cern.ch/carlos/SLB/.<br />
[3] See http://ohm.bu.edu/%7Ehazen/my d0/mb9u/<br />
[4] “PC*MIP Specification”, VITA 29. VMEbus International<br />
Trade Association Standards Organization<br />
[5] See http://ohm.bu.edu/%7Ehazen/my d0/TxRx/<br />
[6] The National Semiconductor family of LVDS point-to-point<br />
serial links. See for example the transmitter data sheet at:<br />
http://www.national.com/pf/DS/DS90CR285.html
EMI Filter Design and Stability Assessment of DC Voltage Distribution based on<br />
Switching Converters.<br />
Abstract<br />
The design of DC power distribution for LHC front-end<br />
electronics imposes new challenges. Some CMS sub-detectors<br />
have proposed to use a DC power distribution based on DC-<br />
DC power switching converters located near the front-end<br />
electronics.<br />
DC-DC converters operate as constant power loads. They<br />
exhibit at their input terminals a dynamic negative impedance at<br />
low frequencies that can generate interactions between<br />
switching regulators and other parts of the input system,<br />
resulting in instabilities. In addition, switching converters<br />
generate interference at both input and output terminals that<br />
can compromise the operation of the front-end electronics and<br />
neighbouring systems. An appropriate level of filtering is<br />
necessary to reduce this interference.<br />
This paper addresses the instability problem and presents<br />
methods of modelling and simulation to assess the system<br />
stability and performance. The paper also addresses the<br />
design of input and output filters to reduce the interference<br />
and achieve the required performance.<br />
I. INTRODUCTION<br />
DC power distribution has been used by the aerospace and<br />
telecommunication industries [1][2]. This topology distributes<br />
a high voltage (HV) and converts it to low voltage (LV) either<br />
locally or near the electronics equipment. In high-energy<br />
physics (HEP), some CMS and Atlas sub-detectors [3][4]<br />
have proposed similar topologies to power-up the front-end<br />
electronics. In such proposals, the AC mains is rectified in the<br />
control room and a DC high voltage (200-300V) is distributed<br />
over a distance of about 120-150 m to the periphery of the detector.<br />
At that location, DC-DC converters transform with high<br />
efficiency the HV into the LV required by the front-end (FE)<br />
electronics. Those converters are located about 10-20 m<br />
from the front-end electronics due to the intense magnetic<br />
field that exists inside the detector.<br />
For LHC experiments, converters have to operate reliably<br />
under high-energy neutron radiation and fringe magnetic<br />
field. Converters have to present high efficiency, galvanic<br />
isolation between input and output, and couple a low amount of<br />
noise to the surrounding electronic equipment. Intrinsically,<br />
switching power converters generate a noise level that, in<br />
general, is not compatible with the sensitive electronics used<br />
in HEP experiments. Input and output filters are necessary to<br />
attenuate the level of noise coupled by conduction and<br />
radiation through the cables. Also, interactions of the<br />
converters with input filters and distribution lines can<br />
deteriorate the performance or induce instabilities in the<br />
system because the converters operate as constant power<br />
loads.<br />
F. Arteche 1, B. Allongue 1, F. Szoncso 1, C. Rivetta 2<br />
1 CERN, 1211 Geneva 23, Switzerland (Fernando.Arteche@cern.ch)<br />
2 FERMILAB, P.O.Box 500 MS222, Batavia Il 60510 USA (rivetta@fnal.gov)<br />
Figure 1: DC distribution system (3-phase mains at 400V/50Hz into an AC/DC converter with filter; a ~150 m distribution line to a bus feeding N DC-DC converter units, each ~20 m from its front-end electronics)<br />
In this paper, analysis and design approaches for the<br />
system are presented. Section II presents an overall view of<br />
the problem, section III summarizes the standards related to<br />
conducted interference emissions, section IV describes the<br />
design of the system considering stability issues, while section<br />
V addresses the design of the input filter to reduce conducted<br />
interference.<br />
II. PRESENTATION OF THE PROBLEM.<br />
All switching converters generate and emit high frequency<br />
noise. The emission can be coupled to the sensitive FE<br />
electronics and neighbouring subsystem electronics by<br />
conduction and/or radiation. This noise can interfere with the<br />
sensitive FE electronics and cause malfunction. The<br />
frequency range of the electromagnetic interference (EMI)<br />
spectrum generated by power electronics equipment can<br />
extend up to 1GHz.<br />
For conducted EMI there are two principal modes of<br />
propagation, differential (DM) and common mode (CM). The<br />
propagation of the differential mode EMI takes place between<br />
conductor pairs, which form a conventional return circuit (e.g.<br />
negative/positive conductors, line phase/neutral conductors).<br />
The DM EMI is the direct result of the fundamental operation<br />
of the switching converter. The propagation of the common<br />
mode EMI takes place between a group of conductors and
either ground or another group of conductors. The path for the<br />
CM EMI often includes parasitic capacitive or inductive<br />
coupling. The origin of the CM EMI is either magnetic or<br />
electric. CM EMI is electrically generated when a circuit with<br />
large dv/dt has a significant parasitic capacitance to ground.<br />
Magnetically generated CM EMI appears when a circuit loop<br />
with large di/dt in it has significant mutual coupling to a<br />
group of nearby conductors. It is also important to mention that<br />
there is a significant energy exchange between the modes. This<br />
effect is known as differential-common mode conversion.<br />
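The two conducted modes are conventionally separated as below. These are textbook definitions rather than anything specific to this setup, and the example currents are invented.<br />

```python
# Standard decomposition of the two conductor currents into
# differential-mode (DM) and common-mode (CM) components.

def dm_cm(i_pos, i_neg):
    """i_pos, i_neg: currents flowing INTO the positive and negative
    conductors. DM flows out one conductor and back the other; CM flows
    the same direction on both and returns through ground."""
    i_dm = (i_pos - i_neg) / 2
    i_cm = (i_pos + i_neg) / 2
    return i_dm, i_cm

# 1.0 A forward and 0.9 A return: a 0.95 A differential current riding
# on a 0.05 A common-mode (ground-return) current
i_dm, i_cm = dm_cm(1.0, -0.9)
```

When the return current exactly equals the forward current, the common-mode term vanishes; any imbalance shows up as ground current, which is the component the CM filtering targets.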
In switching power converters, the same fundamental<br />
mechanisms that are responsible for conducted EMI can also<br />
generate radiated EMI. Metal cases around the converter tend<br />
to attenuate the internal high frequency electromagnetic<br />
fields. Input and output cables or improperly grounded<br />
apparatus can still lead to substantial radiation.<br />
Additional filtering is necessary at the input and output of<br />
converters to reduce the conducted noise. Filters have to<br />
provide attenuation in a wide range of frequencies between<br />
the switching frequency and up to 30-50 MHz. To fulfil these<br />
requirements, cascades of low-pass filters attenuating both low<br />
and high frequency ranges are used. Figure 2 depicts the DC-<br />
DC converter unit composed of two commercial VICOR<br />
III. EMI REGULATIONS AND STANDARDS.<br />
Regulation of EMI began in the early days of electronics.<br />
Today there exists a vast collection of standards covering<br />
equipment in industry, military, commerce and residences. In<br />
Europe, limits for high frequency interference are specified<br />
either by generic standards (EN50081-1 for residential,<br />
commercial, and light industry, EN50081-2 for industrial<br />
environment) or by standards for specific product families<br />
(EN55014 for household appliances, EN55022 for<br />
information technology equipment, or EN55011 for radiofrequency<br />
equipment) for industrial, medical and scientific<br />
applications. In USA, the Federal Communication<br />
Commission (FCC) issues electromagnetic compatibility<br />
(EMC) standards, with different limits for class A and class B<br />
devices. Both FCC standards are defined for digital equipment<br />
marketed for use in commerce, industry or business<br />
environment (class A) and a residential environment (class B).<br />
Typically, European standards for conducted high frequency<br />
emissions are specified in the frequency range from 150KHz<br />
Figure 2: Scheme of the DC-DC converter unit<br />
converters [5]. Low-pass filters attenuating the high<br />
frequency (HF) range are included at the output of each<br />
converter to reduce both differential and common mode noise<br />
conducted to the distribution cable located inside the detector.<br />
An HF low-pass filter, common to both VICOR converters, is<br />
present at the input. This filter is in cascade with the internal<br />
input filter of the converters, and the set has to be designed to<br />
provide noise attenuation in a wide range of frequencies. The<br />
HF filter is designed to attenuate both DM and CM in high<br />
frequency while the internal filter is tuned to reduce DM low<br />
frequency components. These filters can interact adversely<br />
with the converter at low frequency, resulting in severe<br />
performance degradation or even instability.<br />
Power converters, operating with tight closed-loop<br />
regulation of the output voltage, present negative input<br />
impedance in a range of frequencies where the feedback is<br />
effective. This negative impedance interacts with the input<br />
filter, input distribution cables, and other converters<br />
connected to the same distribution line, giving rise to<br />
instabilities or deterioration of the dynamic performance.<br />
Input EMI filters have to be properly designed to avoid this<br />
problem and also to provide the adequate attenuation in a<br />
wide frequency range.<br />
to 30MHz, and in the United States from 450KHz to 30MHz.<br />
The allowed conduction emission levels are between 46 dBuV<br />
and 79 dBuV. These limits are imposed to the input cord of<br />
the equipment under test and the compliance is verified<br />
inserting a line impedance stabilization network (LISN) in<br />
series with the unit’s AC power cord. The measured values<br />
correspond to the voltage level registered across any input<br />
wire when it is terminated at the source by 50 ohms<br />
impedance to ground (LISN termination). The standards do<br />
not distinguish between CM and DM coupling mechanism.<br />
The military standard for conducted emissions (MIL-STD-461<br />
CE-03) differs from the other standards. It does not use the<br />
LISN; instead it directly measures the emission current using a<br />
current probe. It also specifies that conducted emissions have<br />
to be measured on other cables in addition to the power cord.<br />
The range of frequencies covered is between 14 KHz and 50<br />
MHz and the emission levels are between 86dBuA and<br />
20dBuA [6]. To compare these standards we should normalize<br />
the measurements to dBuA or dBuV assuming a normalized<br />
impedance of 50 ohms. Figure 3 compares the three standards<br />
normalized to dBuV.<br />
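The normalization used for Figure 3 follows from Ohm's law: across 50 ohms, dBuV = dBuA + 20*log10(50), i.e. about +34 dB. A minimal sketch:<br />

```python
import math

def dbua_to_dbuv(dbua, r_ohm=50.0):
    """Convert a current level in dBuA to a voltage level in dBuV
    across a normalizing impedance r_ohm (V = I * R)."""
    return dbua + 20.0 * math.log10(r_ohm)

# The MIL-STD current limits expressed as voltages across 50 ohms
mil_hi = dbua_to_dbuv(86.0)   # upper limit, ~120 dBuV
mil_lo = dbua_to_dbuv(20.0)   # lower limit, ~54 dBuV
```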
Figure 3: Conductive EMI standards, normalized to 50 ohms (MIL-STD, EU and FCC-B limits in dBuV versus frequency in Hz)<br />
In the HEP community there has not been a systematic<br />
approach to define both emission and susceptibility policies for<br />
EMI signals [7]. Some experiments have written policies<br />
considering issues about grounding and shielding. Also, they<br />
have included a rule to purchase equipment that complies<br />
with either European or American standards, but there is no<br />
quantitative limit on the emission level of power distribution<br />
and signal cables routed inside the detectors. CMS is trying to<br />
define limits for both emission and susceptibility of the<br />
electronic equipment to be installed in the experiment. They<br />
will be based on measurements of prototypes and analysis of<br />
the cross effect among radiator-receiver electronics. The<br />
future standards applied to power supply distributions will be<br />
based on direct measurement of the noise current level as<br />
required by the military standard, and the level imposed will<br />
be close to that required by commercial standards. They will also<br />
address some limitations on the common mode current levels<br />
to avoid cross-talk among equipment due to ground currents.<br />
IV. NEGATIVE INPUT IMPEDANCE OF DC-DC<br />
CONVERTERS<br />
DC-DC switching converters with tight output voltage<br />
regulation operate as constant power loads. The instantaneous<br />
value of the input impedance is positive, but the incremental or<br />
dynamic impedance is negative. Due to this negative input<br />
impedance characteristic, interactions between switching<br />
converters and other parts of the system connected to the<br />
same distribution bus may result in system instability.<br />
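The negative incremental impedance follows directly from constant power operation: I = P/V, so dI/dV = -P/V^2 and the incremental impedance dV/dI = -V^2/P = -V/I, while V/I itself stays positive. A numerical illustration (operating point invented):<br />

```python
# Constant power load: raise the input voltage and the input current
# falls, so the small-signal input impedance is negative even though
# the instantaneous impedance V/I is positive.

P = 300.0      # constant input power [W], illustrative value
V = 300.0      # DC input voltage [V]
I = P / V      # 1 A

r_static = V / I            # +300 ohm: instantaneous impedance
r_incremental = -V * V / P  # -300 ohm: small-signal input impedance

# Perturb the voltage and watch the current move the opposite way
dV = 1.0
dI = P / (V + dV) - I
```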
To analyse the behaviour of the converter and its<br />
interaction with the rest of the system, a reduced model of the<br />
system is necessary. The reduced model has to represent the<br />
behaviour of the system at low frequency in the range<br />
between DC and frequencies near the bandwidth of the power<br />
converter. In this frequency range, the power converter<br />
behaves at the input as a constant power load in cascade with<br />
the input filter. The rest of the system can be modelled as<br />
follows: the distribution line can be approximated by a<br />
lumped inductance in series with a resistor and the HF filter<br />
can be reduced to the DM capacitors.<br />
To present a qualitative behaviour of the converter at the<br />
input terminals, let us consider first the simple equivalent<br />
circuit depicted in figure 4. It represents a VICOR converter<br />
connected to a primary source with short leads. Using as state<br />
variables the inductor current il and the capacitor voltage vc,<br />
the state equations are:<br />
C dvc/dt = il - Pc/vc<br />
L dil/dt = E - il.rl - vc     (1)<br />
Figure 4: Model at the input terminals of the DC-DC converter (source E with series resistance rl and inductance L, capacitor C, constant power load drawing il = Pc/vc).<br />
This equation has two real valued equilibrium points if the<br />
condition rl < E^2/(4.Pc) is verified. Figure 5 shows the phase<br />
portrait of eqn. 1. This picture shows there is a region of<br />
convergence around the equilibrium point SS1 if it is stable.<br />
The stability of this point is defined by the condition<br />
rl > (Pc.L)/(C.vc^2), where vc is the capacitor voltage at<br />
equilibrium. The equilibrium point SS2 is not depicted in the<br />
figure, but it is located at low voltage and high current and, in<br />
general, is unstable. In the same plot, it is possible to see an<br />
unstable region near the origin of coordinates. Transient<br />
operating points falling into this region do not converge to<br />
the equilibrium point SS1 but escape at vc = 0.<br />
Figure 5: Phase portrait of equation 1 (il [amps] versus vc [volts])<br />
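The equilibrium and stability conditions above can be checked numerically. The component values below are invented for the example; only the formulas come from the text.<br />

```python
# Numerical check of the equilibrium and stability conditions of eqn (1).
# Component values are illustrative, not from the paper.

import math

E, rl = 300.0, 1.0        # source voltage [V], line resistance [ohm]
L, C = 1e-3, 1e-4         # lumped inductance [H] and capacitance [F]
Pc = 600.0                # constant power load [W]

# Setting the derivatives in eqn (1) to zero gives
# vc^2 - E*vc + rl*Pc = 0, with real roots iff rl < E^2/(4*Pc)
assert rl < E**2 / (4 * Pc)
disc = math.sqrt(E**2 - 4 * rl * Pc)
vc_ss1 = (E + disc) / 2    # high-voltage equilibrium SS1
vc_ss2 = (E - disc) / 2    # low-voltage equilibrium SS2 (generally unstable)

# Local stability of SS1 requires rl > Pc*L/(C*vc^2)
ss1_stable = rl > Pc * L / (C * vc_ss1**2)
```

With these values SS1 sits near the source voltage and satisfies the stability inequality, matching the qualitative picture of the phase portrait.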
To limit the operating region of the converter to the region<br />
of convergence of the stable equilibrium point, converters<br />
either include some limits into the dynamic range of the<br />
control circuit or disable the operation of the power transistor<br />
for low values of the input voltage. VICOR converters disable<br />
the unit if the input voltage value is outside of a voltage band<br />
around the equilibrium point (e.g. Vnom=300V, Vin=180-<br />
375V). In this case, the converter can still be modelled by<br />
equation 1 but including the condition Pc ≠ 0 if vc is between<br />
180V and 375V, and Pc = 0 if vc is outside of this region.<br />
As conclusion from this brief analysis, to analyse the<br />
stability of the system, the converter model can be simplified<br />
by a linearized model around the equilibrium point (small-signal<br />
analysis). The region of convergence can be estimated<br />
analytically or by simulation using a non-linear model of the<br />
converter. The linearized model of the converter at the input<br />
terminals is characterized by a negative resistance of<br />
magnitude rn = -vn/in, where vn is the DC input voltage<br />
and in is the DC input current. This current depends on the<br />
load of the power converter, and rn can take different values<br />
according to the operating conditions.<br />
Let us consider now the DC power distribution system<br />
composed of one AC/DC converter, a distribution line of 150<br />
m, and N converter units connected to the end-point, as<br />
depicted in figure 1. Each DC-DC converter unit is<br />
composed of 2 VICOR converters, connected in parallel at<br />
the input. Only one input HF filter is used per unit, as<br />
shown in figure 2. At the distribution bus, the system can be<br />
represented by the simplified block diagram shown in figure<br />
6. The source sub-system contains the impedance of the AC<br />
mains, the AC/DC converter and the HV distribution cable.<br />
The load sub-system is composed of N DC-DC converter<br />
units. The source sub-system is stable when loaded by a<br />
resistor. Each DC-DC converter unit is stable if connected<br />
directly to a power supply.<br />
Figure 6: Simplified block diagram (source sub-system E, Fs, Zo feeding the load sub-system Zin, Fc)<br />
Assuming the source sub-system has an input/output<br />
transference Fs and each DC-DC converter a transference Fc,<br />
the overall transference between any output voltage Von and the<br />
input voltage E is given by:<br />
Von/E = Fs.Fc / (1 + Zo/Zin) = Fs.Fc / (1 + Tm)<br />
where Zo is the output impedance of the source sub-system and Zin is the input impedance of the load sub-system. Since both Fc and Fs are stable transfer functions, the stability of the system is determined by the term 1/(1 + Tm), which represents the loading effect between the source and load sub-systems.<br />
If |Zin| >> |Zo| at all frequencies, the loading effect is negligible. This condition prevents any noticeable interaction between the source and load sub-systems, but it can be difficult to achieve over the whole frequency range and may be overly conservative. If |Zo| is larger than |Zin|, a considerable loading effect exists, but this does not necessarily imply a stability problem.<br />
In this case, either the Nyquist criterion or Bode based<br />
analysis can be applied to the gain Tm to determine the system<br />
stability [8][9].<br />
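As a sketch of how such a Tm check can be run, the minor-loop gain can be evaluated for a toy source filter loaded by N constant-power units; every element value below is an illustrative assumption, not a parameter of the system in the paper:<br />

```python
import math

# Minor-loop gain Tm(jw) = Zo / Zin for a toy distributed-power model.
# Source sub-system: damped series (r + jwL) branch against a shunt C.
# Load sub-system: N parallel units, each a negative resistance rn
# shunted by its input decoupling capacitance cd.

def zo_source(w, L=100e-6, C=100e-6, r=0.05):
    zl = r + 1j * w * L
    zc = 1.0 / (1j * w * C)
    return zl * zc / (zl + zc)        # parallel combination

def zin_load(w, n, rn=-270.0, cd=5e-6):
    z_unit = rn / (1j * w * cd * rn + 1.0)   # rn in parallel with 1/(jw*cd)
    return z_unit / n                        # n identical units in parallel

freqs = [10 ** (1 + 4 * k / 399) for k in range(400)]    # 10 Hz .. 100 kHz
for n in (1, 8):
    peak = max(abs(zo_source(2 * math.pi * f) / zin_load(2 * math.pi * f, n))
               for f in freqs)
    print(f"N = {n:2d} units: max |Tm| = {peak:.2f}")
```

Where |Tm| crosses unity, the phase of Tm decides stability (Nyquist); raising the source-filter damping r lowers the resonant peak of Zo, which is the same remedy the text arrives at for point A.<br />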
Figure 7, in the upper plot, shows the Bode plot of the output impedance of the source sub-system and of the input impedance of the load sub-system for different values of the capacitance CD = CD1 + CD2 (fig. 2). This capacitance is included to improve the LF noise filtering and to improve the stability in the high frequency region (around point B). In that area, fig. 7 shows that |Tm| is equal to one and the phase is near 180°. The plots in figure 7 depict the load impedance for only one DC-DC converter unit connected to the bus. As the number of converters connected to the bus increases, the input impedance Zin decreases and the stability of the system becomes critical at low frequency (point A). At this frequency there is interaction between the AC/DC converter filter and the negative impedance of the DC-DC converters. In this case, to improve the stability margin it is necessary to increase the damping of the AC/DC converter.<br />
Figure 7: Bode plot of Tm (|Zo| and |Zin|, |Tm|, and the phase of Tm versus frequency, for Cd = 0.1 uF and Cd = 5 uF)<br />
V. CONDUCTIVE EMI INPUT FILTER<br />
The noise generated by DC-DC converters depends strongly on the topology of the converter, the layout design, parasitic elements, etc. To prevent EMI from entering the distribution cables, passive filters are usually inserted between the converter and the lines.<br />
Filters can be considered as multi-port networks, where the input or output currents are linked by the condition ig = i+ − i− (figure 2), assuming there is no radiation in the frequency range of interest. For analysis and design these variables are decomposed into two orthogonal components, the differential mode and the common mode components, defined as:<br />
iDM = (i+ − i−) / 2,   iCM = (i+ + i−) / 2<br />
The main consideration in the filter design is to provide adequate attenuation of both EMI signal components using the smallest filter circuit. Additional important considerations are the filter damping and the parasitic elements of the filter components.<br />
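A minimal sketch of the differential/common mode decomposition defined above, together with the inverse relations that recover the line currents:<br />

```python
# Differential-mode / common-mode decomposition of the two line currents:
#   i_DM = (i_plus - i_minus) / 2,   i_CM = (i_plus + i_minus) / 2
# (both currents measured in the same reference direction).

def dm_cm(i_plus, i_minus):
    return (i_plus - i_minus) / 2.0, (i_plus + i_minus) / 2.0

# Illustrative amperes: the return line carries 0.8 A in the opposite sense.
i_dm, i_cm = dm_cm(1.2, -0.8)
print(f"i_DM = {i_dm:.2f} A, i_CM = {i_cm:.2f} A")

# Inverse relations: i_plus = i_CM + i_DM, i_minus = i_CM - i_DM.
assert abs((i_cm + i_dm) - 1.2) < 1e-12
assert abs((i_cm - i_dm) - (-0.8)) < 1e-12
```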
The methodology followed in designing both the input and output filters consisted of measuring the conducted EMI signal generated by the power converter at both the input and the output, estimating the attenuation adequate to satisfy a given standard, and defining the filter attenuation or component values by simulation. Several measurements using a current transformer and a spectrum analyser in peak mode have been performed on the input and output cables of Vicor converters. Input currents were recorded for individual units and also for both units connected in parallel at the input. Representative spectra normalized to 50 ohms are depicted in figure 8. The upper plot shows the current noise of the positive input, while the lower one shows the input common mode current of the Vicor converter V300B12C250AL operating at Vin = 200 V, Vout = 7.5 V and Iout = 20 A.<br />
Figure 8: Current noise at the input of the DC-DC converter (upper plot: I+ in dBuV; lower plot: Icm in dBuV; both versus frequency in Hz)<br />
From these plots it can be seen that the dominant component at low frequency (up to 2 MHz) is the differential mode component, while at high frequency the differential and common mode components have similar magnitudes. Assuming the system has to comply with the European norm EN55022 (fig. 3), the attenuation required in the filter can be estimated from figure 8. Attenuations greater than 60 dB are necessary at low frequencies for the DM component, and noise reductions greater than 40 dB in the high frequency range for both the DM and CM components. It is interesting to point out that if a simple DM filter is used to attenuate the noise spectrum depicted in the upper plot of fig. 8, the result after filtering will be similar to the noise spectrum depicted in the lower plot: the common mode components will remain unaffected and the system will not comply with the standard.<br />
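The attenuation estimate is simple dB arithmetic: the required insertion loss at each frequency is the excess of the measured level over the limit line. A sketch with illustrative numbers (not values read off figures 3 and 8):<br />

```python
# Required filter attenuation = measured conducted-noise level minus the
# standard's limit line, clamped at zero where the unit already complies.

def required_attenuation_db(measured_dbuv, limit_dbuv):
    return {f: max(0.0, measured_dbuv[f] - limit_dbuv[f])
            for f in measured_dbuv}

# Illustrative DM noise levels against an illustrative limit line (dBuV):
measured = {0.15e6: 115.0, 1e6: 100.0, 10e6: 75.0}
limit    = {0.15e6:  56.0, 1e6:  56.0, 10e6: 60.0}
for f, a in sorted(required_attenuation_db(measured, limit).items()):
    print(f"{f / 1e6:6.2f} MHz: need >= {a:.0f} dB")
```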
There exists a vast variety of commercial high frequency EMI filters. Manufacturers specify the insertion loss of these filters for the DM and CM components, covering the frequency range up to 30 MHz. This information allows one to understand the effect of parasitic elements on the attenuation, and it allows simulation models to be defined to estimate the attenuation when the filter operates under different load conditions. Figure 9 shows the current noise after a HF filter and CD = 5 uF are included at the input of the DC-DC converter unit. This plot is based on an estimate of the filter attenuation calculated by simulation.<br />
Figure 9: Current noise at the input after filtering (upper plot: I+ in dBuV; lower plot: Icm in dBuV)<br />
VI. CONCLUSIONS<br />
Guidelines for designing EMI filters, taking into account the required level of attenuation and the stability of the overall system, have been presented. The design is based on simulation models of the converter and filter and on measurements of the noise currents.<br />
VII. REFERENCES<br />
[1] P. Lindman, L. Thorsell, “Applying Distributed Power Modules in Telecom Systems”, IEEE Trans. on Power Electronics, Vol. 11, No. 2, pp. 365-373, March 1996.<br />
[2] B. Cho, F. Lee, “Modeling and Analysis of Spacecraft Power Systems”, IEEE Trans. on Power Electronics, Vol. 3, No. 1, pp. 44-54, January 1988.<br />
[3] J. Kierstead, H. Takai, “Switching Power Supply Technology for ATLAS LAr Calorimeter”, Proc. 6th Workshop on Electronics for LHC Experiments, pp. 380-382, September 2000.<br />
[4] B. Allongue et al., “Design Consideration of Low Voltage DC Power Distribution”, Proc. 6th Workshop on Electronics for LHC Experiments, pp. 388-392, September 2000.<br />
[5] Vicor Corporation, http://www.vicr.com<br />
[6] C. Paul, “Introduction to Electromagnetic Compatibility”, 1992, ISBN 0-471-54927-4.<br />
[7] F. Szoncso, “EMC in High Energy Physics”, http://s.home.cern.ch/s/szoncso/www/EMC/<br />
[8] C. Wildrick et al., “A Method of Defining the Load Impedance Specification for a Stable Distributed Power System”, IEEE Trans. on Power Electronics, Vol. 10, No. 3, pp. 280-285, May 1995.<br />
[9] B. Choi, B. Cho, “Intermediate Line Filter Design to Meet Both Impedance Compatibility and EMI Specifications”, IEEE Trans. on Power Electronics, Vol. 10, No. 5, pp. 583-588, September 1995.<br />
Power Supply and Power Distribution System for the ATLAS Silicon Strip Detectors<br />
J. Bohm (A), V. Cindro (D), L. Eklund (E), S. Gadomski (C,E), E. Gornicki (C), A.A. Grillo (I), J. Grosse-Knetter (E), S. Koperny (B), G. Kramberger (D), A. Macpherson (E), P. Malecki (C), I. Mandic (D), M. Mikuz (D), M. Morrissey (H), H. Pernegger (E), P.W. Philips (H), I. Polak (A), N.A. Smith (F), E. Spencer (I), J. Stastny (A), M. Turala (C), A. Weidberg (G)<br />
ATLAS SCT Collaboration<br />
(A) Academy of Sciences of the Czech Republic, Prague, Czech Republic<br />
(B) Faculty of Physics and Nuclear Techniques of the UMM, Cracow, Poland<br />
(C) Institute of Nuclear Physics, Cracow, Poland<br />
(D) Josef Stefan Institute and Department of Physics, University of Ljubljana, Ljubljana, Slovenia<br />
(E) CERN, Geneva, Switzerland<br />
(F) Department of Physics, Oliver Lodge Laboratory, University of Liverpool, Liverpool, UK<br />
(G) Department of Physics, Oxford University, Oxford, UK<br />
(H) Rutherford Appleton Laboratory, Chilton, Didcot, UK<br />
(I) Institute of Particle Physics, University of California, Santa Cruz, CA, USA<br />
Piotr.Malecki@ifj.edu.pl<br />
Abstract<br />
The Semi-Conductor Tracker of the ATLAS experiment has a modular structure, and the granularity of its power supply system follows the granularity of the detector. This system of 4088 multi-voltage channels, providing power and control signals for the readout electronics as well as bias voltage for the silicon detectors, is described.<br />
Problems and constraints concerning the power distribution lines are also presented. In particular, the optimal choice among the competing requirements on material, maximum voltage drop, space available for services, assembly sequence, etc. is discussed.<br />
I. POWER SUPPLY SYSTEM FOR THE ATLAS SCT<br />
The ATLAS SCT detector[1] consists of 4088 modules, of which 2112 form four barrel cylinder layers and 1976 are mounted on end cap wheels. Single-sided micro-strip detectors are glued back-to-back to form one double-sided module with 1536 strips. The module is equipped with a hybrid, a small board carrying 12 ABCD3T readout chips and the electronics to transfer digital data from and to SCT modules. The present barrel and end cap hybrids are substantially different in many respects but identical from the point of view of power supplies.<br />
A. Basic design principles<br />
The ATLAS SCT readout chips and the electronics for the optical transmission of digital data to the off-detector stations (as well as of timing, trigger and control data to the SCT modules) require several low voltage supplies. In addition, the silicon micro-strip detectors operating in the LHC high radiation environment require a bias voltage which can be regulated in the 0 - 500 V range.<br />
The SCT power supply and power distribution system has been designed according to the following basic requirements:<br />
• the modularity of the power supply system follows the modularity of the detector,<br />
• power supply modules are fully isolated and the voltages in the modules are "floating",<br />
• every detector module is powered by a separate, multi-wire line (tape or cable).<br />
In the context of this article it seems appropriate to underline that, among other consequences for the detector performance, the above design rules allow for maximum flexibility in the selection of an optimal shielding and grounding scheme.<br />
B. Requirements for Low Voltage power<br />
supplies
The present requirements[2] for low voltage power supplies result from several iterations of the readout chip design and from many beam and radiation tests of module prototypes. The main objects of concern are Vcc, the "analog" voltage supplying the analog circuits of the readout chip, and Vdd, the "digital" voltage supplying the digital part of the ABCD3T chip as well as the electronics for the optical links (the DORIC4 and VDC ASICs). These two voltages must provide relatively high currents, of the order of 1 A, and their load may, in addition, vary over a wide range.<br />
The low voltage power supply channel should also provide several low power voltages and control signals: bias voltage for the photodiode, control voltage for the VDC ASIC, voltage (two current sources) for the temperature monitoring, and the module reset and clock select signals. Nominal values for the voltages and signal levels, with typical and maximal loads, are listed in Table 1. The inclusion of these extra power and control signals, as well as of the temperature readout mentioned in section G below, in the low voltage supply channel is to ensure a common reference potential for all electrical signals on the detector module. This minimises the possibility of electrical pick-up or extraneous noise.<br />
Table 1: LV Power Requirements<br />
Name             | Nominal value [V] | Current [mA] | Max. current [mA]<br />
Vcc              | 3.5               | 900          | 1300<br />
Vdd              | 4.0               | 570          | 1300<br />
VCSEL            | 1.6 - 6.6         | 6            | 8<br />
PIN bias         | 10.0              | 0.5          | 1.1<br />
Current source 0 | max. 8.0          | 0.08         |<br />
Current source 1 | max. 8.0          | 0.08         |<br />
RESET            | Vdd/−0.7          | 0.4          |<br />
SELECT           | Vdd/−0.7          | 1.3          |<br />
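As a rough cross-check of these numbers, the typical per-module power implied by the two high-current supplies can be multiplied out over the 4088 modules (the auxiliary lines and optical-link power are neglected here, so this is a lower bound):<br />

```python
# SCT power estimate from the Table 1 typical currents of the two
# high-current supplies, scaled to the full detector.
VCC, ICC = 3.5, 0.900        # analog supply: V, A (typical)
VDD, IDD = 4.0, 0.570        # digital supply: V, A (typical)
N_MODULES = 4088

p_module = VCC * ICC + VDD * IDD          # watts per detector module
p_total_kw = N_MODULES * p_module / 1e3   # kilowatts for the whole SCT
print(f"{p_module:.2f} W per module, {p_total_kw:.1f} kW in total")
```

This lands close to the "about 23 kW" quoted in section II for normal operation; the remainder is plausibly the auxiliary supplies neglected here.<br />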
These main requirements, together with others referring to voltage setting resolutions, voltage and current monitoring accuracy, over-voltage and over-current trip limits and maximum output ripple, lead to the specifications for the LV power supplies, which have recently been fixed [2].<br />
C. HV requirements<br />
The bias voltage power supplies should provide a stable, digitally controlled voltage in the 0 - 500 V range and a precise measurement of the output current in the 40 nA - 5 mA range. One should be able to set the current trip limit individually for each channel, in the range from hundreds of nA to 5 mA, as well as to select one of the predefined voltage ramping rates: 50, 20, 10, 5 V/s. It is also required that the noise level not exceed 40 mV peak-to-peak[3].<br />
D. Basic block characteristics<br />
Several low voltage modules are grouped onto one board equipped with a board micro-controller. The low voltage multi-voltage power module[4] consists of separate floating supplies for the analog and digital voltages. Each module is controlled and monitored by its own micro-controller, which receives commands from and exchanges data with the board micro-controller.<br />
The HV power module has a very similar structure and similar basic components: rectifier, filter, regulator, error amplifier, DAC and ADC.<br />
The two voltages with relatively high currents, Vcc and Vdd, differ from the others by using sense wires. This is necessary because of the considerable resistance of each transmission line, and hence the considerable voltage variation at the module side in response to variations of the current draw. The analog data from the sense wires are digitised and processed by the channel micro-controller for the appropriate output voltage adjustment.<br />
The LV and HV channels are powered from 48 V, 48 kHz square wave generators. Isolation of the individual channels and individual voltages is realised by HF transformers in the power path and by optical couplers on the communication lines.<br />
More details concerning the design and performance of prototypes of the LV power supply modules are given in ref. [4]. The HV power supply module design was presented in the Proceedings of the previous, 6th Workshop on Electronics for the LHC Experiments [5].<br />
E. LV/HV integration<br />
The low and high voltage power supplies together form one system. Multi-channel LV and HV 6U cards are integrated in a common 19’’ EURO crate equipped with a custom back plane, a crate controller, an interlock card, and a common power pack. One crate will house 48 SCT power supply multi-voltage channels, mounted on 12 LV cards of 4 channels each and on 6 HV cards, each containing 8 channels.<br />
All cards in one crate are supplied from the crate's 1.6 kW power pack, which provides the 48 V, 48 kHz square wave for the HF transformers and a DC supply for the commercial crate controller as well as for all card controllers.<br />
All SCT LV power supply cards are identical; similarly, there is full interchangeability between the HV power supply cards. The card address, and consequently the channel address, is determined from the card position in the crate. The custom back plane design predefines the positions for the LV and HV cards, and the mechanical construction prevents card misplacement.<br />
It has been decided to use a commercial crate controller which communicates within the crate with the eighteen card controllers via a parallel 8-bit bus. This communication is serviced by a simple and efficient custom-made protocol.<br />
The crate controller is equipped with a CAN bus interface for communication with the higher levels of the Detector Control System.<br />
The custom back plane carries on its back side 48 special connectors for the multi-wire cables which connect the power supply modules with the first patch panel (PP3) on the way to the detector modules. These connectors (CONEC 17W5) have five thick pins, allowing connection of four high cross section wires and of another wire with HV insulation. Twelve thin pins serve the rest of the low current lines (of which one is reserved for the drain wire of the cable).<br />
F. Location of power supplies<br />
The choice of the power supplies' location has important implications for the power distribution system, discussed in the next section, as well as for the power supply design and specifications. For example, the maximum power path length determines the maximum voltage drop, and hence the maximum output voltage which a power supply must provide to reach the nominal value at the detector side. Anticipating some results of the following discussion, it is worth mentioning here that the Vcc supply should be designed for a maximum output voltage of 8.75 V, and Vdd for 9.33 V, to be able to provide the nominal values on the detector module side in the case of the maximum voltage drop (with a 1 V margin included)[2].<br />
The standard location for the off-detector electronics is in the cavern (named USA15) next to the detector hall. For such a location the length of the power path will be in the range of 100 - 130 m. The SCT plans to locate 50% of its power supply crates in another cavern, on the other side of the ATLAS detector (named US15). This will shorten the path length by 30 - 40 m for that part of the modules, making the longest path about 100 m.<br />
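The headroom arithmetic behind these maxima can be sketched as follows; the 8.75 V and 9.33 V figures and the 1 V margin are from the text, while the total worst-case drops are back-derived here purely for illustration:<br />

```python
# Maximum supply output voltage = nominal module voltage + worst-case
# round-trip cable drop + safety margin.

def required_supply_v(v_nominal, total_drop, margin=1.0):
    return v_nominal + total_drop + margin

# Back-derived illustrative worst-case total drops (V):
print(f"Vcc supply: {required_supply_v(3.5, 4.25):.2f} V")   # spec: 8.75 V
print(f"Vdd supply: {required_supply_v(4.0, 4.33):.2f} V")   # spec: 9.33 V
```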
II. POWER DISTRIBUTION FOR THE ATLAS SCT<br />
About 23 kW are needed for the normal operation of the ATLAS SCT. The delivery of that power to the inner part of the Inner Detector cannot be done without extra material, additional power dissipation, space for services, cost, etc. In the following short review we concentrate on the group of requirements which mostly concern the high current lines for the Vcc and Vdd voltages. Grounding and shielding problems, which are very important for a system of thousands of wires distributed over large surfaces, are discussed elsewhere [6].<br />
G. List of lines<br />
In the following list, the first four lines out of all seventeen should have a considerably larger cross section to conduct high current, 1.3 A maximum. All other lines can in practice use as thin a conductor as technologically possible.<br />
1. Vdd − digital voltage<br />
2. DGND − digital ground<br />
3. Vcc − analog voltage<br />
4. AGND − analog ground<br />
5. HV − bias voltage<br />
6. HVGND − bias ground<br />
7. Vddsense<br />
8. DGNDsense<br />
9. Vccsense<br />
10. AGNDsense<br />
11. VCSEL − driver<br />
12. SELECT − clock redundancy<br />
13. RESET − clock<br />
14. PIN − diode bias<br />
15. TEMP1 − sensor<br />
16. TEMP2 − sensor<br />
17. drain wire<br />
Several voltages (VCSEL, SELECT, RESET, PIN, TEMP1, TEMP2) use the digital ground, DGND, as the common return. In all present tests of SCT module prototypes the analog and digital grounds are directly connected, so the AGNDsense and DGNDsense wires then sense the same point. It has nevertheless been agreed to keep both lines, as the present laboratory practice may not be continued in the final installation.<br />
H. Conflicting requirements<br />
The design of the power delivery system for a detector located in the innermost part of the ATLAS experimental setup has to satisfy several conflicting requirements. One should minimise material, voltage drop, power dissipation and cost, while observing the rules concerning the radiation hardness and flame resistance of the materials used. The design is also strongly influenced by the limited space for services, as well as by the foreseen assembly sequence.<br />
The optimisation process has different priorities in different regions of the detector. Consequently, it has been decided to divide the power path between the SCT modules and the power supplies into four parts.<br />
I. 4−fold way<br />
The final design of the power distribution system for the ATLAS SCT is in progress and is closely related to the process of integration of all services.<br />
1) Low mass tapes<br />
In the innermost part, the material introduced by cables seems to be the most critical parameter. This first part, from the detector modules to the first patch panel (PPB1 for the barrel, PPF1 for end cap modules) located at the cryostat wall, is served by low mass tapes [7]. These tapes are made from a 25 micron thick Kapton and 25 micron glue substrate with 50 micron aluminium conductors, covered by another Kapton and glue layer of 25 micron. The width of the conductor lines, as well as the space between conductors, can (for technological reasons) be made in steps of 0.5 mm. The four critical lines (for Vcc and Vdd) are chosen to be 4.5 mm wide, while all other lines have the minimal allowable width of 0.5 mm. The length of these tapes is in the range 0.7 - 1.6 m for barrel modules and reaches about 3 m for some of the end cap modules. A contribution to the material budget is often characterised by calculating the cumulative amount of material in a certain region of the detector and "diluting" it over some characteristic surface. An example for the 6 tapes serving one barrel half stave, averaged over the surface of one module, shows a material contribution of about 0.24% of Xo.<br />
The total maximum power dissipated by the low mass tapes is estimated at about 3 kW. The maximum voltage drop for some barrel tapes reaches 0.6 V, and for end cap tapes 1 V.<br />
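The "dilute over a surface" bookkeeping can be sketched for one tape stack; the layer thicknesses follow the text, while the radiation lengths and the assumption of full-width aluminium coverage are illustrative:<br />

```python
# Fractional radiation length X/X0 of the low-mass tape cross section.
KAPTON_GLUE_UM = 25 + 25 + 25 + 25  # substrate + glue + cover + glue (microns)
AL_UM = 50                          # aluminium conductor plane (microns)
X0_KAPTON_CM = 28.6                 # assumed radiation length of Kapton/glue
X0_AL_CM = 8.9                      # radiation length of aluminium

def tape_fraction_x0(al_coverage=1.0):
    """X/X0 of one tape; al_coverage < 1 models partial conductor fill."""
    kapton = (KAPTON_GLUE_UM * 1e-4) / X0_KAPTON_CM   # microns -> cm
    al = al_coverage * (AL_UM * 1e-4) / X0_AL_CM
    return kapton + al

print(f"one tape, full Al coverage: {100 * tape_fraction_x0():.3f}% of X0")
```

Stacking several tapes per half stave and averaging over the module surface, as the text does, then yields numbers at the 0.2 - 0.3% X0 scale.<br />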
2) "Very thin" cables<br />
It is estimated[8] that the distance from PPB1 (barrel) to the next patch panel PP2 should not exceed 9 m, and the corresponding one for the end cap (PPF1 - PP2) 5 m. The final numbers depend on the PP2 location and the details of routing, subject to the decision of the ID coordination.<br />
We had originally planned to use Al on Kapton tapes also for this part, but with the conductor thickness increased to 100 micron. As the maximum allowable voltage drop became the most critical parameter, it was decided to use copper conductors.<br />
The maximum allowable voltage drop requires special attention because, if it exceeds a certain limit, then in case of a sudden loss of load the safe limit of 5.5 V for the readout chip is exceeded.<br />
It is planned to use a multi-wire round cable of 6 mm outer diameter, with a thin Kapton insulation and with the four high current lines of 0.6 mm². With this choice the maximum voltage drop will not exceed 1.6 V for Vcc or 1.4 V for Vdd, which is safe provided that voltage limiters are installed on the PP2. These limiters seem to be unavoidable, since another 2 V drop still has to be considered for the rest of the power path.<br />
3) Thin conventional cables<br />
For the part which extends from patch panel PP2 to PP3, of a length of about 20 m, it is planned to use a multi-wire "conventional" cable[8] with a somewhat complicated geometry, taking into account the requirement of twisting the wires in groups belonging to the same voltage (e.g. Vcc, AGND, Vccsense and AGNDsense). With this requirement the cable outer diameter is about 12 mm, with the cross section of the four critical lines equal to 1 mm². The maximum voltage drop for this part is estimated at 1.3 V.<br />
4) Thick conventional cables<br />
The distance from PP3 to the power supply crates depends on the final location of the PS racks. The ATLAS SCT is considering the use of two locations, in caverns on both sides of the detector hall. That will considerably shorten the path for the thick conventional cables, and hence reduce power dissipation, cost, etc. The multi-wire cable[8], with a geometry similar to the thin conventional cable, will have the four critical lines with a 4 mm² cross section and an outer diameter of about 20 mm. The estimated maximum voltage drop is about 1 V.<br />
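The quoted per-segment drops are ordinary R = rho·L/A bookkeeping; a sketch for the copper parts at the 1.3 A maximum (segment lengths and cross sections follow the text, the resistivity is the handbook room-temperature value, and the quoted maxima also include allowances not modelled here):<br />

```python
# Round-trip ohmic drop of a supply/return conductor pair.
RHO_CU = 1.72e-8    # ohm*m, copper at room temperature
I_MAX = 1.3         # A, maximum high-current line load

def pair_drop_v(length_m, area_mm2, current=I_MAX, rho=RHO_CU):
    return current * 2.0 * rho * length_m / (area_mm2 * 1e-6)

SEGMENTS = [                          # (name, length m, cross section mm^2)
    ("very thin (PPB1-PP2)",  9.0, 0.6),
    ("thin (PP2-PP3)",       20.0, 1.0),
    ("thick (PP3-PS)",      100.0, 4.0),
]
total = 0.0
for name, length_m, area in SEGMENTS:
    v = pair_drop_v(length_m, area)
    total += v
    print(f"{name:22s}: {v:5.2f} V")
print(f"copper segments, total: {total:5.2f} V")
```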
J. Final remarks<br />
The power distribution system for the ATLAS SCT is in the development state. Several elements require final design and tests. In particular:<br />
• the final construction and location of the patch panels, as well as the final details of the routing of cables. This depends strongly on the overall process of integration with the other subsystems and involves some coordination at the level of the Inner Detector;<br />
• the full power dissipation by cables in all four regions is estimated at 10 kW (for nominal currents and maximal cable lengths)[8]. To satisfy the "rule of thermal neutrality" of each subdetector, a solution has to be found for cable cooling in some regions;<br />
• the total voltage drop along the power lines considerably exceeds the safety limit for the readout ASICs, and voltage limiters have to be installed in the region of PP2. An appropriate voltage limiter circuit has been designed for PP2 and is now undergoing system and radiation testing;<br />
• PP3 is being designed with common-mode inductors to filter unwanted pick-up from the long cable runs from the power supplies. Prototypes have been tested in the system test and have been shown to be quite beneficial.<br />
III. REFERENCES<br />
1. ATLAS Inner Detector TDR, Volume 2, CERN/LHCC/97-17, p. 385.<br />
2. J. Bohm, SCT Week, Prague, 25-29 June 2001, http://atlas.web.cern.ch/Atlas/GROUPS/INNER_DETECTOR/SCT/sct_meetings.html<br />
3. http://www.ifj.edu.pl/ATLAS/sct/scthv/<br />
4. http://www-hep.fzu.cz/Atlas/WorkingGroups/Projects/MSGC.html<br />
5. P. Malecki, Multichannel System of Fully Isolated HV Power Supplies for Silicon Strip Detectors, 6th Workshop on Electronics for LHC Experiments, CERN/LHCC/2000-041, p. 376.<br />
6. N. Spencer, ATLAS SCT/Pixel Grounding and Shielding Note, Nov. 22, 1999, UCSC, EDMS Id: 108383, ATL-IC-EN-0004 v.1.<br />
7. http://merlot.ijs.si/~cindro/low_mass.html and http://www-f9.ijs.si/atlas/<br />
8. H. Pernegger, Services, in: http://perneg.home.cern.ch/perneg/
Conductive Cooling of SDD and SSD Front-End Chips for ALICE<br />
A. van den Brink 1, S. Coli 2, F. Daudo 2, G. Feofilov 3, O. Godisov 4, G. Giraudo 2, S. Igolkin 4, P. Kuijer 5, G.J. Nooren 5, A. Swichev 4, F. Tosello 2<br />
1 Utrecht University, Netherlands<br />
2 INFN, Torino, Italy<br />
3 Institute for Physics of St. Petersburg State University, Ulyanovskaya 1, 198904, Petrodvorets, St. Petersburg, Russia, e-mail: feofilov@hiex.niif.spb.su<br />
4 CKBM, St. Petersburg, Russia<br />
5 NIKHEF, Amsterdam, Netherlands<br />
Abstract<br />
We present analysis, technology developments and test<br />
results of the heat drain system of the SDD and SSD frontend<br />
electronics for the ALICE Inner Tracker System (ITS).<br />
Application of super thermoconductive carbon fibre thin<br />
plates provides a practical solution for the development<br />
of miniature motherboards for the FEE chips situated inside<br />
the sensitive ITS volume. Unidirectional carbon fibre<br />
motherboards of 160-300 micron thickness ensure the<br />
mounting of the FEE chips and an efficient heat sink to<br />
the cooling arteries. Thermal conductivity up to 1.3 times<br />
better than copper is achieved while preserving a negligible<br />
multiple scattering contribution by the material (less<br />
than 0.15 percent of X/Xo).<br />
I. INTRODUCTION<br />
The state-of-the-art front-end electronics (FEE) for<br />
the coordinate-sensitive Si detectors of the ALICE Inner<br />
Tracking System (ITS) [1] at the LHC is situated inside the<br />
sensitive region. Therefore the heat drain of about 7 kW<br />
of power is to be done under the stringent requirement of<br />
minimisation of any materials placed in this area containing<br />
6 layers of coordinate-sensitive detectors. A maximum<br />
of 0.3% of X/Xo per layer is allowed for all services including<br />
detectors, mechanics support, cables, cooling and electronics<br />
units. Analysis of various possible cooling schemes<br />
was performed earlier as a starting point of the general<br />
ITS services design [2], [3]. The solution of the local heat<br />
drain problem from the FEE to the cooling ducts was identified<br />
as the key point in the thermal performance of the whole<br />
ALICE ITS. The application of super thermoconductive<br />
carbon fibre plastics was proposed in order to get the<br />
most efficient integration of the extremely lightweight FEE<br />
motherboards and local heat sink units. The implementation<br />
of these ideas in a single unit called "the heat bridge"<br />
required the development of a new technology for the<br />
manufacturing of thin unidirectional carbon fibre plates.<br />
This technology was successfully developed and is being improved<br />
further at present. We describe below the results of the development<br />
and tests of the heat bridges for SDD and SSD<br />
front-end hybrid electronics.<br />
II. CARBON FIBRE MOTHERBOARDS<br />
Novel carbon fibre compound motherboards of efficient<br />
thermal conductivity (heat bridges) were proposed for the<br />
ITS FEE chips after the analysis of the existing materials,<br />
see Table 1.<br />
Table 1: Parameters of different materials for thermoconductive<br />
motherboards<br />
Material | Young's modulus E [GPa] | Thermal conductivity [W/mK] | Rad. length X0 [cm]<br />
Copper | 125 | 380 | 1.43<br />
AlN | - | 150 | 9<br />
CF comp. | 450 | 300-500 | 18<br />
The application of super thermoconductive fibre Thornell<br />
KX1100 for the manufacturing of very thin mechanically<br />
stable plates with good thermal properties used both<br />
for the FEE support and for efficient heat drain to the<br />
cooling arteries was found to be the optimal solution to<br />
the problem. The conductivity along the fibre is about<br />
1100 W/mK, while the mechanical stiffness is at the level<br />
of steel (E = 450 GPa). Carbon fibres (CF) have<br />
a diameter of about 80 microns and are packed in unidirectional<br />
flat sheets (prepregs) impregnated with a binding<br />
compound (about 35% content for the latter). The<br />
thermal expansion coefficient of the carbon fibre based<br />
compounds is very low (close to zero), resulting in mechanically<br />
stable devices. Other properties of CF compounds<br />
measured in the present application are: density<br />
ρ = 2.2 g/cm³, electrical resistance 5 Ohm/m, thermal<br />
conductivity λ = 500 W/mK along the fibre, λ = 40 W/mK<br />
perpendicular to the fibre.<br />
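As a rough cross-check of the figures above, the merit of a thin plate as a heat drain can be compared through the product λ·t (conductance per unit width, from the 1-D Fourier law), together with its radiation-length penalty. A minimal sketch; the plate thicknesses are taken from the bridge samples described below and are assumptions for illustration:<br />

```python
# Compare heat conduction of thin plates: for a plate of thickness t,
# the conductance per unit width is lambda * t (1-D Fourier law).
# Material values are from the text; the thicknesses are illustrative.

def conductance_per_width(lam_w_per_mk: float, t_m: float) -> float:
    """lambda * t: heat conducted per unit width per unit gradient."""
    return lam_w_per_mk * t_m

cu = conductance_per_width(380.0, 0.47e-3)   # copper bridge, 0.47 mm thick
cf = conductance_per_width(500.0, 0.32e-3)   # CF bridge along fibre, 0.32 mm

print(f"Cu: {cu:.3f}  CF: {cf:.3f}  CF/Cu: {cf/cu:.2f}")

# Radiation-length penalty of the same plates (X0 values from Table 1:
# 1.43 cm for Cu, 18 cm for the CF composite).
x_over_x0_cu = 0.47e-3 / 1.43e-2
x_over_x0_cf = 0.32e-3 / 18e-2
print(f"X/X0: Cu {x_over_x0_cu:.4f}, CF {x_over_x0_cf:.4f}")
```

With these assumed thicknesses the CF plate conducts about 0.9 of the heat of the copper plate while contributing roughly 18 times less X/X0, which is the trade-off motivating the heat bridges.<br />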
The technological problems of making flat thin carbon<br />
fibre composite plates (150-330 microns thickness, dimensions<br />
up to 10 × 10 cm²) were studied and successfully<br />
overcome[4], providing various options for the manufacturing<br />
of heat drain devices in line with the technical requirements<br />
of the Alice experiment. The programme included<br />
the design and optimization of the unidirectional<br />
fibre plates, ANSYS simulations and test analysis and the<br />
optimisation of the technology for the baking of plates<br />
with the surface quality and flatness, suitable for further<br />
chip mounting and bonding of microcables.<br />
Figure 1: Photo of the SSD heat bridge prepared for studying<br />
the temperature distribution along the bridge. Thickness 320<br />
microns, length 70 mm. Thermal conductivity 300 W/mK. The<br />
heat bridge is mounted with two triangular carbon fiber cooling<br />
arteries. The heater (representing dummy chips) is glued<br />
below the bridge.<br />
Various types of surface coatings were also developed<br />
and tested: pure carbon fibre surfaces and insulating Al2O3<br />
ceramic coatings.<br />
The bridges are used as hybrid motherboards for the<br />
FEE and ensure mounting of the chips, bonding of the microcables,<br />
and fixation of the assembled modules to the cooling<br />
artery. Miniature heat transfer clips for the two types of<br />
bridges are also foreseen. Each bridge is formed by unidirectional<br />
layers of super thermoconductive carbon fibre.<br />
The minimum number of CF layers that can be applied<br />
is 2, resulting in a minimum thickness of 150 µm for the<br />
board.<br />
The flat carbon unidirectional fibre plates were manufactured<br />
ranging in thickness from 150 to 330 µm<br />
in order to get data on the new material conductivity and<br />
other parameters.<br />
A summary of the different configurations of heat<br />
bridges produced is presented in Table 2. Different numbers<br />
of CF layers were used in the manufacturing of these test<br />
samples. Also some different orientations of the carbon<br />
fibres were tested, along with various tubes and tube-to-bridge<br />
contacts.<br />
The experimental setup used for the multichannel temperature<br />
map cooling studies is described in [4].<br />
In Fig. 1 the CF heat bridge (coated with Al2O3), connected<br />
to two triangular shape cooling tubes in preparation<br />
for heat drain studies, is shown. The front-end<br />
electronics was simulated by miniature dummy chips producing<br />
up to 2 W of power. Five uniformly spaced temperature<br />
sensors along the 70 mm heat bridges were used<br />
in the case of single-ended cooling. Due to the symmetry,<br />
we used only one half of the carbon fibre bridge in our<br />
data sampling in the case of one central tube or two cooling<br />
tubes spaced by 45 mm. Tests were compared with a copper<br />
bridge of the same geometry (length = 70 mm, width = 10.5 mm).<br />
Table 2: Types of different heat bridges and cooling arteries used in the first studies (see Fig. 2).<br />
No. | Heat bridge material, thickness [mm] | CF layers: longitudinal (L) + perpendicular (P) | Comments<br />
1 | Copper, 0.47 | - | one sided cooling, soldered contact<br />
2 | CF, 0.56 | 4L-3P | one sided cooling<br />
3 | CF, 0.32 | 2L-4P | central tube location, rectangular<br />
4 | CF, 0.32 | 2L-4P | central tube location, circular<br />
5 | CF, 0.32 | 2L-4P | central tube location<br />
6 | CF, 0.32 | 4L-3P | 2 side circular tubes<br />
7 | CF, 0.56 | 4L-3P | 2 side triangular tubes<br />
8 | CF, 0.56 | 4L-3P | 2 side triangular tubes<br />
9 | CF, 0.37 | 4L | 2 side triangular tubes<br />
[Figure 2 plot: Tx − Tin (°C) versus X (mm) for the tested heat bridges]<br />
Figure 2: Temperature distributions along the tested heat<br />
bridges obtained under 2 W heat load (except samples No.1<br />
and 2 tested under 1 W load). Copper bridge No. 1 (see Table<br />
2) has an ideal (soldered) contact with the cooling tube fixed<br />
to one end of the bridge.<br />
A summary of temperature distributions measured<br />
along the length of the different heat bridges is shown<br />
in Fig.2. One can see that the performance of the carbon<br />
fibre bridges is better than that of the copper bridge<br />
(e.g. dataset (No.9) Fig.2 for a completely unidirectional<br />
orientation of fibres in the heat bridge).<br />
1. The mean value of thermal conductivity for samples<br />
No. 2, 6, 7, 8, 9 was measured to be about 300-310 W/mK<br />
(i.e. about 0.78-0.8 of the value for Cu, λ = 380 W/mK).<br />
2. The mean value of thermal conductivity for samples<br />
No. 3, 4, 5 (v-shaped, variable cross-section) is about<br />
470-505 W/mK (i.e. about 1.2-1.3 of the Cu value).<br />
3. The contact temperature resistance for carbon<br />
fibre bridges connected to the cooling artery is<br />
about 4.4-5.8 °C per 1 W of heating power.<br />
4. Maximum temperature gradients of 0.6-0.7 °C<br />
were obtained along the 70 mm carbon fibre bridge with<br />
two cooling channels.<br />
These data on the properties of different CF compounds<br />
were used for the ANSYS simulations of the temperature<br />
maps and for the further SDD and SSD cooling<br />
scheme and technology optimization.<br />
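The measured gradients can be related to the conductivities through a closed-form 1-D estimate: a strip of cross-section A, uniformly heated with total power P and cooled at both ends a distance L apart, develops a peak temperature rise ΔT = P·L/(8λA); with cooling at one end only the rise is ΔT = P·L/(2λA). A sketch with illustrative numbers (a bare plate without clips or contact terms, not the exact test configuration):<br />

```python
# Closed-form 1-D conduction estimates for a uniformly heated strip.
# Geometry values below are illustrative assumptions, not the measured
# test setup (the real bridges add clips and contact resistances).

def delta_t_both_ends(p_w: float, l_m: float, lam: float, area_m2: float) -> float:
    """Peak rise of a uniformly heated strip cooled at both ends: P*L/(8*lam*A)."""
    return p_w * l_m / (8.0 * lam * area_m2)

def delta_t_single_end(p_w: float, l_m: float, lam: float, area_m2: float) -> float:
    """Uniformly heated strip cooled at one end only: P*L/(2*lam*A)."""
    return p_w * l_m / (2.0 * lam * area_m2)

# Assumed cross-section: 10.5 mm wide, 0.32 mm thick, lambda = 500 W/mK
area = 10.5e-3 * 0.32e-3
print(delta_t_both_ends(2.0, 45e-3, 500.0, area))   # 2 W, tubes 45 mm apart
print(delta_t_single_end(1.0, 70e-3, 500.0, area))  # 1 W, one-sided cooling
```

The strong difference between the two formulas (a factor of 4 at equal L, and more when the tube spacing is shorter than the bridge) illustrates why the two-channel bridges show much smaller gradients than the single-end-cooled ones.<br />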
III. SURFACE QUALITY TESTS<br />
The carbon fibre heat bridge surface quality tests were<br />
done for three batches of CF plates consisting of 12, 19<br />
and 19 samples. Plates of about 170 microns thickness<br />
had dimensions of 72 mm × 6.5 mm and were designed<br />
as SSD heat bridges. The<br />
roughness of the surface was measured and found to be<br />
better than 10 microns for pure carbon fibre composite<br />
bridges. This parameter was found to be satisfactory and<br />
enabled this technology to be approved for application in miniature<br />
motherboard manufacturing for the ALICE SDD and<br />
SSD FEE chips.<br />
IV. ANSYS SIMULATIONS<br />
ANSYS simulations were done to optimise the heat<br />
bridge layout, in particular the direction of fibres and the<br />
location of the heat transfer clip.<br />
Figure 3: ANSYS simulations of the carbon fibre SDD heat<br />
bridge temperature map for the unidirectional fibre orientation.<br />
The position of the thermoconductive clip is optimised.<br />
Figure 4: ANSYS simulations of the carbon fibre SDD<br />
heat bridge temperature map for the uniform conductivity<br />
180W/(mK)<br />
Some examples of simulations are represented in Figures<br />
3 and 4 (only one half of the hybrid is shown due<br />
to the heat dissipation symmetry). The geometry of the<br />
SDD heat bridge used was in line with the requirement to<br />
place 4 PASCAL and 4 AMBRA SDD FEE chips serving one<br />
half of an SDD (see Figure 5). Electronics chips were placed<br />
in groups of 4 + 4 chips on one side of the heat bridge. The<br />
total power load for one bridge was assumed to be 1.775 W,<br />
with the heat release proportional to the chip area (8<br />
chips of 8×8 mm and 8 chips of 5×5 mm were used). The following<br />
conductivity values were used in calculations for<br />
carbon fibre composite:<br />
for the panel, in the longitudinal direction 300 W/(mK),<br />
in the transverse direction 0.7 W/(mK);<br />
for the clip, in the longitudinal direction 150 W/(mK),<br />
in the transverse direction 40.0 W/(mK).<br />
(Longitudinal here means the direction of heat drain to<br />
the cooling artery, i.e. perpendicular to the cooling tube.)<br />
Cooling both by water and by a neutral ozone-safe freon,<br />
supplied at 13.0 °C as the coolant liquid, was studied. The cooling<br />
artery is a stainless steel tube of 2 mm diameter. The<br />
influence of natural convection was neglected. The nominal<br />
value of the heat transfer coefficient from liquid to the<br />
tube wall was assumed to be equal to 3000 W/(m² · K). Calculations<br />
were done using the ANSYS code. Results of these<br />
calculations are presented in Figures 3 and 4. Tests of the<br />
unidirectional fibre heat bridge (Fig. 3) show that an operational<br />
temperature at the surface of the FEE chip at<br />
the level of 25 °C is obtained. This value is close to the<br />
requirements for the SDD FEE.<br />
A further decrease of the operational temperature<br />
down to 20 °C can be obtained by adding conductivity<br />
in the transverse direction, see Fig. 4.<br />
The following conclusions can be drawn from ANSYS<br />
simulations:<br />
1. The maximum temperature drop between the<br />
coolant and the chips of ITS layers 3-4 is about 12 °C<br />
when using the optimal position of the cooling tube and<br />
the unidirectional fibre orientation.<br />
2. The value of the heat transfer coefficient from<br />
the coolant to the tube wall is an essential factor for the<br />
temperature level (this factor depends on the liquid flow<br />
regime).<br />
3. A noticeable decrease of temperature gradients for<br />
the SDD heat bridge is possible by using composites with<br />
higher transverse heat conductivity.<br />
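Conclusion 2 can be made quantitative with a series thermal-resistance estimate: the chip-to-coolant drop is P·(R_cond + R_contact + R_conv), where only the convective term R_conv = 1/(h·A_wet) depends on the flow regime. A minimal sketch; the wetted tube length and the conduction resistance are assumptions for illustration, while the contact value echoes the 4.4-5.8 °C/W measured in Section II:<br />

```python
import math

# Series thermal-resistance estimate from chip to coolant.
# Geometry and the conduction/contact resistances are illustrative.
P = 1.775                # W per bridge (load assumed in the simulations)
h = 3000.0               # W/(m^2 K), nominal liquid-to-wall coefficient
tube_d = 2e-3            # m, stainless steel tube diameter (from the text)
wetted_len = 20e-3       # m, ASSUMED tube length serving one bridge
A_wet = math.pi * tube_d * wetted_len

R_conv = 1.0 / (h * A_wet)   # convection term set by the flow regime, K/W
R_contact = 5.0              # K/W, clip/contact (4.4-5.8 measured per W)
R_cond = 1.0                 # K/W, ASSUMED conduction through the panel

dT = P * (R_conv + R_contact + R_cond)
print(f"R_conv = {R_conv:.2f} K/W, chip-to-coolant drop = {dT:.1f} K")
```

With these assumed numbers the convection term contributes a few K/W, i.e. a sizeable fraction of the total drop, which is why the flow-dependent coefficient h matters as much as the composite conductivities.<br />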
V. SDD HEAT BRIDGES<br />
A side view of the heat bridge for the drift detector<br />
front-end electronics (Ambra and Pascal chips) is shown<br />
in Figure 5. It consists of 2 main elements: (i) the cooling<br />
panel; (ii) the heat conducting clip on the panel.<br />
Figure 5: SDD CF motherboard with chips, side view: CF<br />
- carbon fibre motherboard; C1, C2 - FEE SDD chips (AMBRA<br />
and PASCAL); D1 = 2 mm, D = 1.9 mm; H = 240 microns. Length of<br />
the CF SDD board = 65 mm, width = 20 mm.<br />
The cooling panel is a flat plate 20x65 mm, 0.18mm<br />
thick. It is made of heat-conducting C.F. THORNEL and<br />
of 1 additional layer of ordinary carbon fiber. The panel’s<br />
area is large enough to place the chips of the primary electronics.<br />
The measured value of the heat conductivity in<br />
the direction of the fibre is λ=400 W/mK. The panel’s clip<br />
is also made of THORNEL. The orientation of the layers<br />
is ±45°. The heat conductivity is λ = 150 W/mK. The<br />
panel's clip plays the following roles in this structure:<br />
(i) fixation of the heat bridge on the cooling artery;<br />
(ii) it is an element of the heat transfer from the cooling<br />
panel to the cooling artery;<br />
(iii) it is an element of stiffness for the heat bridge .<br />
Tests were performed for the SDD carbon fibre composite<br />
hybrid with dummy electronics under a heat load of 2 W;<br />
temperature gradients reached 8 °C (this corresponds to an<br />
operational temperature of approximately 22 °C at the<br />
surface of the chip when using a cooling liquid at 14 °C).<br />
VI. SSD HEAT BRIDGES<br />
There are to be 6 front-end electronics chips per<br />
SSD detector side, each with an area of 6×8 mm² and 300<br />
micron thickness. The new SSD HAL25 chips are expected<br />
to dissipate about 0.3 W of power per SSD hybrid.<br />
The peculiarity of the double sided silicon-strip detectors<br />
(ALICE SSD) is the orientation of the coordinate-sensitive<br />
elements (strips) almost parallel to the ITS axis.<br />
Therefore the readout electronics is situated perpendicular<br />
to the latter. The heat drain from this electronics<br />
is to be solved by conductive cooling,<br />
using the available highly thermoconductive materials to<br />
drain heat to the longitudinal cooling arteries.<br />
The most recent option for the heat bridge has dimensions<br />
73 × 6.5 × 0.16 mm³. It has a unidirectional orientation<br />
of 2 layers of THORNELL providing efficient heat<br />
transfer supported by an additional thin (20µm) carbon<br />
fiber layer with a transverse orientation.<br />
The prototype SSD heat bridges were successfully<br />
tested for mounting of chips and for the microcable bonding<br />
technology (see Fig. 6).<br />
Figure 6: Photo of the SDD detector for the ALICE ITS equipped with real CF hybrids with chips and microcables mounted.<br />
VII. CONCLUSIONS<br />
A novel technology for the design and manufacturing<br />
of carbon fibre composite materials with the required thermal<br />
and mechanical characteristics has been developed and tested<br />
for application as miniature motherboards for microelectronics.<br />
Acknowledgements: the authors are grateful to<br />
L.Abramova, V.Brzhezinski and M.Van’chkova from<br />
Mendeleev Institute for Metrology (St.Petersburg) for<br />
the heat bridge surface quality checks. This work was<br />
partially supported for the Russian participants by ISTC<br />
grants No. 345 and No. 1666, and by the Ministry of Higher<br />
Education of the Russian Federation grant No. 520.<br />
References<br />
[1] ALICE ITS Technical Design Report,<br />
CERN/LHCC,1999.<br />
[2] G.A. Feofilov et al., 1994, ”Inner Tracking System<br />
for ALICE: Conceptual Design of Mechanics, Cooling<br />
and Alignment”, CERN, Workshop on Advanced<br />
Materials for High Precision Detectors, Sept. 1994,<br />
73-81.<br />
[3] O.N. Godisov et al., "Concept of the Cooling System<br />
of the ITS for ALICE", Proceedings of the<br />
1st Workshop on Electronics and Detector Cooling<br />
(WELDEC), Lausanne, Oct. 1994.<br />
[4] "ITS-CMA for ALICE: Preliminary Technical<br />
Design Report", ISTC No. 345 Final Report, Ed. G. Feofilov,<br />
St. Petersburg, February 1999.<br />
(http://www.cern.ch/Alice/projects.html)
Sorting Devices for the CSC Muon Trigger System at CMS<br />
Abstract<br />
The electronics system of the Cathode Strip<br />
Chamber (CSC) muon detector at the CMS<br />
experiment needs to acquire precise muon<br />
position and timing information and generate<br />
muon trigger primitives for the Level-1 trigger<br />
system. CSC trigger primitives (called Local<br />
Charged Tracks, LCT) are formed by anode<br />
(ALCT) and cathode (CLCT) cards [1]. ALCT<br />
cards are mounted on chambers, while CLCT<br />
cards are combined with the Trigger<br />
Motherboards (TMB) that perform a time<br />
coincidence of ALCT and CLCT. Every<br />
combined CLCT/TMB card (one per chamber)<br />
transmits two best combined muon tags to the<br />
Muon Port Card (MPC) which serves one CSC<br />
sector (8 or 9 chambers). The MPC selects the<br />
three best muons out of 18 possible and sends<br />
them over 100 m of optical cable to the Track<br />
Finder (TF) crate residing in the underground<br />
counting room. In the current electronics layout<br />
the TF crate has 12 Sector Processors (SP), each<br />
of which receives the optical streams from<br />
several MPC. The SP measures the transverse<br />
momentum, pseudo-rapidity and azimuthal angle<br />
of each muon and sends its data (up to 3 muons<br />
each) to the CSC Muon Sorter (MS) that resides<br />
in the middle of the TF crate. The MS selects<br />
the four best muons out of 36 possible and<br />
transmits them to Global Muon Trigger (GMT)<br />
crate for further processing.<br />
Data sorting is the primary task of two<br />
devices in the CSC trigger chain: the MPC (“3<br />
best muons out of 18”) and MS (“4 best muons<br />
out of 36”). The total data reduction factor is 54.<br />
We propose a common approach to<br />
implementation of sorting logic and board<br />
construction for both the MPC and MS. They<br />
will be based on single-chip programmable<br />
logic devices that receive data from the<br />
previous trigger level, sort it and transmit the<br />
sorting result to the next trigger level.<br />
Programmable chips will incorporate input and<br />
output FIFO buffers that would represent all<br />
possible inputs and outputs for testing and<br />
Matveev M., Padley P.<br />
Rice University, Houston, TX 77005 USA<br />
matveev@physics.rice.edu<br />
debugging purposes. Finally we will use a<br />
common sorting scheme [2] for both designs.<br />
The MPC and MS functionality as well as the<br />
first results of logic simulation and latency<br />
estimates are presented.<br />
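In software terms, the common sorting scheme reduces to selecting the N highest-rank objects from M candidates, keyed on a 4-bit pattern in the MPC and a 7-bit rank in the MS. A minimal behavioural model (hypothetical data layout; the actual firmware implements the parallel comparison scheme of [2], not a sequential sort):<br />

```python
from typing import List, Tuple

def select_best(muons: List[Tuple[int, int]], n: int) -> List[Tuple[int, int]]:
    """Pick the n muons with the highest rank.

    Each muon is modelled as (rank, payload); a larger rank is a
    better muon, as in the MS convention.
    """
    return sorted(muons, key=lambda m: m[0], reverse=True)[:n]

# MPC: 3 best of 18, rank is a 4-bit pattern (0..15); dummy inputs
mpc_in = [(i % 16, i) for i in range(18)]
mpc_out = select_best(mpc_in, 3)

# MS: 4 best of 36, rank is a 7-bit value (0..127); dummy inputs
ms_in = [((i * 7) % 128, i) for i in range(36)]
ms_out = select_best(ms_in, 4)

# Overall trigger data reduction: 18 -> 3 at the MPC, 36 -> 4 at the MS
reduction = (18 // 3) * (36 // 4)
print(mpc_out, ms_out, reduction)  # reduction == 54, as quoted
```

The reduction factor of 54 quoted above is just the product of the two selection stages, 6 × 9.<br />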
I. MUON PORT CARD<br />
In each of stations 2-4 of the CSC detector,<br />
an MPC receives trigger primitives from nine<br />
chambers corresponding to a 60 degree sector.<br />
Each MPC in these regions reduces the number<br />
of LCTs to three and sends them to the TF crate<br />
over optical cables. In station 1, an MPC<br />
receives signals from eight chambers<br />
corresponding to a 20 degree sector. For these<br />
regions the number of selected LCTs is two. So<br />
the main sorting algorithm for an MPC is “3 best<br />
objects out of 18” while a “2 best objects out of<br />
16” algorithm can be easily implemented as a<br />
subset of the main one.<br />
Muon Port Cards will reside in the middle<br />
of 21-slot 9U*400 mm VME crates located on<br />
the periphery of the return yoke of the CMS<br />
detector. Other slots in a crate will be occupied<br />
by the TMB boards (9 or 8 total), DAQ<br />
Motherboards (9 or 8 total), Clock and Control<br />
Board and VME Master. The CCB is the main<br />
interface to CMS Trigger, Timing and Control<br />
(TTC) system. The VME Master performs the<br />
overall crate monitoring and control. All<br />
trigger/DAQ modules in a crate will<br />
communicate with each other over a custom<br />
backplane residing below a VME P1 backplane.<br />
Every bunch crossing (25 ns) an MPC will<br />
receive data from up to 9 TMB’s, each of which<br />
is sending up to two LCT patterns. In the present<br />
design each LCT pattern is comprised of 32 bits<br />
(see Table 1). Data transmission from the TMB<br />
to the MPC at 80 MHz would allow us to reduce<br />
the number of physical lines between the MPC<br />
and nine TMB's down to 288 and build a 6U<br />
backplane using industry standard 2 mm<br />
connectors. The MPC block diagram is shown<br />
in Figure 1. It performs synchronization of<br />
incoming patterns with the local master<br />
clock, sorting "3 out of 18" and output<br />
multiplexing of the selected patterns. The three<br />
best patterns are transmitted at 80 MHz to three<br />
16-bit serializers that perform a parallel-to-serial<br />
data conversion with 8B/10B encoding for<br />
further transmission over optical cables to the SP.<br />
The proposed serializer is a Texas Instruments<br />
TLK2501, and the proposed optical module is a<br />
small form factor Finisar FTRJ-9519-1-2.5<br />
transceiver [3].<br />
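The quoted 288 backplane lines follow directly from the bit counts: each TMB sends 2 LCTs × 32 bits = 64 bits per 25 ns crossing, which at 80 MHz (two transfers per crossing) fits on 32 lines, and 9 TMBs need 288. A quick check of this arithmetic, which also reproduces the 360 lines quoted later for the 12 Sector Processor inputs of the Muon Sorter (60 bits per SP per crossing, per Table 2):<br />

```python
def lines_needed(sources: int, objects_per_source: int, bits_per_object: int,
                 clock_mhz: float, bx_ns: float = 25.0) -> int:
    """Physical lines needed when bits are time-multiplexed on each line
    within one bunch crossing (e.g. 80 MHz -> 2 transfers per 25 ns)."""
    transfers_per_bx = clock_mhz * 1e6 * bx_ns * 1e-9
    bits_per_bx = objects_per_source * bits_per_object
    return int(sources * bits_per_bx / transfers_per_bx)

print(lines_needed(9, 2, 32, 80))    # TMB -> MPC: 288 lines
print(lines_needed(12, 1, 60, 80))   # SP -> MS (60 bits/SP/BX): 360 lines
```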
The block diagram of the PLD is shown in<br />
Figure 2. Sorting is based on a 4-bit pattern<br />
which is a subset of the 9-bit quality code (Table<br />
1). All 256-word deep FIFO buffers are<br />
available for read and write operations from<br />
VME. Data representing all input muons can be<br />
loaded into the input FIFO and sent out of the FIFO<br />
at 80 MHz upon a specific VME command. The<br />
selected patterns will be stored in the output<br />
FIFO and transmitted to SP. Patterns coming<br />
from the TMB can also be stored in an output<br />
FIFO. This feature will allow us to test the MPC<br />
functionality and its communication with the<br />
TMB and SP without having the rest of the<br />
trigger chain hardware.<br />
Figure 1: MPC Block Diagram<br />
Figure 2: MPC Sorter PLD Block Diagram<br />
Table 1: MPC Inputs and Outputs<br />
Signal | Bits per input muon | Bits per output muon<br />
Valid Pattern Flag | 1 | 1<br />
Quality | 9 | 9<br />
Cathode ½-strip ID | 8 | 8<br />
Anode Wire-Group ID | 7 | 7<br />
Accelerator Muon | 1 | 1<br />
Bunch Crossing ID | 2 | 2<br />
Reserved | 4 | -<br />
CSC ID | - | 4<br />
Total | 32 | 32<br />
II. MUON SORTER<br />
Twelve SP's and one MS will reside in a<br />
single VME 9U*400 mm crate in the<br />
underground counting room. In addition to these<br />
modules there will be a Clock and Control Board<br />
(CCB) similar to a peripheral CCB, and a VME<br />
Master. Every bunch crossing the MS will<br />
receive data from 12 SP's, each of which<br />
sends up to three patterns. Data transmission<br />
at 80 MHz from the Sector Processors to the MS<br />
is envisaged; this would allow us to reduce the<br />
number of physical lines between the MS and 12<br />
SP's down to 360 and build a custom backplane<br />
using industry standard 5-row 2 mm connectors.<br />
Four such 125-pin connectors are needed at the<br />
MS station in the middle of the custom 6U<br />
backplane residing below the standard VME<br />
backplane.<br />
The MS block diagram is shown in Figure 3.<br />
Its inputs and outputs listed in Table 2 are<br />
described in more detail in [4]. Sorting is based<br />
on a 7-bit rank which represents the quality of<br />
each muon. The larger the rank, the better the<br />
muon for sorting purposes. The MS performs<br />
synchronization of the incoming patterns with<br />
the local master clock, sorting "4 out of 36" and<br />
output multiplexing of the selected four patterns.<br />
In addition, the MS performs a partial<br />
output LUT conversion to comply with the<br />
GMT input data format [5]. Parallel data<br />
transmission from the MS to the GMT at 40 MHz<br />
using LVDS drivers/receivers and separate<br />
cables for each muon was proposed by the GMT<br />
group. The Muon Sorter also utilizes the same<br />
idea used in the MPC of 256-word deep input<br />
and output FIFO buffers for testing purposes.<br />
The list of input and output signals is given in<br />
Table 2. A block diagram of the Muon Sorter<br />
main PLD is shown in Figure 4.<br />
Table 2: Muon Sorter Inputs and Outputs<br />
Inputs from Sector Processor:<br />
Signal | Bits per one muon | Bits per three muons<br />
Valid Pattern Flag | 1 | 3<br />
Phi Coordinate | 5 | 15<br />
Muon Sign | 1 | 3<br />
Eta Coordinate | 5 | 15<br />
Rank | 7 | 21<br />
Bunch Crossing ID | - | 2<br />
Error | - | 1<br />
Total | 19 | 60<br />
Outputs to Global Muon Trigger:<br />
Signal | Bits per one muon | Bits per four muons<br />
Valid Pattern Flag | 1 | 4<br />
Phi Coordinate | 8 | 32<br />
Muon Sign | 1 | 4<br />
Eta Coordinate | 6 | 24<br />
Quality | 3 | 12<br />
Pt Momentum | 5 | 20<br />
Bunch Crossing ID | 4 | 16<br />
Error | 1 | 4<br />
Clock | 1 | 4<br />
Reserved | 2 | 8<br />
Total | 32 | 128<br />
Figure 3: Muon Sorter Block Diagram<br />
III. RESULTS OF SIMULATION<br />
PLD designs are implemented for the Altera<br />
APEX 20KC family of PLDs [6] using Quartus II ver.<br />
1.0 design software. Preliminary results of logic<br />
simulation are shown in Table 3. They are<br />
obtained for the fastest available devices (-7<br />
speed grade). The actual PLD latency is defined<br />
as the time interval between the<br />
latching of the input patterns into the sorter chip<br />
at 80 MHz and the moment when the selected<br />
best patterns are available for latching into an<br />
external device outside the sorter chip.<br />
Figure 4: Muon Sorter PLD Block Diagram<br />
Board<br />
latency for the MPC includes a delay for the<br />
output serialization. In particular, for the TLK2501<br />
serializer this delay varies between 34 and<br />
38 bit times, or 20-24 ns. For both devices an<br />
extra board delay of ~15ns is assumed. It<br />
includes the delay of the custom backplane<br />
receivers, signal propagation times and the delay<br />
of output LVDS drivers (for MS only).
Table 3: Results of Simulation<br />
Parameter | Muon Port Card | Muon Sorter<br />
Number of data inputs | 288 @ 80 MHz | 384 @ 80 MHz<br />
Number of data outputs | 48 @ 80 MHz | 128 @ 40 MHz<br />
Number of bits used for sorting | 4 | 7<br />
Altera PLD device | EP20K400CF672C7 | EP20K1000CF33-7<br />
Number of logic cells (LC) used | 9637/16640 (57%) | 22716/38400 (59%)<br />
Number of ESB bits used | 184320/212992 (86%) | 262144/327680 (80%)<br />
Actual PLD latency, ns | 75 | 150<br />
Board latency, ns | 115 | 165<br />
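The board latencies in Table 3 are consistent with summing the contributions described above; a quick check (the serialization delay applies to the MPC only, and ~15 ns is the extra board delay assumed in the text):<br />

```python
def board_latency_ns(pld_ns: float, serializer_ns: float = 0.0,
                     board_ns: float = 15.0) -> float:
    """Board latency = PLD latency + output serialization + board delays."""
    return pld_ns + serializer_ns + board_ns

# MPC: 75 ns PLD + 20..24 ns TLK2501 serialization + ~15 ns board delay
mpc_lo = board_latency_ns(75, 20)
mpc_hi = board_latency_ns(75, 24)
# MS: 150 ns PLD + ~15 ns board delay (LVDS outputs, no serializer)
ms = board_latency_ns(150)

print(mpc_lo, mpc_hi, ms)  # 110.0 114.0 165.0
```

The sum reproduces the 165 ns quoted for the Muon Sorter exactly, and 110-114 ns for the Muon Port Card, close to the 115 ns in Table 3.<br />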
IV. CONCLUSION<br />
We have proposed a common approach to the<br />
design and implementation of two sorting<br />
devices for the CSC Muon Trigger system.<br />
These designs are targeted to single<br />
programmable chips for both the Muon Port Card<br />
and the Muon Sorter. Results of preliminary<br />
simulation for the fastest Altera PLDs indicate a<br />
maximum board latency of 115 ns for the Muon<br />
Port Card and 165 ns for the Muon Sorter. The<br />
MPC and MS prototypes are planned to be built<br />
in 2002 and 2003 respectively.<br />
V. REFERENCES<br />
[1]. J. Hauser. Primitives for the CMS Cathode<br />
Strip Muon Trigger. Proceedings of the Fifth<br />
Workshop on Electronics for LHC Experiments.<br />
Snowmass, Colorado September 20-24, 1999.<br />
CERN/LHCC/99-33. Available at:<br />
http://hep.physics.wisc.edu/LEB99/<br />
[2]. M.Matveev. Implementation of the Sorting<br />
Schemes in a Programmable Logic. Proceedings<br />
of the Sixth Workshop on Electronics for LHC<br />
Experiments. Krakow, Poland 11-15 September<br />
2000. Available at:<br />
http://lebwshop.home.cern.ch/lebwshop/LEB00_<br />
Book/posters/matveev1.pdf<br />
[3]. M.Matveev, T.Nussbaum, P.Padley. Optical<br />
Link Evaluation for the CSC Muon Trigger at<br />
CMS. These proceedings.<br />
[4].The CMS TriDAS Project Technical Design<br />
Report, Volume 1: The Trigger System. Chapter<br />
12. Available at:<br />
http://cmsdoc.cern.ch/cms/TDR/TRIGGERpublic/CMSTrigTDR.pdf<br />
[5]. Proposal for a common data link from the<br />
RPC, DT, CSC Regional Muon triggers to the<br />
Global Muon Trigger vers.3a. Available at:<br />
http://wwwhephy.oeaw.ac.at/p3w/cms/trigger/gl<br />
obalTrigger/Hardw/Interfaces/Input/Reg_to_GT<br />
_Muon3.pdf<br />
[6].http://www.altera.com/products/devices/apex<br />
/apx-index.html
Optical Link Evaluation for the CSC Muon Trigger at CMS<br />
M. Matveev 1 , T. Nussbaum 1 , P. Padley 1 , J. Roberts 1 , M. Tripathi 2<br />
Abstract<br />
The CMS Cathode Strip Chamber electronic<br />
system consists of on-chamber mounted boards,<br />
peripheral electronics in VME 9U crates, and a<br />
track finder in the counting room [1]. The<br />
Trigger Motherboard (TMB) matches the anode<br />
and cathode tags called Local Charged Tracks<br />
(LCT) and sends the two best combined LCT’s<br />
from each chamber to the Muon Port Card<br />
(MPC). Each MPC collects data representing<br />
muon tags from up to nine TMB’s, which<br />
corresponds to one sector of the CSC chambers.<br />
All TMB's and the MPC are located in<br />
9U × 400 mm VME crates mounted on the periphery<br />
of the return yoke of the endcap muon<br />
system. The MPC selects data representing the<br />
three best muons and sends it over optical links<br />
to the Sector Processor (SP) residing in the<br />
underground counting room. The current<br />
electronics layout assumes 60 MPC modules<br />
residing in the 60 peripheral crates for both<br />
muon endcaps and 12 SP’s residing in one 9U<br />
VME crate in the counting room.<br />
1 Rice University, Houston, TX 77005 USA<br />
2 University of California, Davis, CA 95616 USA<br />
Paper presented by M. Matveev<br />
matveev@physics.rice.edu<br />
Due to the high operating frequency of<br />
40.08 MHz and the 100 m cable run from the<br />
detector to the counting room, an optical link is<br />
the only choice for data transmission between<br />
these systems. Our goal was to separately<br />
prototype this optical link intended for the<br />
communication between the MPC and SP using<br />
existing commercial components. Our initial<br />
design based on the Agilent HDMP-1022/1024<br />
chipset and Methode MDX-19-4-1-T optical<br />
transceivers was reported at the 6th Workshop on<br />
Electronics for LHC Experiments [2] a year ago.<br />
Data transmission of the 120 bits representing three<br />
muons at 40 MHz would require as many as<br />
twelve HDMP chipsets and twelve optical<br />
transceivers on a single receiver card. This<br />
solution has disadvantages such as relatively<br />
large power consumption and component areas<br />
on both the transmitter and receiver boards.<br />
Studies of the later triggering stages also show<br />
that a reduction in the number of bits<br />
representing three muons can be made without<br />
compromising the system performance. The new<br />
list of bits is shown in Table 1.<br />
Table 1: Data delivered from a Muon Port Card to the Sector Processor<br />
Signal | Bits per 1 muon | Bits per 3 muons | Description<br />
Valid Pattern Flag | 1 | 3 | "1" when data is valid<br />
Half-strip ID [7..0] | 8 | 24 | Half-strip ID number<br />
Quality [7..0] | 8 | 24 | ALCT+CLCT+bend quality<br />
Wire ID [6..0] | 7 | 21 | Wire group ID<br />
Accelerator muon | 1 | 3 | Straight wire pattern<br />
CSC ID [3..0] | 4 | 12 | Chamber ID in a sector<br />
BXN [1..0] | 2 | 6 | 2 LSBs of BX number<br />
Spare | 1 | 3 | (reserved)<br />
Total | 32 | 96 |<br />
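The 32-bit muon word of Table 1 can be illustrated with a small bit-packing sketch. The field order and offsets below are a hypothetical layout chosen for illustration, not the actual MPC data format:<br />

```python
# Hypothetical packing of one 32-bit muon word from the Table 1 fields.
# Field order/offsets are illustrative, not the actual MPC format.
FIELDS = [            # (name, width in bits)
    ("valid", 1),
    ("halfstrip", 8),
    ("quality", 8),
    ("wire", 7),
    ("accel", 1),
    ("csc_id", 4),
    ("bxn", 2),
    ("spare", 1),
]

def pack_muon(values):
    """Pack field values (dict) into a single 32-bit word, LSB first."""
    word, offset = 0, 0
    for name, width in FIELDS:
        v = values[name]
        assert 0 <= v < (1 << width), f"{name} out of range"
        word |= v << offset
        offset += width
    assert offset == 32          # matches the 32-bit total of Table 1
    return word

muon = {"valid": 1, "halfstrip": 0x5A, "quality": 3, "wire": 42,
        "accel": 0, "csc_id": 7, "bxn": 2, "spare": 0}
w = pack_muon(muon)
```

Unpacking on the receiving side would simply shift and mask in the same order.<br />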
Now only three links rather than the six in<br />
our previous design are needed for<br />
communication between the Muon Port Card and<br />
Sector Processor. Another improvement is to<br />
serialize and deserialize the data at 80 MHz with a<br />
lower-power chipset and use small form factor<br />
(SFF) optical modules for a more compact<br />
design. Test results evaluating the Texas<br />
Instruments TLK2501 [3] gigabit transceiver and<br />
Finisar FTRJ-8519-1-2.5 [4] optical module as<br />
well as the functionality of our evaluation board<br />
are presented.
I. DATA SERIALIZER AND OPTICAL<br />
MODULE<br />
Among several high speed data serializers<br />
available on the market, the Texas Instruments<br />
TLK2501 [3] is one of the most attractive. It<br />
performs both serial-to-parallel and<br />
parallel-to-serial data conversion. The transmitter latches<br />
16-bit parallel data at a reference clock rate and<br />
internally encodes it using a standard 8B/10B<br />
format. The resulting 20-bit word is transmitted<br />
differentially at 20 times the reference clock<br />
frequency. The receiver section performs the<br />
serial-to-parallel conversion on the input data,<br />
synchronizes the resulting 20-bit wide parallel<br />
word to the extracted reference clock and applies<br />
the 8B/10B decoding. The 80 MHz to 125 MHz<br />
frequency range for the reference clock allows us<br />
to transfer data at 80.16 MHz, exactly<br />
double the LHC operating frequency of<br />
40.08 MHz.<br />
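The resulting serial line rate follows directly from these figures; a quick check of the arithmetic, using only values quoted in the text:<br />

```python
# Serial line rate of the TLK2501 link: 16-bit words are 8B/10B-encoded
# into 20 line bits and shifted out at 20x the reference clock.
ref_clock_hz = 80.16e6           # 2x the 40.08 MHz LHC bunch frequency
encoded_bits = 20                # 16 data bits -> 20 line bits (8B/10B)
line_rate = ref_clock_hz * encoded_bits        # bits per second
bit_time_ps = 1e12 / line_rate                 # duration of one line bit

print(f"line rate: {line_rate / 1e9:.4f} Gbps")   # ~1.6 Gbps
print(f"bit time:  {bit_time_ps:.1f} ps")         # ~625 ps serial bit clock
```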
The TLK2501 transceiver has a built-in 8-bit<br />
pseudo-random bit stream (PRBS) generator and<br />
some other useful features such as a loss of<br />
signal detection circuit and power down mode.<br />
The device is powered from +2.5 V and<br />
consumes less than 325 mW. The parallel data,<br />
control and status pins are 3.3 V compatible. The<br />
TLK2501 is available in a 64-pin VQFP package<br />
and is characterized for operation from -40 °C to<br />
+85 °C.<br />
A Finisar FTRJ-8519-1-2.5 2x5 pinned SFF<br />
transceiver was chosen for the optical<br />
transmission. It provides bidirectional<br />
communications at data rates up to 2.125 Gbps<br />
(1.6 Gbps simplex-mode transmission is required<br />
in our case). The laser is an 850 nm<br />
multimode VCSEL, which allows fiber lengths of up<br />
to 300 m. The transceiver operates over extended<br />
voltage (3.15 V to 3.60 V) and temperature (-10 °C<br />
to +85 °C) ranges and dissipates less than 750 mW.<br />
One advantage of the FTRJ-8519-1-2.5 module<br />
over similar optical transceivers available from<br />
other vendors is a metal enclosure for lower<br />
electromagnetic interference.<br />
II. DESIGN IMPLEMENTATION<br />
A simplified block diagram of the evaluation<br />
board is shown in Figure 1. It consists of two<br />
TLK2501 and Finisar optical transceiver links<br />
with control logic based on an Altera<br />
EP20K100EQC240 PLD. The PLD provides<br />
VME access to the 16-bit transmitter and<br />
receiver data busses as well as the control/status<br />
signals of both TLK2501 devices. In addition to<br />
the VME A24D16 slave interface, it contains:<br />
256-word-deep input and output FIFO buffers,<br />
two delay buffers, a 16-bit PRBS generator,<br />
error-checking logic and two 16-bit error counters.<br />
Since the PRBS data is not truly random but a<br />
predetermined sequence of ones and zeros, the data<br />
can also be checked for errors by comparison<br />
with an identical, synchronized PRBS generator.<br />
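This comparison scheme can be sketched with an identical free-running LFSR on each side of the link; the 16-bit polynomial and seed below are arbitrary choices for illustration, not necessarily those implemented in the PLD:<br />

```python
def prbs16(state):
    """One step of a 16-bit Galois LFSR (taps chosen for illustration)."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400
    return state

def count_errors(received, seed=0xACE1):
    """Compare a received word stream against a synchronized local PRBS,
    as the evaluation board's PLD does in mode 2."""
    state, errors = seed, 0
    for word in received:
        state = prbs16(state)
        if word != state:
            errors += 1
    return errors

# Emulate an error-free link, then inject one corrupted word.
state = 0xACE1
stream = []
for _ in range(1000):
    state = prbs16(state)
    stream.append(state)
assert count_errors(stream) == 0
stream[500] ^= 0x0004            # single bit flip "in transit"
assert count_errors(stream) == 1
```

Because the reference generator runs independently of the received data, a single corrupted word produces exactly one counted error rather than losing synchronization.<br />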
[Figure: two TLK2501 serializers and two Finisar FTRJ-8519 optical modules on a 6U × 160 mm board,<br />
controlled by an Altera EPF20K100E PLD containing the PRBS generator, the input and output FIFOs,<br />
the error counters and the VME A24D16 slave interface]<br />
Figure 1: Evaluation Board Block Diagram<br />
There are three modes of board operation. In<br />
mode 1, every TLK2501 transmitter can<br />
internally generate an 8-bit PRBS and send it to<br />
either another TLK2501 receiver or loop it back<br />
to its own receiver input. In mode 2, a 16-bit<br />
PRBS is generated by the PLD for both<br />
TLK2501 transmitters simultaneously. Two<br />
variable depth buffers, adjustable from a front<br />
panel, are used to delay the PRBS data inside the<br />
PLD for comparison with the receiver data. The<br />
main control PLD can count the number of errors<br />
using two separate (for modes 1 and 2) 8-bit<br />
counters which are accessible from VME. Error<br />
counting can also be implemented and displayed<br />
with an external event counter connected to the<br />
Receive_Error output signal.<br />
In mode 3, arbitrary programmable data (up to<br />
256 16-bit words) can be loaded into a FIFO<br />
buffer from VME and sent out as an 80 MHz<br />
burst to both TLK2501 transmitter sections upon<br />
a specific VME command. The data from both<br />
receivers is then captured into two FIFO input<br />
buffers which can be read from VME.
III. PROTOTYPING RESULTS<br />
Two evaluation boards were built and tested<br />
in spring 2001. All tests were done in a simplex<br />
configuration over 100 m of optical cable. No<br />
PRBS data errors occurred during three overnight<br />
tests with either PRBS source (modes 1 and 2).<br />
Another part of our test was evaluation of the<br />
latency due to data serialization/deserialization<br />
and encoding. The datasheet [3] specifies that<br />
the transmit latency is between 34 and 38 bit<br />
times, and the receive latency is between 76 and<br />
107 bit times. At 80 MHz these numbers<br />
correspond to a link delay (excluding cable<br />
delay) of 69 to 91 ns. Our measurements,<br />
conducted at room temperature and 2.5 V power<br />
for both TLK2501 devices, indicated that the total<br />
link latency is between 76 and 82 ns, or more<br />
than three bunch crossings. While the exact<br />
value differs at each link initialization in<br />
increments of the serial bit clock (625 ps), it<br />
never varied by more than 6 ns in total.<br />
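The quoted 69 to 91 ns window is simply the datasheet bit-time ranges converted at the 625 ps serial bit clock:<br />

```python
# Convert the TLK2501 datasheet latency ranges (in bit times) to ns.
bit_time_ns = 0.625              # serial bit clock at 1.6 Gbps
tx_lat = (34, 38)                # transmit latency, bit times (datasheet)
rx_lat = (76, 107)               # receive latency, bit times (datasheet)

lo = (tx_lat[0] + rx_lat[0]) * bit_time_ns   # 110 bit times
hi = (tx_lat[1] + rx_lat[1]) * bit_time_ns   # 145 bit times
print(f"link latency: {lo:.2f} to {hi:.2f} ns")   # 68.75 to 90.62 ns
```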
The receive and transmit latencies are<br />
essentially fixed once the link is established.<br />
However, due to silicon process variations and<br />
such implementation variables as supply voltage<br />
and temperature, the exact delay may also vary<br />
slightly. An additional offset of about 2 ns was<br />
seen when one chip was externally heated to a<br />
point uncomfortable to the touch. One TLK2501<br />
was socket-mounted to investigate chip-to-chip<br />
variations; no significant difference<br />
among six chips was seen.<br />
A receiver reference clock is required on<br />
power-up reset, but unlike the Agilent HDMP-<br />
1024 receiver it does not need to be of a<br />
frequency slightly different from the frame rate.<br />
Automatic recovery from loss of synchronization<br />
will require periodic transmission of the "Idle"<br />
sync character. Resynchronization takes<br />
only 2 frames rather than the ~2 ms required by the<br />
Agilent HDMP-1022/1024 chipset.<br />
IV. RADIATION TEST<br />
The goal of the test was to determine how well<br />
the TLK2500/TLK2501 serializers and Finisar<br />
optical transceivers are able to tolerate the<br />
radiation environment and the integrated dose<br />
expected at LHC during 10 years of operation.<br />
Specifically, the potential for Single Event<br />
Latch-ups (SEL) and Single Event Upsets (SEU)<br />
in these CMOS devices due to the high flux of<br />
secondary neutrons is of concern.<br />
Based on simulations reported in references<br />
[5] and [6], the Total Ionizing Dose (TID) for the<br />
inner CSC chambers during 10 years of<br />
operation is below 10 kRad and the neutron<br />
fluence (for E > 100 keV) is below 10^12 cm^-2.<br />
On the periphery of the return yoke (where the<br />
Muon Port Cards will be located) these numbers<br />
are approximately one order of magnitude lower.<br />
The SEU cross-section is quite independent of<br />
the neutron energy above about 100 MeV. While<br />
the expected energy distribution at the LHC has<br />
a sizable population below this level, we chose a<br />
convenient beam energy of 63 MeV to simulate<br />
the effect of the neutron environment at the<br />
LHC. Since the strong interactions responsible<br />
for the energy deposition are independent of the<br />
baryon type, our tests were conducted with a 63<br />
MeV proton beam at the Crocker Nuclear<br />
Laboratory cyclotron at the University of<br />
California, Davis (UCD).<br />
During irradiation the optical evaluation board<br />
was positioned perpendicular to the beam which<br />
was focused to irradiate only one<br />
TLK2500/TLK2501 chip or Finisar optical<br />
module at a time. The board was set to transmit<br />
PRBS data through 100 m of fiber to a second<br />
board located outside the beam area where data<br />
transmission errors due to Single Event Upsets<br />
were counted. Three serializer chips were<br />
exposed up to approximately 270 kRad total<br />
dose each, with no permanent damage. No SEL<br />
was detected.<br />
At 63 MeV, 1 rad = 7.4x10^6 protons/cm^2 in<br />
silicon. Therefore, assuming strong isospin<br />
symmetry for SEUs, 270 kRad is equivalent to a<br />
2.1x10^12 cm^-2 neutron fluence, well above the<br />
expected levels for the peripheral electronics.<br />
The two TLK2501 devices produced 12 and 19<br />
data errors due to SEUs while the older TLK<br />
2500 device produced 78. These results are<br />
summarized in Table 2. While no errors were<br />
observed during the exposure of two Finisar optical<br />
modules, both devices failed permanently at<br />
about 70 kRad, also well above the expected<br />
TID. Combining the results for the three chips,<br />
no SEL was seen for a fluence of 6.0x10^12<br />
protons/cm^2.
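The dose-to-fluence conversion and the cross sections of Table 2 can be checked in a few lines; the numbers are taken from the text and the table, and sigma = N_SEU / fluence:<br />

```python
# Dose-to-fluence conversion for the 63 MeV proton test.
rad_to_protons = 7.4e6            # protons/cm^2 per rad in silicon at 63 MeV
fluence_270krad = 270e3 * rad_to_protons       # ~2e12 protons/cm^2
assert fluence_270krad > 1e12     # above the expected 10-year neutron fluence

# SEU cross-section sigma = N_SEU / fluence, reproducing Table 2.
devices = {                       # device: (proton fluence cm^-2, SEU count)
    "TLK2501 #1": (1.8e12, 12),
    "TLK2501 #2": (2.1e12, 19),
    "TLK2500 #1": (2.1e12, 78),
}
sigmas = {name: n / phi for name, (phi, n) in devices.items()}
for name, sigma in sigmas.items():
    print(f"{name}: sigma = {sigma * 1e12:.1f}e-12 cm^2")   # 6.7, 9.0, 37.1
```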
V. CONCLUSION<br />
Table 2: Measured SEU cross sections for TLK2500/2501 serializers<br />
Device | Proton fluence (10^12 cm^-2) | Dose (kRad) | Number of SEUs | SEU cross section (10^-12 cm^2)<br />
TLK2501 #1 | 1.8 | 230 | 12 | 6.7<br />
TLK2501 #2 | 2.1 | 270 | 19 | 9.0<br />
TLK2500 #1 | 2.1 | 260 | 78 | 37.1<br />
We have built and tested, in a radiation<br />
environment, an evaluation board that comprises<br />
the two main elements of an optical data link: a<br />
TLK2501 gigabit serializer/deserializer and a<br />
Finisar FTRJ-8519-1-2.5 optical transceiver.<br />
Acceptable data error rates were observed in<br />
testing the link at 80 MHz using 8/16-bit PRBS<br />
test sequences and programmable patterns. We<br />
believe the components of the link can be used<br />
in the MPC and SP designs of the CSC Trigger<br />
System. Our evaluation board itself can be used<br />
as a source of test data for the next SP prototype.<br />
VI. REFERENCES<br />
[1]. The Track-Finding Processor for the Level-1<br />
Trigger of the CMS Endcap System. CMS Note<br />
1999/060. Available at:<br />
ftp://cmsdoc.cern.ch/documents/99/note99_060.<br />
pdf<br />
[2]. Optical Data Transmission from the CMS<br />
Cathode Strip Chamber Peripheral Trigger<br />
Electronics to Sector Processor Crate. 6th<br />
Workshop on Electronics for LHC Experiments.<br />
CERN/LHCC/2000-041, pp. 483-485. Available at:<br />
http://lebwshop.home.cern.ch/lebwshop/LEB00_<br />
Book/posters/matveev2.pdf<br />
[3]. TLK2501 1.6 to 2.5 Gbps transceiver.<br />
Datasheet is available at: http://wwws.ti.com/sc/psheets/slls427a/slls427a.pdf<br />
[4]. http://www.finisar.com/pdf/2x5sff-2gig.pdf<br />
[5]. The Compact Muon Solenoid Technical<br />
Proposal. CERN/LHCC 94-38. 15 December<br />
1994.<br />
[6]. http://cmsdoc.cern.ch/~huu/tut1.pdf
DISTRIBUTED MODULAR RT-SYSTEMS FOR DETECTOR<br />
DAQ, TRIGGER AND CONTROL APPLICATIONS<br />
Abstract<br />
A modular approach to the development of a distributed<br />
modular system architecture for detector control, data<br />
acquisition and trigger data processing is proposed.<br />
A multilevel parallel-pipeline model of data acquisition,<br />
processing and control is proposed and discussed.<br />
A multiprocessor architecture with SCI-based<br />
interconnections is proposed as a good high-performance<br />
system for parallel-pipeline data processing. A traditional<br />
network (Ethernet-100) can be used for loading,<br />
monitoring and diagnostic purposes, independently of the<br />
basic interconnections. Modular cPCI-based structures<br />
with high-speed modular interconnections are proposed<br />
for DAQ applications and distributed control RT-systems.<br />
To construct cost-effective systems, the same platform of<br />
Intel-compatible processor boards should be used.<br />
The basic multiprocessor computer nodes consist of<br />
high-power PC motherboards (industrial computer<br />
systems), which are interconnected by SCI modules and<br />
linked to embedded microprocessor-based subsystems for<br />
control applications. The required number of<br />
multiprocessor nodes should be interconnected by SCI<br />
for parallel-pipeline data processing in real time<br />
(according to the multilevel model) and linked to<br />
RT-systems for embedded control.<br />
V.I. Vinogradov,<br />
INR RAS, ul. Prophsoyusnaya 7-a,<br />
Moscow, 117312, Russia,<br />
vin@inr.troitsk.ru<br />
Introduction<br />
RT multiprocessor systems should have a scalable<br />
architecture for high-performance data acquisition,<br />
trigger processing and reliable control in future<br />
experimental physics. A multilevel information<br />
data-flow model for RT-systems in experimental physics,<br />
including DAQ, trigger data processing and control<br />
applications, is proposed and analysed. Modular<br />
multiprocessor system architectures with a Scalable<br />
Open System (SOS) architecture for System Area<br />
Networks (SAN) are discussed.<br />
The information model of the data flow in an experiment<br />
includes the data acquisition (DAQ), trigger processing<br />
(TRIP) and control systems; all of them follow the needs<br />
and tasks of the front-end electronics of a modern<br />
detector. There are 3-4 equivalent vertical levels of data<br />
processing in each system: the higher the level, the<br />
smaller the event data volume and the lower the event<br />
frequency. High-speed data flows at the lower levels of<br />
the model require parallel data processing using many<br />
microprocessors.<br />
Many experimental research centres (including physics<br />
and medicine applications) require parallel computing<br />
with high-speed interconnections based on the new<br />
SOS architecture for distributed SAN data processing.<br />
The basic modular SAN components are single-chip<br />
microprocessors at the nuclear structure level<br />
(N-modules), which connect to an SMP architecture on<br />
the board at the atomic structure level (A-modules).<br />
A number of A-modules are interconnected as a node by<br />
link modules (L-modules) at the macro structure level<br />
(M-modules). Basic distributed systems are organized in<br />
different topologies (basic ringlet, equidistant<br />
multiprocessor, 2-D and 3-D topologies).<br />
One of the best approaches is to construct a<br />
cost-effective modular multiprocessor system with a<br />
flexible SCI-based network architecture. The advanced<br />
Scalable Open System (SOS) network architecture, based<br />
on SCI link modules and PC motherboards or PXI/cPCI<br />
modules, is proposed and discussed for distributed<br />
RT-systems, including DAQ, trigger data processing and<br />
control applications in different fields. The information<br />
model of scalable modular RT multiprocessor systems is<br />
based on parallel-pipeline interconnections for data<br />
acquisition, trigger and control data processing. Such a<br />
system can be constructed (according to the multilevel<br />
information model) from functional single-chip<br />
multiprocessors at the N-level, SMP modules at the<br />
A-level and macro-modules of the basic structure<br />
components.<br />
1. Multi-level Information Model of Data Flows<br />
in RT-system<br />
The multilevel information model of an automated<br />
complex in an experimental area includes data<br />
acquisition and control systems based on existing<br />
standards. The proposed four-level model includes<br />
parallel-pipeline data acquisition (DAQ) with<br />
reduction of the data flow from the detector electronics.<br />
Data reduction and processing reduce the event<br />
frequency (event selection) and the volume of data at<br />
each level of the system. Reduction of the data flow<br />
(volume, frequency), reconstruction and complex<br />
analysis of events in real time require high-performance<br />
multiprocessors working in real time (RT-systems).<br />
Experimental area and control applications require<br />
parallel data acquisition (DAQ), optimal control and<br />
distributed data processing according to the<br />
requirements. All interconnections between processors<br />
at any level of the model require high-speed data transfer<br />
and parallel-pipeline data processing on the basis of a<br />
system-oriented network. Distributed parallel-pipeline<br />
data processing requires good scalability of<br />
high-performance multiprocessor systems according to<br />
the source data flow and the topology of the experiment.<br />
Some subsystems monitor the detector electronics and<br />
the slow-control equipment.<br />
The data flow at each level of parallel data processing is<br />
reduced in event volume and frequency and can be<br />
expressed as<br />
DF(i) = I(i) * F(i),<br />
where I(i) is the event data volume at level i and<br />
F(i) is the frequency of data events at level i.<br />
The data volume at level i is<br />
I(i) = Q(i) * I(i-1),<br />
where Q(i) is the event volume reduction<br />
coefficient at level i of the data processing.<br />
The event frequency at level i is<br />
F(i) = R(i) * F(i-1),<br />
where R(i) is the event frequency reduction<br />
coefficient at level i of the data processing.<br />
The event rate for an LHC detector after the first-level<br />
trigger is very high (100 kHz). The data volume<br />
includes inner tracking (1 MB per 15 ns),<br />
calorimeter (200 kbytes per 16 ns) and muon<br />
tracking data, which are filtered by a second-level<br />
trigger (in local segment data buffers, with an<br />
overall decision). The output event rates are<br />
F(1) = 100 kHz, F(2) = 1 kHz and F(3) = 100 Hz.<br />
The global decision takes 1 millisecond. The<br />
reduction coefficients should therefore be R(2) = 100<br />
and R(3) = 10.<br />
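A minimal sketch of the rate reduction through the trigger levels, taking the quoted coefficients R(i) as divide-down factors:<br />

```python
# Multilevel event-rate reduction through the trigger levels,
# with R(i) treated as the divide-down factor at each level.
f1_hz = 100e3                    # rate after the first-level trigger
reductions = {2: 100, 3: 10}     # R(2), R(3) from the text
rates = {1: f1_hz}
for level, r in reductions.items():
    rates[level] = rates[level - 1] / r

print(rates)   # {1: 100000.0, 2: 1000.0, 3: 100.0}
```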
The control data volume at level i can be expressed as<br />
C(i) = K(i) * C(i-1),<br />
where C(i) is the control data volume at level i of the<br />
information model and K(i) is the control data reduction<br />
coefficient at level i of the model.<br />
Effective computer control depends on control data<br />
reduction and on distributing these functions over the<br />
subsystems. All subsystems should be connected into an<br />
integral system.<br />
High-performance distributed multiprocessor systems<br />
should be designed on the basis of a flexible System<br />
Area Network (SAN) architecture following the Scalable<br />
Open System (SOS) approach, using highly modular<br />
hardware and software working in compact real-time<br />
(RT) systems.<br />
2. System Architectures for Parallel<br />
Data Processing<br />
Many computer system architectures for distributed and<br />
parallel data processing exist today, including symmetric<br />
multiprocessing (SMP), massively parallel processing<br />
(MPP) and cluster systems (RMC and NUMA).<br />
An RMC (Reflective Memory Cluster) is a clustered<br />
system with a memory replication or memory transfer<br />
mechanism between nodes and a traffic interconnect.<br />
Some vendors use the term NUMA (Non-Uniform<br />
Memory Access) to describe their RMC systems; others<br />
use the term Shared-Memory Cluster (SMC) to describe<br />
NUMA and RMC nodes (which can easily be confused<br />
with the shared memory inside SMP nodes). The term<br />
"global shared-memory system" is not the best descriptor<br />
either: there are multiple memories (multiple memory<br />
maps) and operating systems, reflecting a portion of<br />
memory to one another.<br />
The academic community sometimes uses the term<br />
"node" to refer to a small group of processors and a<br />
memory section in a CC-NUMA system, but the<br />
commercial community does not use this term because<br />
of the confusion with the definition of a cluster node.<br />
For example, Sequent, the first developer of SCI<br />
systems, calls a multi-quad system a single node, since<br />
only one instance of the OS runs over all quads. Correct<br />
terminology is also required for describing modern<br />
scalable modular systems. Structural analysis and<br />
synthesis of scalable computer systems with modular<br />
structure for real-time applications is one of the<br />
fundamental problems in computer science.<br />
Following the Sequent terminology, a clustered system<br />
with two or more nodes (each running a unique copy of<br />
the OS and applications) that share a common pool of<br />
storage simultaneously is considered a single computer<br />
system.<br />
The Scalable Coherent Interconnect (SCI) was developed<br />
as one of the best system area networks, because a bus<br />
limits the number of parallel processors in distributed<br />
data processing systems. Early attention to SCI-based<br />
control systems was given by the author at DESY<br />
(Germany) and at KEK (Japan).<br />
This paper proposes effective scalable RT-system<br />
development with a highly modular structure for DAQ,<br />
control and distributed data processing. The system<br />
includes a microstructure level of general-purpose<br />
(Pentium I, II, Pro) or specialized microprocessors as the<br />
computer nucleus. At the atomic structure level they are<br />
assembled on a board as a standard A-module of the<br />
system with one or more microprocessors. Functional<br />
A-modules can be connected by a bus interface, which<br />
limits the number of processor units on the same bus.<br />
The first developer of SCI-based high-power modular<br />
multiprocessor systems with hardware coherency<br />
(high-priced) was Sequent. Advanced integrated<br />
RT-systems with an effective SOS network architecture,<br />
based on standard CompactPCI modules or PC boards<br />
and link modules (Dolphin's communication modules),<br />
are discussed and analysed for cost-effective advanced<br />
DAQ, control and distributed data processing<br />
applications according to the proposed multilevel<br />
physical model. One of the best ways to construct<br />
cost-effective RT-systems is to use PC motherboards<br />
connected by a system area network in a topology<br />
chosen according to the application.<br />
3. SAN-Architecture for Scalable Open System<br />
Multiprocessors<br />
The physical model of a scalable modular system<br />
includes multilevel parallel-pipeline communications for<br />
interacting multiprocessor open systems, which appear to<br />
users as a single big-bus SMP system. Data acquisition,<br />
control and data processing systems can be constructed<br />
as a multilevel system from functional single- or<br />
multiprocessor modules (the atomic microstructure level<br />
of a system), including micro-modules of a processor<br />
chip (the nuclear structure level) on the board, connected<br />
by bus interconnections into standard macro-modules<br />
(the molecular macrostructure level). PC motherboards<br />
should be selected for cost-effective RT-systems. The<br />
required number of distributed processor modules or<br />
macro-modules interact with each other through link<br />
modules, bridges and switches over parallel-pipeline<br />
interconnections. An RT-system can be constructed in<br />
the required topology according to the application.<br />
The success of modern microelectronics technology<br />
(feature sizes down to 0.1 micron) opens new<br />
possibilities in computer system design in the Y2K era.<br />
The multilevel physical model of distributed systems is<br />
based on the conceptual requirements of the information<br />
model and includes a highly modular structure at all<br />
levels of the model. The conceptual approach to the<br />
structure of a system includes three basic levels: the<br />
nuclear structure level (micro-modules or chips), the<br />
atomic structure level (functional modules on standard<br />
boards) and the molecular structure level<br />
(macro-modules integrated in a PC motherboard,<br />
VME/VXI crate, CompactPCI or SCI). All highly<br />
modular structure levels of an integrated system should<br />
support effective interaction of distributed processor and<br />
memory modules with the help of distributed link<br />
modules, on the basis of the conceptual approach to the<br />
scalable System Area Network (SAN) model<br />
development.<br />
At the nuclear level of the system model, the<br />
micro-modules (N-modules) include a single-chip<br />
general-purpose processor, memory, I/O controllers and<br />
communications. Selection of the best type of<br />
microprocessor depends on the application requirements.<br />
A special- or general-purpose processor can be selected<br />
for different applications. For high-performance signal<br />
processing in real time, a single- or multi-processor DSP<br />
with on-chip memory can be used, but for cost-effective<br />
DAQ, control and parallel data processing it is better to<br />
use modern compact general-purpose processors.<br />
A single-chip microcomputer (a processor with memory<br />
on the chip) has shorter links and better access and data<br />
transfer times than off-chip memory on the same board,<br />
because it has shorter connections and its first-level (and<br />
second-level) cache is inside the chip. Powerful<br />
single-chip multiprocessors are also a reality today.<br />
New compatible L-modules for OEM developers,<br />
produced by several companies, include bridge chips and<br />
link controllers. There are 4-, 8- and 16-slot versions of<br />
crates for modular cPCI systems from Motorola, PEP<br />
Modular Computers and AdLink, and a PXI version for<br />
modular instrumentation systems from National<br />
Instruments Inc.<br />
The PCI-SCI modular bridge chip (PSB), with a unique<br />
protocol converter, is suited for clustering and<br />
high-performance RT-systems for DAQ and trigger<br />
applications. The PSB-32 is designed to meet the<br />
requirements of high-availability clustering and remote<br />
I/O applications, in a unique architecture combining both<br />
direct memory access (DMA) and remote memory<br />
access (RMA).<br />
High performance message passing protocols and<br />
transparent bus bridging operations are supported.<br />
By using the DMA controller contents of memory<br />
can be copied directly between PCI buses in a<br />
single copy operation with no need for intermediate<br />
buffering in adapter cards or buffer memories. This<br />
feature greatly reduces latency and lowers overhead<br />
of data transfers. The DMA controller supports<br />
both read and write operations. The remote memory<br />
access (RMA) feature of the PSB enables<br />
ultra-low-latency messaging with low-overhead,<br />
transparent I/O transfers. In RMA mode, PCI bus<br />
memory transactions are converted into corresponding<br />
SCI bus memory transactions, allowing two physically<br />
separate PCI buses to appear as one. This feature allows<br />
applications to send data between system memories<br />
without using operating system services, reducing<br />
latency and overhead. The PSB has built-in address<br />
translation, error detection and protection mechanisms to<br />
support highly reliable connections. The PSB chip is<br />
based on the ANSI/IEEE SCI standard.<br />
Basic parameters of Bridge chip:<br />
• PCI 2.1 compliant, 32 Bits, 33 MHz<br />
• ANSI/IEEE 1596-1992 SCI standard<br />
• Chaining (Read/Write) DMA Engine<br />
• Up to 4096 map entries in SRAM<br />
• 512 Kbytes page size compatible<br />
• Host bridge capability (PCI arbiter)<br />
• B-Link compliant; performance 104 Mbytes/s<br />
RMA, 73 Mbytes/s DMA<br />
The SCI link controller chip (LC3, backwards<br />
compatible with the LC2) is the first implementation of<br />
the Scalable Coherent Interface standard with a duplex<br />
bandwidth of 800 Mbytes/s. The LC3 is targeted for use<br />
in a wide range of systems where high bandwidth<br />
combined with low latency is required. Typical target<br />
systems are computer clusters, tightly coupled parallel<br />
computers, high-performance I/O systems and switches.<br />
The LC3 guarantees delivery of SCI packets with<br />
payloads of up to 64 bytes of data. Internal buffers allow<br />
pipelining of packets for high-throughput operation, yet<br />
support virtual cut-through routing for low-latency<br />
access. The LC3 offers local high-speed bus<br />
performance characteristics, with LAN flexibility and<br />
scalability, at a very competitive price. The chip uses<br />
high-speed unidirectional low-voltage differential<br />
signaling, running over standard low-cost cables, to<br />
attain an 800 Mbytes/s (6.4 Gbit/s) data transfer rate<br />
with routing latency as low as 70 ns. System scalability<br />
is accomplished through a high-speed back-end interface<br />
(BXBAR) with built-in switching capability, allowing<br />
system growth to beyond 1000 nodes.<br />
LC3 SCI Link Controller parameters:
• ANSI/IEEE 1596-1992 SCI standard
• ANSI/IEEE Std 1596.3 LVDS link
• IEEE 1149.1 (JTAG) support
• BXBAR crossbar switching
• 800 Mbytes/s duplex link bandwidth for high-performance applications
• Virtual-channel (VC) based buffer management
• Table-based packet routing supporting complex topologies
• Two-wire serial EEPROM interface
• Queuing structure capable of storing 15 requests and 15 responses
The atomic structural level of the model, the processor boards (A-modules), includes special-purpose (DSP) or general-purpose (PC motherboard) processors, memory, and I/O subsystem components. Typical examples are computer modules such as VME/VXI, PXI/cPCI, or a modern Lita-PC motherboard (MB). The simplest construction of an effective distributed data-processing system for DAQ, trigger, and control applications can be based on a compact PC motherboard with one, two (dual), or four (quad) microprocessors on the board. The number of modules on the same bus is limited (up to 16). Symmetric multiprocessing (SMP) is the basic software model for such multiprocessors, and it is one of the best choices for trigger-subsystem modules.
Many A-modules can be interconnected in a similar SMP mode, in different distributed topologies matching the real detector, on the basis of a SAN architecture (using SCI). Embedded subsystems (A-modules or M-modules) at the macro level are often used for slow control and monitoring with Ethernet 10/100 interconnections.
The distributed SAN interconnection level (L-, B-, and S-type modules) depends on the communication requirements and the parameters of the link modules. The cost of communication speed decreases faster than the cost of pins and board space. Traditional communications are usually bus-based, but any practical solution involves packet-based signaling over many independent point-to-point links, which eliminates the bus-bottleneck problem while introducing a new one: how to maintain cache coherence in the shared-memory model of the system. A bus can serve at most 16-32 processors, whereas the Scalable Coherent Interface (SCI) is a good interconnect for an SOS architecture with many processors in a single system for DAQ, control, parallel data processing, and databases. Applications cover the whole range from high-end multiprocessors to workstation clusters and LANs.
The distributed SCI-based SAN architecture shares a 64-bit address space, in which the high-order 16 bits are used to route packets to the appropriate node.
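The address split can be illustrated with a short sketch (the 16-bit node field is from the SCI standard; the helper name is our own):

```python
NODE_BITS = 16
OFFSET_BITS = 64 - NODE_BITS   # 48 bits -> 2**48 = 256 Tbytes per node

def split_sci_address(addr):
    """Split a 64-bit SCI address into (node_id, local_offset):
    the high-order 16 bits select one of up to 64K nodes, and the
    low 48 bits address memory within that node."""
    return addr >> OFFSET_BITS, addr & ((1 << OFFSET_BITS) - 1)
```

The 48-bit offset is what gives each node the 256-Tbyte local address space discussed later in the distributed-memory model.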
The system topology can be based on a simple ringlet, multiple ringlets, bridges, or powerful switches for parallel-pipeline communications between processors and memory. The interconnection should be based on link modules (L-modules) or switch modules (S-modules). SCI is based on point-to-point connections and supports transactions by all processor modules at the same time. Commercial Dolphin L-modules provide an 800 Mbytes/s bidirectional SCI link for effective transfer of large volumes of distributed data. The application-to-application latency is small (2.3 microseconds) and reduces the overhead of inter-node control messages, leading to the best possible scalability for multi-node applications. Dolphin S-modules for system area networks provide 4 x 800 Mbytes/s duplex user ports and 2 x 800 Mbytes/s duplex expansion ports.
The internal switch provides 1.28 Gbytes/s bandwidth, a port-to-port latency of 250 nanoseconds, and dual fans for reliability. These parameters provide good interactions between distributed modules. A large integrated complex should be based on a bus-like SOS network architecture with distributed memory for effective DAQ, trigger, and control in distributed data processing. High-performance PXI/cPCI modules have a mezzanine interface for standard PMC modules, which can be used effectively for interconnections between system components.
The PMC-SCI adapter module is a general-purpose interface for connecting PMC-based cPCI systems through SCI links into integrated SAN-based RT systems (for DAQ, trigger, and control) with memory-mapped distributed data processing. The module can be used in different applications such as scalable interconnection of I/O, bus-to-bus bridging, and computer clusters. Its 800 Mbytes/s bidirectional SCI link is well suited to moving large volumes of data. The small application-to-application latency (2.3 microseconds) reduces the overhead of inter-node control messages, leading to the best possible scalability for multi-node applications. SCI's performance is achieved by taking maximum advantage of fast point-to-point links and by bypassing the time-consuming operating-system calls and protocol-software overhead found in traditional networking approaches. SCI provides hot-plug cable connections, and redundant SCI modules can be used to increase fault tolerance. It supports SCI ring and switch topologies. Large multiprocessor clusters can be built using Dolphin's interconnect switches.
The PCI-64 adapter module opens an effective way to build PC-based system area networks for distributed data processing and for clustering of computers and servers on the basis of SCI. Its parameters are similar to those of the PMC-SCI module. PCI and VME configurations can be combined.
Basic technical parameters of the PMC-SCI module:
Link speed - 400 Mbytes/s (800 Mbytes/s duplex)
SCI standard - ANSI/IEEE 1596-1992
PMC specification - IEEE 1386.1 standard, 32- and 64-bit, 33 MHz PCI bus (rev. 2.1, 170 Mbytes/s operation)
Performance - up to 170 Mbytes/s throughput, 2.3 microsecond latency
Power consumption - static 5 W, dynamic 6 W
Cable connection - parallel STP copper cable (1-7.5 m) or parallel optical link PAROLI-SCI (1-150 m)
Topologies - point-to-point and switch
The modular SCI switch (MS-6E), with high data throughput and low message latency for SANs, provides 4 x 800 Mbytes/s duplex user ports and 2 x 800 Mbytes/s duplex expansion ports. The internal switch provides 1.28 Gbytes/s bandwidth, a port-to-port latency of 250 nanoseconds, and dual fans for reliability and increased circuitry lifespan. Clusters are supported by hot plugging. The MS-6E switch is a high-performance solution for system area networks and server clusters. Cluster nodes connect through four duplex 400 Mbytes/s ports. For scalability to larger clusters, the MS-6E supports redundant expansion ports, which allow up to 12 switches to be cascaded into system area networks of up to 32 ports. The MS-6E can also be configured as a standalone 6-port switch.
The SCI architecture is designed to grow to as many as 64K nodes, which leaves plenty of headroom for future expansion. The SCI switch's "port fencing" feature guarantees that a node failure will not prevent the cluster from functioning. Hot-pluggable ports allow the addition or removal of nodes without halting cluster applications. The switch is built around open standards and supports the IEEE/ANSI SCI standard and adapter modules for standard buses such as PCI and SBus. The dual expansion ports provide highly reliable redundant links. Dual fans increase the MTBF and circuitry lifespan. The cable connection is based on parallel STP copper cable up to 5 m or the parallel optical link PAROLI-SCI up to 150 m. The switch is compatible with 19" racks for ease of mounting and has an auto-ranging power supply.
Macro-structures at the molecular level (M-modules) depend on the system topology. Many multiprocessor cPCI crates, as nodes, can be interconnected by system area networks ("big bus" interconnections) into large (up to kilo-processor) systems to support distributed integrated RT systems for DAQ, control, and data-processing applications. The number of A-modules on a bus is limited to 16 (at most 32) because of physical parameters and poor scalability. A set of A-modules in a single crate or in multi-crate systems is sectioned as a macro-module of the system model. Sectioning of modules is based on the existing CompactPCI, VME/VXI, PC MB, or SCI standards. Multi-crate systems are one way to construct big systems. The big-bus-like approach is used to develop system area networks (SANs). SCI is one of the best approaches to a scalable multiprocessor system architecture, with the advantages described below.
Interactions between modules within an RT system are based on small-packet transfers with split transactions. There are strong interactions of N nodes on shared memory resources (for example, direct access to memory in SMP systems), weak interactions between A-modules (for example, message passing in MPP systems), and intermediate interactions based on external memory devices (disks and tapes in clustered systems). Strong SCI interactions are based on small-packet transactions (send and response packets with echoes). The packet formats include writexx, readxx, movexx, and locksb commands, where xx represents one of the allowed data-block lengths (the number of data bytes immediately following the packet header).
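A toy model of these split transactions, using Python dicts in place of the wire format (the field names are illustrative, not the SCI header layout):

```python
MAX_PAYLOAD = 64   # SCI payloads carried after the header, up to 64 bytes

def make_send(command, target, source, payload=b""):
    """Build a send packet for one of the commands named above;
    'xx' in writexx/readxx/movexx is the payload length in bytes."""
    if command not in ("write", "read", "move", "locksb"):
        raise ValueError("unknown SCI command")
    if len(payload) > MAX_PAYLOAD:
        raise ValueError("payload exceeds 64 bytes")
    name = command if command == "locksb" else command + str(len(payload))
    return {"cmd": name, "target": target, "source": source,
            "payload": payload}

def make_echo(packet):
    """Echo returned to the sender to confirm receipt and free buffers."""
    return {"cmd": "echo", "target": packet["source"],
            "source": packet["target"]}
```

In a real SCI ringlet the echo flows back around the ring, letting the sender retire its buffer long before the response packet arrives.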
Scalability is a fundamental requirement for high-performance modular multiprocessor systems. The requirements of an application change, and can be much greater tomorrow than today; the number of processors should scale up to kilo-processor systems. Good linear scalability is a problem for today's high-performance computer systems. Adding A-modules, L-modules, and S-modules to an RT system supports good scalability and provides more performance and throughput for the multiprocessor system.
A distributed-memory model for a multiprocessor system with an SOS architecture should be fundamental to support high-performance parallel-pipeline data processing (computing) in RT applications. Direct access by any processor to any memory in the single address space of the integrated system is similar to the SMP model. The big address field (64 bits) supports up to 256 Tbytes of memory for each node. The register field in the high part of node memory (256 Mbytes) includes registers with ROM (2K), initial units space (+2K), and available space. Many additional problems in an integrated kilo-processor system arise with cache coherency. Cache coherency in multiprocessor systems is required to keep data available to all processors during distributed data processing (parallel computing) in real time.
Coherency is the problem in distributed multiprocessor systems where many processors attempt to modify a single datum, or hold their own copies of it in their caches, at the same time. Coherency, implemented in software or in hardware, is required to prevent multiple processors from modifying the same data simultaneously. A cache-coherency protocol based on a snoopy bus is limited by the backplane, and it should migrate to a more scalable modular multiprocessor system in the form of hardware cache coherency.
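As a minimal illustration of software-implemented coherency, here is a toy write-invalidate sketch with a directory of sharers; this is our own simplification, not an SCI protocol implementation:

```python
class SoftwareCoherence:
    """Toy software write-invalidate protocol: a directory records
    which nodes hold a cached copy of each datum, and a writer must
    invalidate all other copies before modifying the shared value."""

    def __init__(self):
        self.memory = {}    # address -> current value
        self.sharers = {}   # address -> set of node ids holding a copy

    def read(self, node, addr):
        # record the reader as a sharer, then return the value
        self.sharers.setdefault(addr, set()).add(node)
        return self.memory.get(addr)

    def write(self, node, addr, value):
        # invalidate every other cached copy, then update memory
        invalidated = self.sharers.get(addr, set()) - {node}
        self.sharers[addr] = {node}
        self.memory[addr] = value
        return invalidated   # nodes that must refetch the datum
```

The invalidation set returned by `write` is exactly the traffic that grows with the number of sharers, which is why snoopy-bus schemes stop scaling beyond a single backplane.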
The topology of modular RT systems can be constructed from the required set of modules (A-modules, L-modules, and B- or S-modules) according to the application. It should be a matrix for DAQ systems if the data sources are based on matrix detectors, or a 3-D topology if the experiment is based on 3-D detectors. In control fields the system should have a topology that follows the structure of the accelerator part (linear or ring). An MB-based module is connected by L-modules into a system area network with the required topology; an N-node, as a microprocessor chip (or as a small mezzanine board), is connected by L-modules into different system topologies.
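For the matrix case, the detector-cell-to-node mapping can be sketched as follows (the 4 x 4 size echoes the 16-module example given in the summary; the function names are our own):

```python
ROWS, COLS = 4, 4   # a 4 x 4 = 16-node matrix of A-modules

def node_id(row, col):
    """Map a (row, col) detector cell onto an SCI node id."""
    if not (0 <= row < ROWS and 0 <= col < COLS):
        raise ValueError("cell outside the matrix")
    return row * COLS + col

def ringlet_next(node):
    """Next hop when each matrix row is closed into an SCI ringlet."""
    row, col = divmod(node, COLS)
    return row * COLS + (col + 1) % COLS
```

Closing each row into a ringlet, with L-modules bridging between rows, is one simple way to realize a multi-ringlet matrix topology.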
A technology-independent system architecture is ready to support new technologies and provides a long system lifetime with modules that can be upgraded at different levels. An SCI-based SOS network architecture with a highly modular structure does not depend on changing technology and should consist of the required modules.
A standard construction of special mechanics is required for SCI-standard modules and crates. To simplify the problem, PC mechanics and PC boards with the PCI local bus should be used as a platform for A-modules in distributed systems. Hard disks should be used as reliable distributed external memory near each processor module. Traditional Ethernet should be used as an additional communication medium for system initialization, user access, and services.
Summary.
A system area network architecture for scalable open systems should be used for effective (performance/price) construction of multilevel data-processing RT systems for detector DAQ and trigger applications on the basis of small PC MBs or PXI/cPCI and SCI link modules. Hardware coherency in multiprocessor systems supports high performance at a high price, whereas software coherency provides good performance at a low price.
A distributed multiprocessor system based on PC MBs (A-modules) and link modules (L-modules or S-modules) without hardware cache coherency is discussed as an example of a low-cost, effective approach to DAQ and control RT systems. Commercial link modules are available for SOS-system interconnections (Dolphin's PMC-SCI module for modular RT systems, the PCI-64 adapter module for PC-based distributed data processing, and a switch for high-speed interconnections between ringlets).
Different topologies should be constructed from these modules according to the requirements of the application. DAQ and trigger systems with a 2-D matrix topology should be used, which can consist of 4 x 4 = 16 single-processor modules (A-type). A toroidal topology or a 3-D matrix system topology should be used for 3-D detectors. Modules for short distances (up to 5 meters) are based on copper links; long-distance modules (up to 1-2 km) are based on fiber-optic links. For more performance, an SMP module with 2-4 processors on a board should be used.
Control systems can be divided into a number of sectors, which should have subsystems (sub-matrices) interconnected in the same way. The best long-distance connections for control and monitoring are fiber-optic modules. Control and experimental areas should be interconnected into distributed integrated systems on the basis of system-area-networking technology. For slow control, traditional standard connections (CAN) or Ethernet can be used. A set of functional link modules (L-modules without hardware coherency) available from Dolphin was described above.
All general problems of scalable multiprocessor-system development and application were discussed at conferences, from the ICSNET'91-95 symposiums on modular systems and networks in St. Petersburg up to this year; the publications are open on the intranet sites of the Elics community (http://elics.org.ru/) and were published in the paper proceedings.
References
1. A. Bogaerts, J. Buytaert, R. Divia, H. Muller, C. Parkman, P. Ponting et al. Applications of the Scalable Coherent Interface to Data Acquisition at LHC. CERN RD24.
2. D.B. Gustavson, V.I. Vinogradov (Stanford University, USA; INR RAS, RF). Advanced Systems and Networks Architectures. Plenary report, ICSNET/1993, St. Petersburg. http://elicsnet.ru/ICSNET/icsnet93.htm
3. D.B. Gustavson (Stanford, USA). Tutorial: The ANSI/IEEE Standard 1596 Scalable Coherent Interface (SCI). ICSNET/1993, St. Petersburg. http://elicsnet.ru/ICSNET/icsnet93.htm
4. St. Kempainen (National Semiconductor Corp., CA, USA). IEEE 1596.3 Low Voltage Differential Signaling and Low-Cost Implementation. ICSNET/1993, St. Petersburg. http://elicsnet.ru/ICSNET/icsnet93.htm
5. Bin Wu (CERN-Norway University). Distributed SCI-Based Data Acquisition Systems Constructed from SCI Bridges and SCI Switches. ICSNET/1993, St. Petersburg. http://elicsnet.ru/ICSNET/icsnet93.htm
6. David B. Kirk, P. Rayakumar (IBM Federal Systems Company, NY, USA). Multimedia Support in FUTUREBUS+. ICSNET/1993, St. Petersburg. http://elicsnet.ru/ICSNET/icsnet93.htm
7. F. Bal, H. Muller, P. Ponting (CERN). Developments in the Implementation of Modular Systems for Design, Production and Diagnostic Purposes. ICSNET/1993, St. Petersburg. http://elicsnet.ru/ICSNET/icsnet93.htm
8. Peter Thompson (SGS-Thomson Microelectronics Limited, Almondsbury, Bristol, U.K.). Small-Scale Networks for General Systems Interconnect. ICSNET/1995, St. Petersburg. http://elicsnet.ru/ICSNET/icsnet93.htm
9. Chris Brown (A.I. Vision Research Unit, Sheffield University, Western Bank). A High-Speed Serial Communication Architecture for Machine Vision Systems. ICSNET/1995, St. Petersburg. http://elicsnet.ru/ICSNET/icsnet93.htm
10. D.B. Gustavson (SCU-SCIzzL, USA), V.I. Vinogradov (INR RAS, RF). Status and Development of Advanced Distributed Modular Systems and Networks on the Base of SCI; and Qiang Li, David B. Gustavson (SCU, USA), David V. James. Optimizing Processor Architectures for Use in Multiprocessor Systems Based on the Scalable Coherent Interconnections. ICSNET/1995, St. Petersburg. http://elicsnet.ru/ICSNET/icsnet93.htm
11. J. Hansen (Niels Bohr Institute, Copenhagen, DK), R. Dobinson, Stefan Haas, Brian Martin, Minghua Zhu (CERN, Geneva, CH). Realisation of a 1000-Node High-Speed Switching Network. ICSNET/1995, St. Petersburg. http://elicsnet.ru/ICSNET/icsnet93.htm
12. O. Panfilov (AT&T / Global Information Corp., San Diego, CA 92127, USA). Comparative Performance Analysis of Crossbar Switches and Multiple Buses. ICSNET/1995, St. Petersburg. http://elicsnet.ru/ICSNET/icsnet93.htm
13. M. Migliardi, M. Maresca, P. Baglietto, N. Zingirian (DIST, University of Genoa). Performance Evaluation of the SM-IMP Heterogeneous Parallel Distributed Architecture. ICSNET/1995, St. Petersburg. http://elicsnet.ru/ICSNET/icsnet93.htm
14. D.B. Gustavson, Q. Li (SCU, USA), V.I. Vinogradov (INR RAS, RF). SCI-Based Modular Multiprocessor Systems. ICSNET/1997. http://elicsnet.ru/ICSNET/ICSNET97/SESS1.HTM
15. Andre Bogaerts (CERN, ECP Division, Geneva, Switzerland), Manfred Liebhart (Technical University of Graz, Austria, and CERN). Adaptive Performance Optimizations for SCI. ICSNET/1997. http://elicsnet.ru/ICSNET/ICSNET97/SESS1.HTM
16. V.I. Vinogradov (INR RAS, Russia), Pavel Leipunski (Sequent Computer Systems Corp.). Advanced SCI Standards and NUMA-Q Architecture. ICSNET/1997, St. Petersburg. http://elicsnet.ru/ICSNET/ICSNET97/SESS1.HTM
17. Fernand Quartier (SPACEBEL Informatique, Brussels, Belgium). SAMSON - Scalable Architecture for Multiprocessor Systems. ICSNET/1997, St. Petersburg. http://elicsnet.ru/ICSNET/ICSNET97/SESS1.HTM
18. V.I. Vinogradov (INR RAS), A.N. Lutchev (Sequent Co.). Scalable SCI-Based SMP System Applications. ACS-1998, MSU, Moscow, Russia. http://elicsnet.ru/ACS/acs98/ACS98TEC_KOI.HTM
19. V.K. Levin (RAS, Moscow, MSU, Russia). Creation of Massively-Parallel Supercomputers as Opened Systems. ACS-1998. http://elicsnet.ru/ACS/acs98/ACS98TEC_KOI.HTM
The multimedia presentation and pictures for this manuscript are given below in the Appendix.
�����������������������������������������������������������������������������������������������<br />
�������������������������������������������������������������������������������������������<br />
�������������������<br />
�����������©�������������������������������������������������������<br />
����������������������������������������¢������������������������¥�������������������<br />
�������������©���������������������<br />
�����������������������������������������������<br />
�����������������������������������������������������������������������©�������������������<br />
�����������������������������������������¥�����������������������������������������������<br />
� ������������� � ����������������������������������������������������������¢<br />
�����������������<br />
��� �����������<br />
���������������������������������������������������������������������������<br />
���������������������©�����������������������������������������������������������������©�����������<br />
������������������������������������������¢����������������������������������������������������<br />
� ������������� ����������������������������������������� � ���������<br />
�������������������<br />
�����©��������������� ���������������������������������������������������������<br />
�����<br />
�����������������������������������������������������������������������������������¥���������<br />
�������������������������������������������������������©�������������������������<br />
¤�� �������¨��§�������§���������©�����©����¨¥�����������§��<br />
�¥����������������������� �������������������������<br />
�<br />
�������������������������������������������������������������������������������������<br />
�������������������������������������������������������������������������������������������������<br />
���������������<br />
���������������������������������������������������������������������<br />
�������������������������������������������������������������������������������������������<br />
�����������������������������������������<br />
���������������������������������������<br />
���������������������������������������������������������������������������������������<br />
�����������������������������������������¨���������������������������������������������������������
����������������������������������������������¢����������������������<br />
���������������������������<br />
�����������������������������������������������������������<br />
����������������������������������������������¢����������������������������������������<br />
�����������������������������������������������������������������������������������������<br />
���������������������������¦�����������������������������������������������������������������<br />
�����������©�������������<br />
PC<br />
Bit3<br />
CCI<br />
VME<br />
VME(64x)<br />
G-LINK<br />
HSC<br />
Target<br />
Module<br />
VME(9U)<br />
VME<br />
JTAG<br />
�����������������������������£�������������������������������������<br />
�����������������¥�����<br />
�<br />
�����������������������������������������������¥�����������������������������������������<br />
�������������������������������������������������������������������������������������¥���������<br />
���������������¥�������������������������������������������������������������������������<br />
���������������������������������������������������������������������������¨�������������<br />
�������������������������<br />
�������������������������������������������������������������������<br />
�����������������������������������<br />
�����������������������������������������������������¨�<br />
��������������������������������������������������������������¢������������������������������������<br />
�������������������������������������������������������������������������������������������<br />
���������������������������������<br />
����������������������������������������������������������������������������������������¢<br />
�����������������������������������������������������������������������������������<br />
�����������������<br />
�<br />
���������������¥�������������������©���������������������������������������������������<br />
¡<br />
�������������������������<br />
�����������������<br />
������������������������������������� ���������������������������������������<br />
¡<br />
�����������<br />
����������������������¢����������������������������������������������������������<br />
¡ �����������<br />
�����������������������������������������������������������������������������<br />
�������������������������������������������������������������������������������<br />
� ����������������������������������������� �������������<br />
�������������������������<br />
���������������©�����������������������������������������������������¥�����������������������<br />
�������������������������������������������������������������������������������������<br />
���������������������������������������������������������������������������������������������������<br />
¡ �������������������<br />
���������������������������������������������������������������������<br />
�����������������������£���������������������������������������������������������<br />
�������������������������������<br />
�<br />
���������������������������©���������������������������������������������������������������<br />
¡<br />
¡ ���������������������������������������������������������������<br />
���������¥�����<br />
�����������������������������������������������������������������������������������©�<br />
�������¥�����������������������������������������©�������������©�����������©���������<br />
���<br />
�������������������������<br />
�����������������������������������������������������������������������������������������<br />
���������©���¥�����¨���¦���������������������������������©�����������������������������<br />
CCI<br />
CCI<br />
OE/EO<br />
GLINK<br />
PPE<br />
HSC<br />
N1<br />
TDI<br />
TCK<br />
eTBC TMS<br />
TDO<br />
/TRST<br />
JTAG (J3)<br />
N-lines [21:2]<br />
CPLD<br />
(SPE)<br />
STDO<br />
STCK<br />
STMS<br />
STDI<br />
/STRST<br />
ASP<br />
PTDO<br />
PTCK<br />
PTMS<br />
PTDI<br />
/PTRST<br />
CPLD<br />
STDO<br />
STCK<br />
STMS<br />
STDI<br />
/STRST<br />
ASP<br />
PTDO<br />
PTCK<br />
PTMS<br />
PTDI<br />
/PTRST<br />
CPLD<br />
STDO<br />
STCK<br />
STMS<br />
STDI<br />
/STRST<br />
ASP<br />
PTDO<br />
PTCK<br />
PTMS<br />
PTDI<br />
/PTRST<br />
VME (J1,J2)<br />
CPLD<br />
STDO<br />
STCK<br />
STMS<br />
STDI<br />
/STRST<br />
ASP<br />
PTDO<br />
PTCK<br />
PTMS<br />
PTDI<br />
/PTRST<br />
����������������� � ���������������������������������������������������������������������<br />
���������������������������������������������������������������������¥���������������������������<br />
�������������������������������������¥�¥���������������������������������������������<br />
�����<br />
���������������������������������������¦�������<br />
�������������������������������������������©���������������<br />
��������������������¢��������������<br />
�������������������������������<br />
���������������������������������¥���������������©�<br />
�������������������������������������������������������������������������������<br />
�������<br />
���������������������������������������������������������������������������������<br />
�����������������������������������������������������������������<br />
���������������������<br />
�������������������������������������������<br />
�<br />
���������������£�������������������<br />
���������������������������������������������<br />
�������������������������������������������������������������<br />
�����������������������<br />
�¨�����������������������������������������������������������<br />
���������������������������<br />
����������������������������������������������������¢������������������<br />
�����������<br />
�������������������������������������������������������������������������������£�����������<br />
�������������������������������������<br />
��������������� ������������������� �������<br />
���������������������<br />
����� ���¦��� ���<br />
�������<br />
����� ������� ���<br />
�������������<br />
������������� ���¦��� �����<br />
���������������������������<br />
����������������� ��������������� ������� �¦���<br />
� ���������������������������������������������������������������<br />
�����������������������������������������������������������������������������������<br />
�������������������������������������������������������������������������<br />
�����������<br />
���������������������������������������������������������������������������������������������������<br />
������������������������� ���������������������£��� ���<br />
�����������������������<br />
�������������������������������������������©���������������©�������������������<br />
���������������������������������¦���������������������������������<br />
���������������������<br />
�����������£����������¢��������������������������������������������������������������<br />
¡ ������������¢������<br />
���������������������������������������������������������������<br />
�������������������������������������������������������������<br />
�����������������������������<br />
�����������������������������������������������������������������������������������������������<br />
�����������������������������������������������������������������������¥���������������<br />
�������������������������������������������������������©�����������������������������������<br />
� ��������������������������������� � ������� �����<br />
�©�������������������������������������<br />
���������������������������������������������������¥�����������������������������©�����<br />
��������������¢������������������������������������������������������������©���������������������
�����¥����������������������������������������������¢����������������������������������������������<br />
���������������������©���������������������������<br />
� �����¨������������¥�§�����©�����¥��¨������©�������¥<br />
��������������������������������������������¢������������������������¥�������������������<br />
��������������������������������������¢��������������������������������������������©�������<br />
�����������������������������������������������������������������������������������������������<br />
�������������<br />
������������������������¢������������������������������������������������������������<br />
�����������������������������������������������������������������<br />
���������������������������<br />
����������� ����������������������������������������������� � �����������������<br />
���<br />
�������������������������������������������������������������������©�������<br />
��¤ ����¥¨�����¨������¥<br />
�����������������������������������������������������������������������������������������������<br />
¡<br />
�����������������������������������������������������©���������������������������<br />
�������������������������������������������������������������������������������������������������<br />
�����������������������<br />
���������¥�������������������������������������������������������¥�<br />
�������������������������������������������������<br />
�������������������������������������<br />
�������������������������������������������������������������������������������©�������������<br />
�����������������������������������������������������������������������������������������������<br />
�������������������������������������¦�������<br />
���������������������������<br />
�������������������������������������������������������������������������������<br />
¡ ����������������������������������������¢��������������������������������������<br />
���������<br />
� ������� �������������������©�������������������������������������������<br />
�������������<br />
���<br />
���������������������������������������������������������������¥�������������������<br />
����������������¢��������������������������£�������������������������������������������������<br />
�����������������<br />
�����������������������������������������������������������������������<br />
�����������������������������������������������������������������¥�������������������<br />
����������������������������������������������������������������¢����������������©���������������������<br />
�������������©�����������������������������������������<br />
�����������������������������¨������������������¢����������������������������������������<br />
����������¢��������������������������������������������������������������������������������<br />
���������������������������������������������������������������������������������������<br />
¡ ���������������������������������������������������<br />
�����������������������������������<br />
���������������������������������������������������������������������������<br />
�����������������������������������������������������������������������������������<br />
�������������������������������������������������������������¥�������������������������<br />
�������������������������������������������������������������������������������<br />
���������¥���������������������������������������������������������������������������<br />
�������������������������������������©�����������������������������������������������������������<br />
��������¢��������������������������������©�����������������������������������������������������<br />
�������������������������������������������������������<br />
��¤�¤ ������¥¨������������������¥�§��<br />
�����������������<br />
�������������������������������������������������������������������������<br />
¡ �����������������������¥�������������������������������<br />
�����������������¥�����������������<br />
�������������������������������������������������������������������������������������<br />
���������������������������������������������������������������������������������������������<br />
¡ ������������������� �������������������������������������������������������<br />
���������<br />
��������������������������������������� ��������������������������������¢��¥�������<br />
���<br />
�����������������������������������������������������������������������������<br />
�������������������������������������<br />
�������������������������������������<br />
���������������������������������������������¥�����������������������������©���������<br />
�����������������������������������������������������������©���������������������������������<br />
���������������������������<br />
�������������������������������������������������������������������<br />
�����������<br />
�����������������������������������������©�����������������������������������������������<br />
� �����������������������������������������������������<br />
������������¢������������������������<br />
�������������������������������������������������������������������������������������������<br />
�������������������<br />
����������������������������������¢����������������������¦��������¢<br />
���������������<br />
���������������������������������<br />
������� �¥��������������������¢����������������������������������������������������������<br />
�<br />
������������������������������¢����������������������������������������������������<br />
������������������������������������������������������������¢�������������¢�������������¢<br />
�����������£���<br />
���������¦������¢������������������������¦������������¢������<br />
������� ������������������¢����������������������������������������������������������������<br />
�<br />
�������������������©�������������©���������������<br />
���������������������������������<br />
����������¢��������������������������������������������������������������������������<br />
����������������������������������������¢�������������¢�������������¢�����������������¢<br />
�����������¦���<br />
�����������������������¦������������¢������<br />
������� ��������������������������¢����¥���������������������������������������������<br />
�<br />
��������������������������������������������������¢������������������������������<br />
��������¢������ �������������£¢¦�����¦���<br />
�������<br />
�����¥�����������������������������������������¡ ¢ ¡£¥¤§¦¢¦¡¨¡¨¢¨¥©§���������¡��©��������<br />
�<br />
������� ������������¢������������£�����������������¥�������������������������������<br />
�<br />
�����������������������£¢�����������������������������������������¢��������������<br />
�������¦����¢������¦���������������������������<br />
�<br />
�¥��������������������¢���� ������������������������������������� �����<br />
���������<br />
�����¥��������������� ���������������<br />
�������������©�������������������������<br />
������������¢����������������������������������������������������������������<br />
��������������������������������������������������������������������¢�����������������¢<br />
�����������¦���<br />
����������¢��������������£��¢������<br />
�����¥������������������������������������¢����������������<br />
�<br />
¡ ¡£�¤§¦¡¦¢¨¡¨¡¨¥©��¡�����¡�¡�¢ �©����¡���<br />
�¢<br />
������������������������������������������¢����������������<br />
�<br />
¡ ¡£�¤§¦¡¦¢¨¡¨¡¨¥©����¡�����¡�¢����©��������<br />
�¢<br />
¡ ¢£¥¤§¦¢¦¡¨¡¨¡¨�©§�¡�¢ ¡�¡�¢��©����¡���<br />
���������������������������������������������������¡<br />
¡ ¢£¥¤§¦¢¦¡¨¡¨¡¨�©§ ���©����¢���<br />
���������������������������������������������������������¡<br />
¡ ¡£�¤§¦¡¦¢¨¡¨¡¨�©�������©����¡���<br />
��������������������������������������¢������������������¢<br />
¡ ¡£�¤§¦¡¦¡¨¢¨¡¨¥©��������¡���¥©§�¢�¡¦����¢�¡ ¡�������¡¦¡£��¡�¢�¡���� ���¦¡�¢�����¡�¥©��¢ ¡�¡���<br />
���������¢<br />
¡ ¡£�¤§¦¡¦¡�¢�¡�¡�¢�¡�¥©��¡���¥©§�¢£¡¦����¢�¡�¡���¡���¡��¦����� ¡�¡¦����¡�¡�¢� �¥©��¢ ¡�¡���<br />
���������¢
Studies for a Detector Control System for the ATLAS Pixel Detector<br />
Bayer, C. 1 , Berry, S. 2 , Bonneau, P. 2 , Bosteels, M. 2 , Hallewell, G. 3 , Imhäuser, M. 1 , Kersten, S. 1+ ,<br />
Kind, P. 1 , Pimenta dos Santos, M. 2 , Vacek, V. 5<br />
1 Physics Department, Wuppertal University, Germany; 2 CERN, Geneva, Switzerland;<br />
3 Centre de Physique des Particules de Marseille, Campus des Sciences de Luminy, Marseille, France;<br />
5 Czech Technical University, Prague, Czech Republic<br />
Abstract<br />
For the ATLAS experiment at the LHC, CERN, it is planned<br />
to build a pixel detector containing around 1750 individual<br />
detector modules. The high power density of the electronics<br />
requires an extremely efficient thermal management system:<br />
an evaporative fluorocarbon cooling system has been chosen<br />
for this task. The harsh radiation environment presents<br />
another constraint on the design of the control system, since<br />
irradiated sensors can be irreparably damaged by heating up.<br />
Much emphasis has been placed on the safety of the<br />
connections between the cooling system and the power<br />
supplies. An interlock box has been developed for this<br />
purpose. We report on the status of the evaporative cooling<br />
system, on the plans for the detector control system and on the<br />
irradiation studies of the interlock box.<br />
I. INTRODUCTION<br />
The pixel detector is the component of the ATLAS inner<br />
tracker closest to the interaction point. From the monitoring<br />
point of view, the base unit is a detector module consisting of<br />
a pixelated silicon sensor, with 16 bump-bonded front end<br />
readout chips. A flexible hybrid circuit is attached and carries<br />
a “Module Controller Chip” which organizes data<br />
transmission from the module via a bi-directional optical link.<br />
The optical link is located close to the detector module and<br />
contains a VCSEL (vertical cavity surface emitting laser) and a<br />
PIN diode, together with their corresponding transceiver chips<br />
“VDC” and “DORIC” ([1], Figure 1). Besides the voltages<br />
necessary to drive the different electronic components, one<br />
temperature sensor per module must be handled by the<br />
detector control system (DCS). The detector modules are<br />
glued on the different support structures (“staves” in the barrel<br />
part, disks in the “end-caps”) which contain the cooling pipes<br />
necessary to remove the heat dissipated by the electronics.<br />
The following three sections describe the three main<br />
components of the pixel DCS: the cooling system, the power<br />
supplies and the temperature monitoring and interlock<br />
system. Emphasis is put on the characterization of the<br />
hardware. Wherever possible we try to use ATLAS standard<br />
components, especially the ELMB (Embedded Local Monitor<br />
Box) [2]: a multi-purpose acquisition and control unit using<br />
the CAN fieldbus, developed by the ATLAS DCS group.<br />
Section V contains an overview of the whole Pixel DCS and<br />
reports on the first steps in the implementation of a SCADA<br />
(Supervisory Control And Data Acquisition) system. The<br />
summary in section VI includes an outlook on future plans.<br />
+ Corresponding Author: susanne.kersten@cern.ch<br />
[Figure 1: Schematic of a pixel detector module with relevant components of the Detector Control System. Labelled parts: flex hybrid circuit, temperature sensor, detector module, MCC (Module Controller Chip), silicon detector, 16 front end chips, cooling pipe, VDCp (VCSEL Driver Chip), VCSEL, kapton cable, optical fibre, PIN diode, DORICp (Digital Opto Receiver IC), opto board. Supply voltages: Vdet: detector depletion voltage; Vdda: front end chips, analog part; Vdd: front end chips, digital part, MCC; Vpin: voltage for the PIN diode; Vvdc: voltage for VDCp and DORICp; Viset: control voltage for the VCSEL.]<br />
II. THE COOLING SYSTEM<br />
A. The Challenge<br />
For a ten-year operational lifetime in the high radiation field<br />
close to the LHC beams, the silicon substrates of the ATLAS<br />
pixel detectors must operate below ~ -6 °C with only short<br />
warm-up periods each year for maintenance. Around 15 kW<br />
of heat will be removed through ~ 80 parallel circuits, each<br />
cooling a series pair of pixel barrel staves (208 or 290 W) or<br />
disk sectors (96 W). Evaporative perfluoro-n-propane (C3F8)<br />
cooling [3] has been chosen since it offers minimal extra<br />
material in the tracker sensitive volume (flow rates about 1/20<br />
of those in a monophase liquid system), with a refrigerant that<br />
is non-flammable, non-conductive and radiation resistant.<br />
Since the pixel stave and disk sector “local supports” are of<br />
the lowest possible mass composite construction, the detectors<br />
can exhibit a very rapid temperature rise (~ 5 K s⁻¹) in the<br />
event of a loss of coolant or cooling contact. A rapid thermal<br />
interlock with the module power supplies is therefore<br />
indispensable. Thermal impedances within the local supports<br />
require C3F8 evaporation in the on-detector cooling channels<br />
at ~ -20 °C for a silicon operating temperature of ~ -6 °C.<br />
B. The Recirculator and Principle of Operation<br />
A large-scale (6 kW) prototype circulator (Figure 2) can supply<br />
up to 25 parallel cooling circuits, through interconnecting<br />
tubing replicating the lengths and hydrostatic head differences<br />
expected in the final installation in the ATLAS cavern.<br />
Figure 2: Schematic of Prototype Evaporative Recirculator<br />
It is centred on a hermetic, oil-less piston compressor 1<br />
operating at an aspiration pressure of ~ 1 bar abs and an output<br />
pressure of ~ 10 bar abs. Aspiration pressure is regulated via<br />
PID variation of the compressor motor speed from zero to<br />
100 %, based on the sensed pressure in an input buffer tank.<br />
C3F8 vapor is condensed at 10 bar abs and passed to the<br />
detector loads in liquid form. A detailed description of the<br />
principle of operation is given in ref. [4].<br />
In the final installation, liquid will be distributed, and vapor<br />
collected, in fluid manifolds on the ATLAS service platforms.<br />
This zone will be inaccessible to personnel during LHC<br />
running, with local radiation levels and magnetic fields that<br />
exceed acceptable levels for a wide range of commercial<br />
control electronics. Local regulation devices will therefore be<br />
pneumatic: in each cooling circuit, the coolant flow rate will be<br />
proportional to the output pressure of a “dome-loaded”<br />
pressure regulator 2, placed ~ 25 m upstream of an injection<br />
capillary and piloted by analog compressed air in the range<br />
1-10 bar abs from an I2P (4-20 mA input) or E2P (0-10 V<br />
input) electro-pneumatic actuator 3. Actuators will receive<br />
analog set points from DACs, which will be either commercial<br />
control components 4 or an adjunct to the ATLAS ELMB<br />
monitor and control card [2]. Circuit boiling pressure (and<br />
hence operating temperature: at 1.9 bar abs, C3F8 evaporates<br />
at –20 °C) will be controlled by a similarly piloted<br />
dome-loaded back-pressure regulator 5.<br />
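As a back-of-envelope illustration of this set-point chain, the sketch below maps a desired pilot pressure onto the drive voltage of an E2P actuator with the linear 0-10 V input / 1-11 bar abs output characteristic quoted in footnote 3, and then onto a DAC code. The function names and the 12-bit DAC resolution are our own assumptions, not part of the actual system:<br />

```python
def e2p_voltage(pilot_bar, v_range=(0.0, 10.0), p_range=(1.0, 11.0)):
    """Drive voltage for an E2P actuator with a linear 0-10 V input /
    1-11 bar_abs output characteristic (the ranges of footnote 3)."""
    v_lo, v_hi = v_range
    p_lo, p_hi = p_range
    if not p_lo <= pilot_bar <= p_hi:
        raise ValueError("pilot pressure outside actuator range")
    return v_lo + (v_hi - v_lo) * (pilot_bar - p_lo) / (p_hi - p_lo)

def dac_code(voltage, full_scale=10.0, bits=12):
    """Nearest code for a hypothetical 12-bit DAC spanning 0..full_scale V."""
    return round(voltage / full_scale * (2 ** bits - 1))
```

A set point of 6 bar abs, for example, sits halfway along the pressure range and so corresponds to a 5 V drive.<br />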
C. PID Regulation of Coolant Mass Flow<br />
Coolant flow in each circuit will be PID-regulated to<br />
maintain the temperature of an NTC sensor on the exhaust,<br />
50 cm downstream of a cooled detector element, at least<br />
~ 10 °C above the C3F8 evaporation temperature. In this way<br />
it is not necessary for the cooling control system to detect<br />
how many detector modules on a cooling circuit are powered<br />
(or their instantaneous power dissipation).<br />
1 Model QTOX 125 LM; Mfr: Haug Kompressoren, CH-9015 St Gallen, Switzerland<br />
2 Model 44-2211-242-1099; Mfr: Tescom, Elk River MN 55330, USA<br />
3 Model PS111110-A; Mfr: Hoerbiger Origa GmbH, A-2700 Wiener-Neustadt, Austria: input 0-10 V DC, output pressure 1-11 bar abs<br />
4 Model 750-556; Mfr: Wago GmbH, D-32423 Minden, Germany, controlled through a Model 750-307 CAN coupler<br />
5 Model 26-2310-28-208; Mfr: Tescom Corp<br />
A PID algorithm has been implemented directly [4] in a<br />
microcontroller chip 6 of the same family as that used 7 for<br />
system programming and monitor functions in the ATLAS<br />
ELMB. The results indicated that with proportional flow<br />
control, stability in temperature of –6 ± 1 °C on remaining<br />
powered modules is achievable, with > 90 % of the supplied<br />
C 3 F 8 liquid evaporated in the on-detector tubing. In setting up<br />
the PID parameters, care was needed to ensure that the lower<br />
pressure limit was not less than the saturated liquid pressure at<br />
the C 3 F 8 liquid injection temperature.<br />
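The regulation strategy described above can be sketched as a textbook discrete PID loop. The gains, output limits and set point below are illustrative placeholders, not the parameters actually tuned in the microcontroller of ref [4]:<br />

```python
class PID:
    """Textbook discrete PID controller (illustrative sketch)."""
    def __init__(self, kp, ki, kd, out_min, out_max, dt=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        # out_min stands in for the lower pressure limit, which must not
        # fall below the saturated liquid pressure at the C3F8 injection
        # temperature (see text); out_max is the actuator ceiling.
        self.out_min, self.out_max = out_min, out_max
        self.dt = dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return min(self.out_max, max(self.out_min, out))

# Hold the exhaust NTC ~10 degC above the evaporation temperature
# (-20 degC at 1.9 bar_abs), i.e. a set point around -10 degC.
pid = PID(kp=0.2, ki=0.05, kd=0.0, out_min=2.0, out_max=10.0)
drive = pid.update(setpoint=-10.0, measured=-14.0)
```

Note how the output clamp enforces the lower pressure limit even when the raw PID term falls below it.<br />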
The tubing of the cooling circuits requires insulation (and<br />
possibly local surface heating in certain critical locations) to<br />
safely traverse the electrical services of other ATLAS subdetectors,<br />
which are located in an ambient air atmosphere<br />
with ~ 14 °C dew point.<br />
The results also demonstrated that PID coolant flow regulation<br />
allows a relatively simple insulation scheme to maintain the<br />
outer surface of the exhaust tubing above the local dew-point,<br />
reducing the insulation volume required in the extremely<br />
restricted service passages.<br />
A DAC adjunct is under design for the ATLAS ELMB [2].<br />
This would allow PID flow regulation through control loops<br />
from the ELMBs with one or more PID channels in the onboard<br />
microcontroller of each ELMB. While it is also possible<br />
to implement a software PID algorithm for each circuit in the<br />
final ATLAS SCADA software, it is not known at this stage<br />
whether the reaction time will be fast enough for our<br />
application.<br />
III. THE POWER SUPPLY SYSTEM<br />
Six different voltages are necessary for the operation of each<br />
pixel detector module (Figure 1). These must supply a<br />
depletion bias of up to 700 V and five low voltage loads,<br />
ranging from low power consumption up to two loads which<br />
draw 2-5 A. In order to support the ATLAS grounding<br />
scheme, all voltages will be floating.<br />
We aim to use a coherent power supply system which<br />
accommodates these diverse requirements and has a high<br />
level of local intelligence. A system able to make decisions<br />
autonomously - not relying on the functionality of the network<br />
- will reduce field-bus traffic and will enhance the safety of<br />
the detector. Error conditions (including over-current) will be<br />
handled in the power supply system, leaving recovery<br />
procedures to the operator or supervisory detector control<br />
station. The demand for significant local functionality implies<br />
the location of the system in a radiation-free environment.<br />
6 AT90S8515; Mfr: ATMEL Corp, San Jose CA 95131, USA:<br />
programmed from C via GNU toolkit<br />
7 ATMEL ATmega103 128k RISC flash µcontroller
For redundancy and grounding considerations, we require a<br />
high granularity of power distribution, with a total of ~ 4000<br />
separate channels. In order to handle this large number of<br />
channels efficiently and to speed up access, a power supply<br />
system is needed which offers the grouping of channels<br />
according to the characteristics of the detector and its<br />
installation: so called multi-voltage “complex channels” will<br />
be formed to power each pixel module. This is also required<br />
by the design of our interlock system: a single interlock signal<br />
should simultaneously turn off all power supplies to a module.<br />
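The grouping into multi-voltage “complex channels” might be modelled as follows. This is a hypothetical Python sketch: the channel names follow Figure 1, but the set-point values and the interface are invented for illustration and are not the actual supply system's API:<br />

```python
from dataclasses import dataclass, field

@dataclass
class Channel:
    name: str           # one of Vdet, Vdda, Vdd, Vpin, Vvdc, Viset
    setpoint_v: float   # placeholder set point, volts
    on: bool = False

@dataclass
class ComplexChannel:
    """All supplies of one pixel module, switched as a unit."""
    module_id: str
    channels: dict = field(default_factory=dict)

    def add(self, name, setpoint_v):
        self.channels[name] = Channel(name, setpoint_v)

    def switch_on(self):
        for ch in self.channels.values():
            ch.on = True

    def interlock(self):
        # A single interlock signal must remove all voltages from the
        # module simultaneously.
        for ch in self.channels.values():
            ch.on = False

mod = ComplexChannel("module_0")
for name, volts in [("Vdet", 150.0), ("Vdda", 1.6), ("Vdd", 2.0),
                    ("Vpin", 10.0), ("Vvdc", 2.5), ("Viset", 1.0)]:
    mod.add(name, volts)
mod.switch_on()
mod.interlock()   # one signal, all six channels off
```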
As the front end chips of the pixel detectors are fabricated in a<br />
0.25 µm deep submicron technology, they are very sensitive<br />
to transients. First measurements with a prototype power<br />
supply system have shown that line regulators are required.<br />
These represent an additional object for the DCS. Their<br />
location must be close to the detector, setting a specification<br />
for their radiation hardness. The LHC4913 8 is a possible<br />
candidate to meet our requirements, and its use is presently<br />
under investigation.<br />
IV. TEMPERATURE MONITORING AND<br />
INTERLOCK SYSTEM<br />
A. Principle<br />
Due to the great influence of the operating temperature on<br />
the longevity of the detector, each detector module is<br />
equipped with its own temperature sensor. Inaccessibility<br />
during long periods of one year or more requires a reliable<br />
and robust solution for the temperature measurement: we<br />
have chosen a method based on resistance variation. The<br />
information from the sensor will be fed to the ADC channels<br />
of the ELMB for data logging. In parallel, the signal will be<br />
fed to an interlock box, which compares it to a reference value<br />
and creates, in the case of excessive deviation, a hardwired<br />
logical signal which can be used for direct action on the<br />
power supplies. This solution protects the detector against<br />
risks associated with latch-up, de-lamination of a particular<br />
module from its cooling channel or failure of coolant flow to a<br />
particular parallel cooling circuit.<br />
B. Choice of the Temperature Sensor<br />
The very limited space requires a component available in an<br />
SMD 0603 package which can be read via a two-wire<br />
connection, implying the need for a sensor in the 10 kΩ range.<br />
A relatively large change of the resistance per Kelvin reduces<br />
the requirements on the precision of the electronic circuit. In<br />
order to avoid a calibration of each individual channel, we<br />
have been searching for a resistor with small tolerance limits.<br />
Operation in a magnetic field of up to 2.6 T and at a radiation<br />
dose of up to 500 kGy (corresponding to 10 years of operation<br />
in ATLAS) are prerequisites. For this reason the package<br />
material of the sensor was also taken into consideration.<br />
We found that the requirements listed above were best met by<br />
a 10 kΩ NTC resistor with a relative resistance change of<br />
4 % per kelvin 9, available with 1 % tolerance at 25 °C. The<br />
type 103KT1608-1P from Semitec is additionally available in<br />
a glass-coated package.<br />
8 Developed by ST Microelectronics (Catania, Italy) in the framework of the<br />
RD49 project, CERN<br />
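The resistance-to-temperature conversion for such an NTC can be sketched from the Steinhart-Hart-type formula quoted in footnote 9; the coefficients below are taken from that footnote, while the function name is ours:

```python
import math

# Steinhart-Hart-type coefficients quoted in footnote 9 for the
# 10 kOhm NTC (103KT1608-1P, 10 kOhm at 25 degC)
A, B, C = 9.577e-4, 2.404e-4, 2.341e-7

def ntc_temperature(r_ohm: float) -> float:
    """Convert NTC resistance in ohms to temperature in kelvin."""
    ln_r = math.log(r_ohm)
    return 1.0 / (A + B * ln_r + C * ln_r ** 3)

# 10 kOhm should correspond to roughly 25 degC (298 K)
t = ntc_temperature(10_000.0)
assert abs(t - 298.15) < 1.0
```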
C. The Interlock Box<br />
Since the interlock box is so closely related to the safety of<br />
the detector, we aimed at a pure hardware solution, which<br />
should not rely on any initialization via software or<br />
multiplexing. The interlock box should be able to work<br />
completely independently from other equipment. In addition<br />
to module heat-up, other error conditions, including a broken<br />
cable or temperature sensor and a short circuit, must also set<br />
the interlock signal. Negative TTL logic is employed.<br />
Figure 3: Electrical Schematic of the Interlock Box (one<br />
discriminator section per channel, a reference section common<br />
to all channels; NTC I-Box circuit, P. Kind, 11.2000, Uni Wuppertal)<br />
In order to reduce the influence of noise, a hysteresis of ~1 K<br />
is required between the setting of the alarm signal and its<br />
reset. A precision of ±1 K is required for the complete<br />
chain of temperature sensor, cables and interlock box;<br />
an accuracy of < 0.5 K for the interlock circuit alone is<br />
therefore adequate.<br />
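The set/reset behaviour with ~1 K hysteresis can be illustrated with a small software model. The real interlock box is a pure hardware solution, so this Python sketch (with an invented threshold value) only mimics the logic:

```python
class InterlockComparator:
    """Software model of one discriminator channel with ~1 K hysteresis.

    The actual circuit is hardware-only; this sketch illustrates the
    alarm set/reset behaviour described in the text."""

    def __init__(self, threshold_k: float, hysteresis_k: float = 1.0):
        self.threshold = threshold_k
        self.hysteresis = hysteresis_k
        self.alarm = False

    def update(self, temp_k: float) -> bool:
        if not self.alarm and temp_k > self.threshold:
            self.alarm = True                  # alarm set above threshold
        elif self.alarm and temp_k < self.threshold - self.hysteresis:
            self.alarm = False                 # reset only ~1 K below it
        return self.alarm

cmp_ = InterlockComparator(threshold_k=293.0)   # threshold value invented
assert cmp_.update(292.5) is False   # below threshold
assert cmp_.update(293.5) is True    # alarm set
assert cmp_.update(292.5) is True    # still set: within hysteresis band
assert cmp_.update(291.5) is False   # reset below threshold - 1 K
```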
Figure 3 shows the realization of the interlock circuit. A clean<br />
reference voltage is created by the reference section. The<br />
signal from the NTC is then compared to different thresholds,<br />
the op-amps acting as discriminators. The following NOR-gates<br />
create a two-bit pattern representing the different<br />
error condition combinations mentioned above.<br />
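A hypothetical sketch of how two discriminator bits might encode those error conditions: the actual bit assignment of the NOR-gate outputs is not specified in the text, so the thresholds, polarities and state mapping below are assumptions for illustration only:

```python
def error_pattern(u_sensor: float, u_hi: float, u_lo: float):
    """Hypothetical two-bit encoding of the discriminator outputs.

    Assumption: a broken cable or sensor drives the divider voltage out
    of range on one side, a short circuit on the other, while normal
    operation keeps it inside the window."""
    bit_hi = u_sensor > u_hi      # voltage above the high threshold
    bit_lo = u_sensor < u_lo      # voltage below the low threshold
    return (bit_hi, bit_lo)

def classify(bits):
    # Invented mapping of the two-bit pattern to error states
    return {
        (False, False): "normal",
        (True, False): "fault: broken sensor or cable",
        (False, True): "fault: short circuit",
    }.get(bits, "invalid")

assert classify(error_pattern(1.2, u_hi=2.2, u_lo=0.3)) == "normal"
assert classify(error_pattern(2.4, u_hi=2.2, u_lo=0.3)).startswith("fault")
```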
Several studies [4] have demonstrated that the electrical<br />
performance of the circuit is in good agreement with the<br />
expected error, which is composed of the error of the interlock<br />
circuit of ± 0.2 K, caused by the tolerances of the components,<br />
and of the tolerance of the NTC of ± 0.3 K (depending on<br />
the temperature).<br />
9 10 kΩ @ 25 °C; T(K) = 1/(9.577E-4 + 2.404E-4·ln(R) + 2.341E-7·ln(R)^3). Mfr:<br />
Ishizuka Electronics Co., 7-7 Kinshi 1-Chome, Sumida-ku, Tokyo 130-8512,<br />
Japan<br />
D. Irradiation Results on the Interlock Box<br />
As the location of the interlock box will be the ATLAS<br />
experimental cavern, the radiation tolerance of all its<br />
components must be investigated. Three types of possible<br />
problems must be studied in order to qualify the electronics:<br />
damage due to ionising (“total ionising dose”: TID) and<br />
non-ionising radiation (“non-ionising energy loss”: NIEL) and<br />
single event effects (SEE). The expected simulated radiation<br />
levels are given in [5]. In order to determine the “radiation<br />
tolerance criteria”, several safety factors are added; table 1<br />
summarizes them for the interlock box, based on the<br />
calculations described in [5].<br />
Table 1: “Radiation tolerance criteria” for the Interlock Box<br />
(10 years operation in ATLAS)<br />
RTC TID: 93 Gy<br />
RTC NIEL: 4.8·10^11 n/cm^2 (1 MeV)<br />
RTC SEE: 9.22·10^10 h/cm^2 (> 20 MeV)<br />
As a pre-selection, 3 irradiation campaigns were performed:<br />
• TID studies with a 2·10^4 Ci Co-60 source;<br />
• A neutron irradiation at the CEA Valduc 10 research<br />
reactor with an energy range up to a few MeV with<br />
maximum intensity at 0.7 MeV;<br />
• SEU studies at the 60 MeV proton beam of UCL 11.<br />
Usually five devices were irradiated per campaign. During all<br />
irradiation tests the components were powered, monitored and<br />
checked online for performance and power consumption. For<br />
the study of single event effects, a special program was<br />
additionally developed to monitor the complete circuit for<br />
transients or other temporary changes in the output. During<br />
the first TID campaign, the first selected NOR-gate (type<br />
CD74HC02M 12) showed severe problems, manifested by an<br />
increase of the power consumption. Several further<br />
NOR-gates were tested [6]. It was found that the radiation<br />
robustness depended not only on the logic family and the<br />
manufacturer, but also on the input pattern sent to the device<br />
during the irradiation. We found that the MC74LCX02D 13<br />
best met our requirements.<br />
Table 2: Components of the Interlock Box which passed all three<br />
irradiation tests<br />
Op-Amp: OPA336N<br />
4-fold Op-Amp: OPA4336EA<br />
NOR-Gate: MC74LCX02D<br />
Voltage reference (2.5 V): AD680JT<br />
Voltage regulator (3.3 V): LE33CZ<br />
Analog switch: TC4S66<br />
Capacitors: ceramic multi-layer, Epcos<br />
As the offset voltage of the op-amp is critical to the accuracy<br />
of the circuit (1 mV corresponds to 1/20 K), this quantity was<br />
10 CEA, Centre de Valduc, F-21120 Is-sur-Tille, France<br />
11 Université Catholique de Louvain, B-1348 Louvain-la-Neuve, Belgium<br />
12 Texas Instruments, USA; www.texas-instruments.com<br />
13 ON Semiconductors, USA; www.onsemi.com<br />
measured for several input voltages. Figure 4 shows a typical<br />
result, indicating that no deterioration is expected. The<br />
colored band represents the acceptable variation in our<br />
application. All other devices showed no problems, either<br />
during or after irradiation. Table 2 summarizes the<br />
components which finally passed the three irradiation tests.<br />
Figure 4: Offset Voltages of the OPA336 (sample B3) at<br />
Uin = 1.0 V, 1.8 V and 2.5 V, before, during and after<br />
irradiation (TID: 100 Gy)<br />
E. The Interlock System<br />
Figure 5: Schematic of the Temperature Monitoring and<br />
Interlock System, spanning counting room, cavern and<br />
detector inside (DCS station, CAN-bus interfaces, ELMBs,<br />
interlock boxes, logic unit, power supply channels and<br />
NTC-equipped detector modules)<br />
Figure 5 shows the complete temperature and interlock chain.<br />
The signal from the temperature sensor is split between<br />
interlock box and ELMB. Information from two interlock<br />
channels is combined by the logic unit, as one power supply<br />
channel serves two detector modules. In parallel, the digital<br />
information from the ELMB is sent to the DCS station via the<br />
CAN-Bus for data logging. Additionally, another ELMB<br />
monitors the interlock bit pattern. A third ELMB can be used
to send test signals to the interlock box. This remote test<br />
possibility is necessary because the equipment is not<br />
accessible during data taking periods. The power consumption<br />
of the Interlock Box is also monitored by this third ELMB<br />
since an increase in the supply current can indicate problems<br />
due to irradiation.<br />
V. THE PIXEL DETECTOR SCADA SYSTEM<br />
Figure 6: Organization of the Pixel Detector Elements inside<br />
the SCADA System (pixel detector, barrel layers and endcaps,<br />
PCCs and BDUs, with per-module parameters such as HV,<br />
Vdd, Vdda, temperatures, interlocks, opto link and cooling<br />
information)<br />
Besides the hardware components described in the previous<br />
three sections, the temperature management system (see<br />
figure 5) will also be applied for the control of the opto boards<br />
(see figure 1), as the performance of irradiated VCSELs will<br />
decrease if their temperature exceeds 20 °C. The temperatures<br />
of the regulators located in the supply cabling and their input<br />
and output voltages and currents are further parameters to be<br />
monitored.<br />
Following the recommendations for the ATLAS experiment<br />
we have started to implement the SCADA system for the<br />
control of the pixel detector using the PVSS commercial<br />
software product 14.<br />
Our first studies with PVSS follow geographically<br />
oriented structures, where all information relevant to one<br />
detector unit is presented simultaneously, since this helps to trace<br />
and analyse problems. Figure 6 shows how the different levels<br />
of the tree structure can be arranged. Following the different<br />
levels one can find out in which part of the detector a problem<br />
has arisen. As the distribution of the high and low voltages is<br />
done with granularity two, from the DCS point of view the<br />
14 From ETM, A-7000 Eisenstadt, Austria<br />
smallest unit DCS can act on contains 2 modules - forming a<br />
“base detector unit”. All other information relevant to the<br />
status of a detector module (temperature, status of the related<br />
opto link and interlock, etc.) must be available on this level.<br />
Information on the status of the cooling system and the<br />
readout chain will be supplied via the “event manager”.<br />
We have started to implement PVSS version 2.11.1 on a<br />
Windows NT platform. In this development system we use two<br />
OPC servers (OLE for Process Control), one communicating<br />
via CAN-Bus to the ELMB which monitors the hardware, and<br />
the other handling communication with the power supplies.<br />
With this system we have begun to implement a few BDUs,<br />
from which larger systems can be composed.<br />
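The geographical tree described above can be sketched as nested mappings. The level names loosely follow figure 6, while the field names and the traversal helper are invented for illustration:

```python
# Hypothetical geographical tree for the pixel SCADA hierarchy:
# detector layer -> power control channel (PCC) -> base detector unit
# (BDU, two modules). All names and values are illustrative.
detector = {
    "B-Layer 1": {
        "PCC #01": {
            "BDU #01": {
                "modules": ["module_a", "module_b"],
                "temperature_K": [268.2, 268.5],
                "interlock": False,
                "opto_link": "ok",
            },
        },
    },
}

def find_alarms(tree, path=()):
    """Walk the tree and return the paths of all nodes with an interlock set."""
    alarms = []
    for name, node in tree.items():
        if isinstance(node, dict):
            if node.get("interlock") is True:
                alarms.append(path + (name,))
            else:
                # Recurse only into dict-valued children (sub-levels)
                alarms.extend(find_alarms(
                    {k: v for k, v in node.items() if isinstance(v, dict)},
                    path + (name,)))
    return alarms

assert find_alarms(detector) == []   # no interlock set initially
```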
VI. SUMMARY<br />
The requirements of the pixel detector have made specific<br />
developments necessary for the hardware components of the<br />
DCS. A cooling system using the evaporation of C3F8<br />
fluorocarbon has been developed and will be used for the<br />
ATLAS pixel and SCT detectors. Control of coolant flow in<br />
each circuit via PID regulation has been demonstrated and is<br />
being implemented in a 25-channel demonstrator. For the<br />
temperature management system, a possible solution is found<br />
based on the interlock box and the ELMB. The design of the<br />
interlock box uses standard electronic components, which<br />
helps to reduce its cost. Its radiation tolerance for 10 years'<br />
operation in the ATLAS cavern can be achieved, as several<br />
pre-selection irradiation tests have shown. Before going to<br />
production this must be verified on larger samples. The<br />
implementation of the supervisory control system in PVSS<br />
has started. The basic elements are defined. Future test beam<br />
activities will allow us to test and improve the proposed<br />
control structure.<br />
VII. REFERENCES<br />
[1] ATLAS Pixel Detector Technical Design Report,<br />
CERN/LHCC/98-13, 31.05.1998<br />
[2] B. Hallgren, “The Embedded Local Monitor Board in<br />
the LHC Detector Front-end I/O Control System”, in these<br />
proceedings<br />
[3] E. Anderssen et al, “Fluorocarbon Evaporative Cooling<br />
Developments for the ATLAS Pixel and Semiconductor<br />
Tracking Detectors”, Proc. 5th Workshop on Electronics for<br />
LHC Experiments, CERN 99-09, CERN/LHCC/99-33<br />
[4] C. Bayer et al, “Development of Fluorocarbon Evaporative<br />
Cooling Recirculators and Controls for the ATLAS<br />
Pixel and Semiconductor Tracking Detectors”, Proc. 6th<br />
Workshop on Electronics for LHC Experiments, Krakow,<br />
Poland, Sept. 2000, CERN 2000-101, CERN/LHCC/2000-041<br />
[5] M. Dentan: ATLAS Policy on Radiation tolerant<br />
Electronics, July 2000 Rev. No.2, ATC-TE-QA-001<br />
[6] www.atlas.uni-wuppertal.de/dcs
Printed Circuit Board Signal Integrity Analysis at CERN.<br />
Abstract<br />
Printed circuit board (PCB) design layout for digital<br />
circuits has become a critical issue due to increasing clock<br />
frequencies and faster signal switching times. The Cadence ®<br />
SPECCTRAQuest package allows the detailed signal-integrity<br />
(SI) analysis of designs from the schematic-entry<br />
phase to the board level. It is fully integrated into the<br />
Cadence ® PCB design flow and can be used to reduce<br />
prototype iterations and improve production robustness.<br />
Examples are given on how the tool can help engineers to<br />
make design choices and how to optimise board layout for<br />
electrical performance. Case studies of work done for LHC<br />
detectors are presented.<br />
I. INTRODUCTION<br />
A. High-speed digital design<br />
Electronic Design Automation tools are becoming<br />
essential in the design and manufacture of compact,<br />
high-speed electronics and have been extensively used at CERN to<br />
solve a wide range of problems [1], [2].<br />
A typical fast digital system poses many high-speed signal<br />
integrity questions (Figure 1).<br />
Figure 1: Typical backplane and onboard bus systems<br />
(termination, IC package, silicon speed, stub, connector,<br />
partially loaded backplane, nearest-neighbour receiver, and<br />
board impedance and tolerance effects)<br />
J. Evans and J-M. Sainson, CERN, 1211 Geneva 23,<br />
Switzerland (John.Evans@cern.ch, J-M.Sainson@cern.ch)<br />
Some rules of thumb can be applied to try and achieve a<br />
working design. These include:<br />
• A track should be considered as a transmission line<br />
when 2·tpd ≥ tr, where tpd is the propagation delay<br />
along the interconnect length and tr is the signal<br />
switching time.<br />
• For a digital signal with switching time tr, the<br />
equivalent bandwidth is given by F = 1/(π·tr) ≈ 0.32/tr.<br />
Note that this expression depends only on the switching<br />
speed and not on the clock frequency.<br />
• For a backplane, the effective loaded transmission-line<br />
characteristic impedance Zeff can be estimated as [3]:<br />
Zeff ≅ Zo / √(1 + Cload/C), where Zo is the unloaded<br />
transmission-line characteristic impedance, C the total<br />
unloaded transmission-line capacitance and Cload the<br />
total load capacitance along the backplane line.<br />
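These rules of thumb are easily collected into a small calculator. The Python sketch below restates them; the square root in the loaded-impedance estimate follows the standard form of the formula in [3], and the numeric example (a 500 ps edge) is ours:

```python
import math

def is_transmission_line(t_pd_ns: float, t_r_ns: float) -> bool:
    """Treat the track as a transmission line when 2*tpd >= tr."""
    return 2.0 * t_pd_ns >= t_r_ns

def equivalent_bandwidth_ghz(t_r_ns: float) -> float:
    """F = 1/(pi*tr) ~= 0.32/tr: depends on edge rate, not clock rate."""
    return 1.0 / (math.pi * t_r_ns)

def loaded_impedance(z0: float, c_load: float, c_line: float) -> float:
    """Effective impedance of a capacitively loaded backplane line [3]."""
    return z0 / math.sqrt(1.0 + c_load / c_line)

# A 500 ps edge has an equivalent bandwidth of about 0.64 GHz,
# whatever the clock frequency
assert abs(equivalent_bandwidth_ghz(0.5) - 0.637) < 0.01
assert is_transmission_line(t_pd_ns=2.0, t_r_ns=0.5)
```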
The above guidelines are very useful but insufficient to<br />
ensure a working design. Other rules of thumb exist to<br />
estimate crosstalk and reflections but it becomes impractical<br />
to apply them all to a large design with many nets.<br />
The Cadence SPECCTRAQuest SI Expert suite was<br />
developed to try and overcome these problems. It has three<br />
main components (Figure 2).<br />
SigXplorer has been primarily used at CERN during the<br />
pre-layout stage. It can be used to perform “what if” analyses<br />
and to determine the effects of different design and layout<br />
strategies on signal integrity. While fully integrated into the<br />
Cadence flow, it is simple enough to be used as a standalone<br />
tool.<br />
The program represents a circuit as a number of drivers<br />
and receivers that are linked via interconnects (Figure 3).<br />
The user can explore different placement and routing<br />
strategies and is free to experiment with parameters such as<br />
track impedances and terminations and choice of IC<br />
technology.
Figure 2: Signal Integrity Design Flow (Cadence<br />
SPECCTRAQuest SI Expert: SigXplorer topology exploration<br />
with IBIS behavioural models feeding design choices and<br />
layout guidelines into Concept schematic entry and the<br />
Allegro floor planner and board layout, with SigNoise<br />
behavioural simulation and back-annotation for layout check<br />
and enhancement)<br />
Figure 3: Typical SigXplorer pre-layout analysis<br />
Typical design choices to be considered would be:<br />
• Architecture evaluation (bus, clock tree…)<br />
• Topology exploration<br />
• Standard IC technologies comparison<br />
• Termination type and value characterization<br />
• Package type selection<br />
• ASIC I/O cell characterization<br />
These preferences are then transferred to the Schematic Editor<br />
before producing an Allegro PCB netlist (Figure 2).<br />
SigXplorer (via the SPECCTRAQuest Floorplanner<br />
/Editor) can verify that simulation results from the routed<br />
PCB correspond to those obtained from the pre-layout<br />
analysis. While using the same simulator in both cases, there<br />
is an important difference between the models used. During<br />
pre-layout analysis, the user specifies all relevant parameters<br />
such as line delays and impedances to allow<br />
SPECCTRAQuest to estimate the high-speed signal<br />
phenomena. For post-layout analysis, the program calculates<br />
these values directly from the board layout and stackup using<br />
a built-in 2-D electromagnetic field solver.<br />
For each interconnect type of interest, the simulator<br />
searches its libraries for a corresponding electrical model. If<br />
no appropriate model exists, SPECCTRAQuest automatically<br />
derives one using the field-solver. This is then stored in the<br />
interconnect model library so the calculation is only necessary<br />
once for each specific cross-section.<br />
While not providing the precision of other more<br />
specialised tools e.g. Ansoft’s Maxwell, the solver aims to<br />
provide a good compromise between accuracy and<br />
computation time.<br />
The SPECCTRAQuest SI Expert simulation environment<br />
is called SigNoise. It comprises tools for entering and editing<br />
device models, simulating the circuit and displaying results.<br />
It uses a SPICE-type simulation engine that incorporates a<br />
lossy, coupled, frequency-dependent transmission line model<br />
valid to the tens of GHz region.<br />
Traditionally, an important shortcoming in the PCB flow<br />
was that there was no formal means of passing design rules<br />
between different stages. The latest SPECCTRAQuest SI<br />
Expert release has seen an extension of the Constraint<br />
Manager application. This tool can manage high-speed<br />
electrical constraints across all stages of the design flow.<br />
These rules can be defined, viewed and validated at any step<br />
from schematic capture to floor planning and to PCB<br />
realization in Allegro. For example, an electrical constraint<br />
can be formally applied in Concept and carried through to the<br />
PCB layout stage. If the layout designer violates this<br />
constraint, it will be automatically flagged as an error. These<br />
new features are currently being evaluated at CERN.<br />
B. IBIS models<br />
SPECCTRAQuest is delivered with a built-in library of<br />
commercial driver and receiver models. However, the user<br />
sometimes needs to define a new device. A SPICE model<br />
could be used to describe fully a driver or receiver at<br />
transistor level but this approach has some serious<br />
disadvantages. One is the impractically long simulation times<br />
that could arise for a large circuit. Another is that IC<br />
manufacturers usually do not wish to disclose proprietary<br />
information regarding their processes.<br />
An alternative approach is to describe devices according to<br />
the Input/Output Buffer Information Specification (IBIS)<br />
modelling standard [4]. Here, only the input and output<br />
stages are modelled and no attempt is made to represent the<br />
internal circuit structure. Two basic models have been<br />
defined for the standard (Figures 4, 5).<br />
Contrary to the SPICE approach, the buffers are described<br />
using behavioural modelling only. A set of tables is used to<br />
represent various characteristics such as output stages pull-up<br />
or pull-down capabilities (using I-V relationships) and the<br />
output switching speed (using a V-t table).<br />
This largely overcomes the disadvantages of the SPICE<br />
approach:<br />
• The system simulates quickly as there is no circuit<br />
detail involved.<br />
• The voltage/current/time relationships are defined<br />
only for the external nodes of the gates. This<br />
conceals both process and circuit intellectual<br />
property.<br />
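The table-driven evaluation can be illustrated with a toy interpolator. The I-V values below are invented and merely stand in for a real IBIS pull-down table:

```python
# Minimal sketch of how a behavioural (IBIS-style) driver is evaluated:
# the output stage is reduced to tabulated I-V curves that the simulator
# interpolates, with no transistor-level detail.

def interp(x, table):
    """Linear interpolation in a sorted list of (x, y) pairs."""
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside table range")

# Hypothetical pull-down I-V table: (output voltage, sink current in A)
pulldown_iv = [(0.0, 0.0), (0.5, 0.02), (1.0, 0.035), (3.3, 0.05)]

# Current the buffer sinks when its pad is held at 0.75 V
i = interp(0.75, pulldown_iv)
assert abs(i - 0.0275) < 1e-9
```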
There are also programs available that can translate a<br />
SPICE netlist to an equivalent IBIS model. These have been<br />
used at CERN to characterise ASICS’ output buffers [5].
Figure 4: IBIS Buffer input stage definition (package<br />
parasitics R_pkg, L_pkg, C_pkg; power and ground clamp<br />
I-V tables; die parasitics C_comp)<br />
Figure 5: IBIS Buffer output stage definition (pull-up and<br />
pull-down I-V tables, ramp-up/ramp-down V-t tables, clamp<br />
I-V tables, die parasitics and package parasitics)<br />
II. LHC DETECTORS CASE STUDIES<br />
We shall now give examples of how these tools have been<br />
applied during the development of LHC detector electronics.<br />
A. FPGA Bus Design application for ATLAS<br />
LARG ROD Injector module<br />
1) Project Overview<br />
The Read Out Driver Injector module has been designed<br />
to debug the ATLAS Liquid Argon Calorimeter ROD system<br />
[6]. The module emulates the Front End Buffers output data<br />
and generates typical Timing, Trigger and Control signals.<br />
2) Module description<br />
During normal operation, a Front End Buffer module<br />
receives analogue signals from calorimeter cells. After<br />
amplification and shaping, these signals are digitised at 40<br />
MHz sampling frequency and the resulting data sent to the<br />
ROD.<br />
The injector module emulates 4 half-FEBs and the TTC<br />
signals (Figure 6). Each function is implemented in an<br />
ALTERA Flex 10K30E FPGA with data for each function<br />
stored in an associated 32Kwords SRAM memory. The<br />
module is built as a 9U VME64x card with a 16-bit data<br />
VME interface.<br />
There are 3 separate data busses with the following<br />
characteristics:<br />
Figure 6: LARG ROD Injector Module. The figure annotates<br />
the open signal-integrity questions: ALTERA FLEX I/O cell<br />
options (I/O voltage 2.5 V or 3.3 V, slew rate), DATA BUS<br />
line impedance and type (microstrip or stripline), stub length,<br />
number and impedance, maximum DATA BUS length and<br />
whether to split the bus in two, PCB stackup specification<br />
(impedance, type, tolerance), and termination type, location<br />
and value (Thevenin, shunt, RC, …); the high speed DATA<br />
BUS has tr, tf < 500 ps at F = 40 MHz max.<br />
• A 12-bit unidirectional data bus links the Timing<br />
generator to the 4 half-FEB generators. Data on this<br />
bus is transferred at 40 MHz.<br />
• A 16-bit bi-directional data bus and a 15-bit<br />
unidirectional address bus (this bus is not explicitly<br />
shown on Figure 7) link these functions to the VME<br />
interface. The busses are sampled at 40 MHz but<br />
data is transferred at VME bus rates.<br />
3) Design choices and layout recommendations<br />
An extensive pre-layout analysis was undertaken on the<br />
system busses and the clock distribution system. This<br />
allowed some significant decisions to be made already at this<br />
early stage.<br />
The analysis was based on a multipoint bus using<br />
ALTERA FLEX 10K30E devices with tf < 500 ps. Using the<br />
recommended design choices and layout rules ensures that<br />
signals switch on the first incident wave with at least 500 mV<br />
positive noise margin at sampling time. The important<br />
conclusions include:<br />
• It is possible to use a single 300mm long DATA<br />
BUS with these devices if proper termination is used<br />
(see below). This avoids the complication of<br />
splitting the bus into two or more segments.<br />
• Only one termination scheme was found to provide<br />
consistently good results. This was a combination of<br />
an AC termination (R= 100 Ohm, C=1nF) at both<br />
DATA_BUS ends complemented by 4.7 Ohms<br />
STUBS SERIES terminations added near the DATA<br />
BUS. The latter was needed to lower stub and<br />
package impedance effects. DC termination could<br />
not be used as it overloaded the driver fan-out<br />
capabilities.<br />
• DATA BUS line impedance and type were<br />
confirmed to be capable of being manufactured as a<br />
class 5, 8-layer PCB. Calculations were made for a<br />
line impedance of 70Ω with a +/- 20 % tolerance –<br />
this was evaluated using SigXplorer’s sweep<br />
parameters functionality.<br />
• Simulations showed that stub lengths could be at<br />
least 10 mm.<br />
• Simulation showed that the design was robust enough to<br />
allow working with worst-case driver/receiver<br />
combinations as regards switching speeds and IC<br />
location.<br />
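The sweep over the ±20 % line-impedance tolerance can be mimicked in a few lines of Python. The 100 Ω value is taken from the AC termination above, while treating it as the effective load impedance at the bus end is a simplification for illustration only:

```python
def reflection_coefficient(z_load: float, z0: float) -> float:
    """Gamma = (ZL - Z0)/(ZL + Z0) at an impedance discontinuity."""
    return (z_load - z0) / (z_load + z0)

# Sweep the nominal 70 Ohm line over its +/-20 % manufacturing
# tolerance, mimicking SigXplorer's sweep-parameters functionality
nominal = 70.0
for z0 in (nominal * 0.8, nominal, nominal * 1.2):
    gamma = reflection_coefficient(100.0, z0)
    # Mismatch stays modest over the whole tolerance band
    assert abs(gamma) < 0.3
```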
4) Post-layout check before board manufacture<br />
The guidelines were implemented in the PCB layout<br />
phase. Prior to board manufacture, all critical parameters<br />
(time propagation delay, skew, first incident wave, noise<br />
margin) were verified and found to be satisfactory.<br />
5) Module state and future development<br />
A prototype is fully functional and integrated with the<br />
ROD in the current DAQ environment.<br />
B. GTL Bus Design case study for ALICE pixel<br />
chip carrier<br />
1) Project overview<br />
The ALICE Silicon Pixel Detector (SPD) is located within<br />
the Inner Tracking System (ITS) and is the detector with the<br />
highest active channel density. The Pixel Carriers have been<br />
designed to physically support the detector ladders, power<br />
them and to carry signals between the pilot and the readout<br />
chips.<br />
Figure 7: ALICE Pixel Chip Carrier<br />
2) Module description<br />
Figures 7 and 8 show different views of the ALICE pixel<br />
chip carrier [7]. Ten readout chips are connected to the data<br />
bus by bonding wires. The data is then fed to the I/O Cells<br />
Pilot chip via a series of “vertical” and “horizontal” lines.<br />
3) Design choices and layout considerations<br />
Due to its position in the active area, the board has to be as<br />
transparent as possible to physics particles. This physical<br />
constraint eventually led to the choice of a 200 µm thick, 6-layer<br />
aluminium/polyimide PCB. This has important implications<br />
for the electrical characteristics. Due to the small PCB<br />
thickness and the need for fine lines, the track impedances<br />
become uncomfortably low compared to the driver impedance<br />
and to its current capabilities. There is also another mismatch<br />
problem as the horizontal and vertical lines have different<br />
characteristic impedances. Detailed analyses were performed<br />
to confirm that the system would still work acceptably under<br />
these sub-optimum conditions. Important design points are:<br />
• The horizontal data-bus microstrip-lines signals were<br />
estimated to have an impedance of 19Ω. The vertical<br />
lines were calculated to be nominally 9Ω.<br />
• For these lines and I/O cells, the optimum pull-up<br />
termination was found to be 22Ω on one end only of<br />
the PCB.<br />
• The PIXEL chip was designed with GTL-like I/O<br />
technology with output-cells switching speeds<br />
selectable from 4 to 30ns. Simulations were made at<br />
4, 7 and 20ns with virtual silicon using IBIS models<br />
created at CERN from the HSPICE netlist.<br />
The complete assembly, with a full load of 10 pixel chips,<br />
was fully analysed for all three switching times before<br />
delivery of the carrier PCB or chips. The crosstalk was<br />
estimated to be less than 120 mV (root sum square); this<br />
figure has not yet been measured on hardware.<br />
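The quoted 120 mV figure is a root-sum-square (RSS) combination of individual aggressor contributions. A minimal sketch of that combination, with invented per-aggressor values:

```python
import math

# Sketch of a root-sum-square (RSS) crosstalk combination, as used for
# the 120 mV figure quoted above. The per-aggressor mV values below are
# invented for illustration only, not simulation results.

def rss(contributions_mv):
    """Combine independent crosstalk contributions as sqrt(sum(v^2))."""
    return math.sqrt(sum(v * v for v in contributions_mv))

aggressors_mv = [60.0, 60.0, 45.0, 30.0, 30.0]   # hypothetical couplings
print(f"combined crosstalk = {rss(aggressors_mv):.1f} mV RSS")
```

With these invented values the combination comes to 105 mV, i.e. below the 120 mV bound even though individual contributions approach half of it.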
Figure 8: ALICE Pixel System Detector<br />
III. CONCLUSIONS<br />
Signal integrity analysis is essential for designing reliable<br />
sub-nanosecond switching-time circuits. We have described<br />
SPECCTRAQuest and shown how it can help with all aspects<br />
of the design flow from circuit design choices to PCB layout.<br />
We have given examples of how it has been used to help in<br />
validating IC technology choice and in developing placement<br />
and routing guidelines for the layout designers.<br />
Future developments will include the deployment of the<br />
“Constraint Manager” to automatically manage high-speed<br />
electrical constraints across all design-flow stages. We also<br />
hope to evaluate the SUN/Cadence “Power Integrity” module<br />
to address the issues of correct power-plane design.<br />
The SPECCTRAQuest SI Expert tools are presently<br />
available at CERN and fully supported by IT/CE-AE [9].<br />
IV. ACKNOWLEDGEMENTS<br />
The authors would like to acknowledge the contributions<br />
of D. Charlet, LAL, M. Morel, CERN/EP and G. Perrot,<br />
LAPP to this paper.<br />
Cadence, SPECCTRAQuest, SigXplorer, SigNoise,<br />
Ansoft and Maxwell are all registered trademarks.<br />
V. REFERENCES<br />
[1] Evans, B.J., Calvo Giraldo, E., Motos Lopez, T.,<br />
‘Electronic Design Automation tools for high-speed electronic<br />
systems’, proceedings of the 6th Workshop on Electronics for<br />
LHC experiments, Cracow, September 11-15, 2000, pp. 393-397<br />
[2] Evans, B.J., Calvo Giraldo, E., Motos Lopez, T.,<br />
‘Minimizing crosstalk in a high-speed cable-connector<br />
assembly’, proceedings of the 6th Workshop on Electronics for<br />
LHC experiments, Cracow, September 11-15, 2000, pp. 572-576<br />
[3] Novak, I., “Design of Clock Networks, Busses and<br />
Backplanes: Measurements and Simulation Techniques in<br />
High-Speed Digital Systems”, CEI Europe Course Notes,<br />
Dublin 1997<br />
[4] IBIS (I/O Buffer Information Specification) ANSI/EIA-<br />
656-A Homepage, http://www.eigroup.org/ibis<br />
[5] SPICE to IBIS Application Notes,<br />
http://cern.ch/support-specctraquest/ (“How to” menu)<br />
[6] Perrot, G., "ROD Injector User Manual", LAPP internal<br />
note, http://wwwlapp.in2p3.fr/<br />
[7] Antinori, F. et al, “The ALICE Silicon Pixel Detector<br />
Readout System”, proceedings of the 6th Workshop on<br />
Electronics for LHC experiments, Cracow, September 11-15,<br />
2000, pp. 105-109<br />
[8] Morel, M. “ALICE Pixel Carrier (A) and (B)”,<br />
http://morel.web.cern.ch/morel/alice.htm<br />
[9] SPECCTRAQuest SI Expert at CERN,<br />
http://cern.ch/support-specctraquest/
Influence of Temperature on Pulsed Focused Laser Beam Testing<br />
P.K.Skorobogatov, A.Y.Nikiforov<br />
Specialized Electronic Systems, 31 Kashirskoe shosse, Moscow, 115409 Russia<br />
pkskor@spels.ru<br />
Abstract<br />
Temperature dependence of radiation-induced charge<br />
collection under 1.06 and 0.53 μm focused laser beams is<br />
investigated in experiment and by numerical simulation. A<br />
significant sensitivity of the collected charge to temperature<br />
was observed only at the 1.06 μm wavelength.<br />
I. INTRODUCTION<br />
Focused laser sources are widely used for single event<br />
effects (SEE) investigation [1-3]. Laser simulation of SEE is<br />
based on the focused laser beam capability to induce local<br />
ionization of IC structures. A wide range of particle linear<br />
energy transfer (LET) and penetration depths may be<br />
simulated by varying the laser beam spot diameter and<br />
wavelength.<br />
The temperature dependence of the laser absorption<br />
coefficient in a semiconductor affects the equivalent LET and<br />
must be accounted for when devices are tested over a<br />
temperature range [4]. In order to estimate the influence of temperature<br />
on SEE laser testing parameters we have analyzed the<br />
temperature dependence of charge collected in test structure<br />
p-n junction.<br />
In the present study we used a pulsed laser with 1.06 and<br />
0.53 μm wavelengths as a source of focused ionization. The<br />
measurements of p-n junction collected charge were<br />
performed in the temperature range from 22 to 110 °C for<br />
two laser beam spot positions. An essential influence of<br />
temperature on the collected charge was found for the 1.06 μm<br />
wavelength, and a negligible dependence under the 0.53 μm<br />
laser beam.<br />
This effect is associated with the strong temperature<br />
dependence of light absorption in silicon when the photon<br />
energy is near the bandgap [5]. The numerical simulations<br />
with the “DIODE-2D” 2D software simulator confirmed this<br />
assumption.<br />
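A toy Beer-Lambert estimate shows why absorption near the bandgap matters so much: with an assumed collection depth of roughly 10 μm, the fraction of light absorbed in the sensitive volume at 0.53 μm (α of order 10⁴ cm⁻¹) is already saturated, while at 1.06 μm (α of order tens of cm⁻¹) it scales almost linearly with α, so any temperature change of α maps directly onto the collected charge. The α values and the depth below are rough textbook numbers, not the DIODE-2D inputs:

```python
import math

# Toy Beer-Lambert model of why collected charge tracks the absorption
# coefficient at 1.06 um but not at 0.53 um. The alpha values and the
# 10 um collection depth are rough illustrative numbers.

def absorbed_fraction(alpha_per_cm, depth_cm):
    """Fraction of incident light absorbed within the collection depth."""
    return 1.0 - math.exp(-alpha_per_cm * depth_cm)

DEPTH = 10e-4                     # 10 um collection depth, in cm
for label, alpha in (("0.53 um", 1e4), ("1.06 um", 14.0)):
    base = absorbed_fraction(alpha, DEPTH)
    hot = absorbed_fraction(2.0 * alpha, DEPTH)   # alpha doubled by heating
    print(f"{label}: absorbed {base:.3f} -> {hot:.3f} when alpha doubles")
```

At 0.53 μm both numbers saturate at 1, so heating changes nothing; at 1.06 μm the absorbed fraction, and hence the charge, roughly doubles with α.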
II. EXPERIMENTAL DETAILS<br />
The experiments were performed using the original<br />
"PICO-2E" pulsed solid-state laser simulator (Nd³⁺, passively<br />
mode-locked, basic wavelength λ = 1.06 μm, laser pulse<br />
duration Tp ≈ 8 ps) as a source [6]. The simulator was used in<br />
basic (λ = 1.06 μm) and frequency-doubled (λ = 0.53 μm)<br />
modes with a laser spot diameter of 5 μm.<br />
The investigated test structure is manufactured in a<br />
conventional 2 μm bulk CMOS process and includes a<br />
well-substrate p-n junction (48×78 μm) with narrow (2 μm)<br />
metallization strips to maximize the free surface [7]. The<br />
p-n junction collected charge temperature dependence was<br />
measured under laser irradiation for two laser beam locations<br />
as shown in Fig. 1. The first is located within the n-well<br />
(location 1) and the second is localized out of junction area<br />
(location 2).<br />
Figure 1: Cross-sectional (a) and top (b) views of the test structure<br />
The internal chip temperature was monitored with an<br />
additional forward biased p-n junction test chip. The<br />
experimental set-up and temperature monitoring procedures
are described in [8]. The temperature uncertainty was near<br />
5%. The test structures were under 5 V bias. The ionizing<br />
current transient response and collected charge were<br />
registered with a "Tektronix TDS-220" digital oscilloscope.<br />
III. NUMERICAL TO EXPERIMENTAL COMPARATIVE<br />
RESULTS<br />
In order to perform a collected charge analysis of test<br />
structure in a temperature range the "DIODE-2D" software<br />
simulator was used. This is a two-dimensional solver of a<br />
fundamental system of equations that was modified to include<br />
a temperature dependent laser absorption coefficient. It takes<br />
into account the electrical and optical processes including<br />
free carrier nonlinear absorption [9].<br />
The temperature dependencies of semiconductor<br />
parameters such as band gap and intrinsic carrier density<br />
were taken into account in accordance with [10]. The bulk<br />
mobility temperature dependence is described by the function<br />
(T/300)^-2.33 for both electrons and holes, where T is the<br />
temperature in kelvin. The low-level carrier lifetimes were<br />
modeled by a (T/300)^2 power law, and the Auger recombination<br />
coefficients were taken to increase slightly with temperature,<br />
following a (T/300)^0.2 power law.<br />
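The quoted power laws amount to a single scaling helper. The room-temperature parameter values below are illustrative placeholders, not the simulator's actual inputs:

```python
# Sketch of the temperature scalings quoted above, as used in the
# DIODE-2D inputs: bulk mobility ~ (T/300)^-2.33, low-level lifetime
# ~ (T/300)^2, Auger coefficients ~ (T/300)^0.2.

def scaled(value_300k, t_kelvin, exponent):
    """Scale a room-temperature parameter with a (T/300)^n power law."""
    return value_300k * (t_kelvin / 300.0) ** exponent

T = 383.0                                  # 110 C, in kelvin
mu = scaled(1400.0, T, -2.33)              # illustrative mu_n(300 K), cm^2/(V*s)
tau = scaled(1.0, T, 2.0)                  # illustrative lifetime, us
print(f"mu(383 K)  ~ {mu:.0f} cm^2/Vs")
print(f"tau(383 K) ~ {tau:.2f} us")
```

Over the 22 to 110 °C experimental range the mobility drops while the lifetime rises, which is exactly the competition invoked later to explain the weak 0.53 μm temperature dependence.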
The "PICO-2E" laser simulator pulse energy fluctuates<br />
from pulse to pulse. To keep these fluctuations from degrading<br />
the accuracy, every pulse was monitored. The numerical and<br />
experimental results are presented as dependences of the SEE<br />
sensitivity coefficient Kq = ΔQ/W on temperature, where ΔQ is<br />
the collected charge in pC and W is the laser pulse energy in nJ.<br />
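The per-pulse normalisation described above amounts to the following sketch, with invented charge and energy values:

```python
# Minimal sketch of the per-pulse normalisation: every laser shot's
# energy W (nJ) is recorded alongside the collected charge dQ (pC),
# and Kq = dQ/W is averaged over shots so that pulse-to-pulse energy
# fluctuations cancel. The numbers are invented for illustration.

def sensitivity_coefficient(charges_pc, energies_nj):
    """Average SEE sensitivity coefficient Kq = dQ/W over pulses, pC/nJ."""
    kq = [q / w for q, w in zip(charges_pc, energies_nj)]
    return sum(kq) / len(kq)

charges = [12.0, 13.1, 11.4]           # hypothetical collected charge, pC
energies = [3.0, 3.3, 2.9]             # hypothetical pulse energies, nJ
print(f"Kq = {sensitivity_coefficient(charges, energies):.2f} pC/nJ")
```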
The SEE sensitivity coefficients vs temperature, both<br />
measured and calculated at laser beam location 1 are<br />
presented in Fig. 2 for the case of 0.53 μm wavelength. This<br />
wavelength region is far from the bandgap and the light<br />
absorption coefficient is practically insensitive to<br />
temperature. The theoretically predicted slight temperature<br />
dependence may be connected with the competition of two<br />
mechanisms: increase of minority charge carriers lifetime<br />
and decrease of their mobility with temperature.<br />
The pulse-to-pulse variation of laser energy during 0.53<br />
μm wavelength experiment was in the range from 0.5 to 1.08<br />
nJ. A second-order regression of the experimental data is<br />
presented in Fig. 2 by dashed line.<br />
The SEE sensitivity coefficient vs temperature, both<br />
measured and calculated at laser beam location 1, is<br />
presented in Fig. 3 for the case of 1.06 μm wavelength.<br />
This wavelength is near the bandgap edge and light<br />
absorption coefficient is very sensitive to temperature. The<br />
theoretical prediction gives an approximate doubling of the<br />
collected charge in the range from 22 to 110 °C. The<br />
experimental results show that SEE sensitivity increases at<br />
least three times in this temperature range. This difference<br />
between measured and simulated results may be explained by<br />
uncertainties of laser absorption coefficient temperature<br />
dependence near the edge of the silicon fundamental<br />
band-to-band absorption region.<br />
Figure 2: Numerical (lines) and experimentally determined (dots)<br />
test structure SEE sensitivity coefficient vs temperature at laser<br />
beam location 1 for 0.53 μm wavelength<br />
Figure 3: Numerical (lines) and experimentally determined (dots)<br />
test structure SEE sensitivity coefficient vs temperature at laser<br />
beam location 1 for 1.06 μm wavelength<br />
The pulse-to-pulse variation of laser energy during 1.06<br />
μm wavelength experiment was in the range from 2.3 to 4.5<br />
nJ per pulse.<br />
The experiments and calculations for laser beam locations<br />
at other surface points (both inside and outside the p-n<br />
junction) give similar results.<br />
The obtained results are in a good agreement with those<br />
described in our previous paper [5] for dose rate effects<br />
simulation with non-focused 1.06 μm laser irradiation. The
collected charge temperature dependence under focused laser<br />
beam is similar to that of ionizing current amplitude under<br />
non-focused nanosecond laser pulse.<br />
IV. CONCLUSIONS<br />
Temperature dependence of charge collection in silicon<br />
IC’s under 1.06 and 0.53 μm focused laser beams was<br />
investigated in application to Single Event Effect simulation<br />
in CMOS test structure.<br />
It was shown that in the case of 0.53 μm laser irradiation<br />
the temperature has practically no effect on the collected<br />
charge, because the laser absorption coefficient depends only<br />
slightly on temperature in this range. The theoretically<br />
predicted variations of collected charge may be explained by<br />
carrier lifetime and mobility temperature dependences.<br />
In the case of 1.06 μm laser irradiation, both theory and<br />
experiment have shown a substantial growth of the collected<br />
charge with temperature. This is consistent with the strong<br />
temperature dependence of the laser absorption coefficient for<br />
photon energies near the bandgap. The theoretical prediction<br />
gives an approximate doubling of the collected charge in the<br />
range from 22 to 110 °C. The experimental results show that SEE<br />
sensitivity increases at least three times in this temperature<br />
range. The difference between measured and simulated<br />
results may be explained by uncertainties of laser absorption<br />
coefficient temperature dependence near the edge of silicon<br />
fundamental band-to-band absorption zone.<br />
The results obtained show that the temperature<br />
dependence of the laser absorption coefficient in a<br />
semiconductor affects the equivalent LET and must be taken<br />
into account when selecting devices against SEE for LHC electronics.<br />
V. REFERENCES<br />
[1] C.F.Gosset, B.W. Hughlock, A.H.Johnston, "Laser<br />
simulation of single particle effects", IEEE Trans. Nucl.<br />
Sci., vol. 37, no.6, pp. 1825-1831, Dec. 1990.<br />
[2] R. Velazco, T. Calin, M. Nicolaidis, S.C Moss, S.D.<br />
LaLumondiere, V.T. Tran, R. Kora, “SEU-hardening<br />
storage cell validation using a pulsed laser”, IEEE Trans.<br />
Nucl. Sci., vol. 43, no.6, pp. 2843-2848, Dec. 1996.<br />
[3] J.S. Melinger, S. Buchner, D. McMorrow, W.J. Stapor,<br />
T.R Wetherford, A.B. Campbell and H. Eisen, “Critical<br />
evaluation of the pulsed laser method for single-event<br />
effects testing and fundamental studies”, IEEE Trans.<br />
Nucl. Sci., vol. 41, no.6, pp. 2574-2584, Dec. 1994.<br />
[4] A.H. Johnston, "Charge generation and collection in p-n<br />
junctions excited with pulsed infrared lasers", IEEE<br />
Trans. Nucl. Sci., vol. 40, no. 6, pp. 1694 - 1702, Dec.<br />
1993.<br />
[5] P.K. Skorobogatov, A.Y. Nikiforov, A.A. Demidov, V.V.<br />
Levin, “Influence of temperature on dose rate laser<br />
simulation adequacy”, IEEE Trans. Nucl. Sci., vol. 47,<br />
no. 6, in press, Dec. 2000.<br />
[6] A.I. Chumakov, A.N. Egorov, O.B. Mavritsky, A.Y.<br />
Nikiforov, A.V. Yanenko, “Single Event Latchup<br />
Threshold Estimation Based on Laser Dose Rate Test<br />
Results”, IEEE Trans. Nucl. Sci., vol. 44, no. 6, pp. 2034<br />
- 2039, Dec. 1997.<br />
[7] P.K. Skorobogatov, A.Y. Nikiforov and A.A. Demidov,<br />
“A way to improve dose rate laser simulation adequacy”,<br />
IEEE Trans. Nucl. Sci., vol. 45, no. 6, pp. 2659 - 2664,<br />
Dec. 1998.<br />
[8] A.Y. Nikiforov, V.V. Bykov, V.S. Figurov A.I.<br />
Chumakov, P.K. Skorobogatov, and V.A.Telets "Latch-up<br />
windows tests in high temperature range" in Proceedings<br />
of the 4th Europ. Conf. "Radiations and Their Effects on<br />
Devices and Systems, Cannes, France, Sept. 15-19, 1997,<br />
pp. 366-370.<br />
[9] A.Y. Nikiforov and P.K. Skorobogatov, "Dose rate laser<br />
simulation tests adequacy: Shadowing and high intensity<br />
effects analysis", IEEE Trans. Nucl. Sci., vol. 43, no.6,<br />
pp. 3115-3121, Dec. 1996.<br />
[10] S.M. Sze, Physics of Semiconductor devices. 2-nd ed.<br />
John Wiley & Sons, N.Y., 1981.
The Behavior of P-I-N Diode under High Intense Laser Irradiation<br />
P.K.Skorobogatov, A.S.Artamonov, B.A.Ahabaev<br />
Specialized Electronic Systems, Kashirskoe shosse, 31, 115409, Moscow, Russia<br />
pkskor@spels.ru<br />
Abstract<br />
The dependence of p-i-n diode ionizing current amplitude<br />
vs 1.06 μm pulsed laser irradiation intensity is investigated.<br />
It is shown that the analyzed dependence becomes nonlinear<br />
beginning at relatively low laser intensities near 10 W/cm².<br />
I. INTRODUCTION<br />
Pulsed laser sources are widely used for dose rate effects<br />
simulation in IC’s [1]. The Nd:YAG laser with 1.06 μm<br />
wavelength is nearly ideal for silicon devices, with a<br />
penetration depth near 700 μm [2]. Measurements of<br />
pulsed laser irradiation intensity and waveform may be<br />
performed with a p-i-n diode: the high electric field in its<br />
intrinsic region provides full and fast excess carrier<br />
collection. As a result, the ionizing current pulse waveform<br />
reproduces the laser pulse to within a few<br />
nanoseconds.<br />
Possible nonlinear ionization effects may disturb the<br />
behavior of the p-i-n diode at high laser intensities. Here we<br />
present the results of a recent study of a typical p-i-n diode<br />
using 2D numerical simulation and a pulsed laser simulator<br />
with the 1.06 μm wavelength as a radiation source.<br />
II. P-I-N DIODE STUDY<br />
A typical p-i-n diode with a 380 μm thick intrinsic region<br />
under 300 V reverse bias was investigated. The p-i-n<br />
diode cross-section is shown in Fig. 1. The sensitive area<br />
size is 2.3×2.3 mm. The p+ and n+ regions of the diode are<br />
doped up to 10¹⁹ cm⁻³.<br />
To investigate the p-i-n diode behavior at high laser<br />
intensities the original software simulator “DIODE-2D” [3]<br />
was used. The “DIODE-2D” is the two-dimensional solver of<br />
the fundamental system of equations. It takes into account<br />
carrier generation, recombination and transport, optical<br />
effects, carrier’s lifetime and mobility dependencies on<br />
excess carriers and doping impurity concentrations.<br />
The calculated p-i-n diode ionizing current pulse<br />
amplitude vs laser intensity dependence under 300 V reverse<br />
bias is presented in Fig. 2. The radiation pulse waveform was<br />
taken as “Gaussian” with 11 ns duration. One can see that<br />
the direct proportionality between current pulse amplitude<br />
and laser intensity takes place only at relatively low<br />
intensities (up to 10 W/cm²). The non-uniformity of the<br />
ionization distribution caused by laser radiation attenuation<br />
does not affect this dependence, because the relatively low<br />
excess carrier density is not enough to change the<br />
absorption coefficient significantly.<br />
Figure 1: P-i-n diode cross-section<br />
Figure 2: P-i-n diode ionizing current pulse amplitude vs laser pulse<br />
intensity dependence<br />
The non-linearity is caused by conductivity modulation<br />
of the p-i-n diode intrinsic region by excess carriers. Because<br />
of the low initial carrier concentration, this modulation takes<br />
place at relatively low dose rates. As a result, the electric<br />
field distribution in the intrinsic region becomes non-uniform,<br />
which reduces excess carrier collection. This explanation is<br />
confirmed by the results of potential<br />
distribution calculations for different laser peak intensities<br />
presented in Fig. 3.
Figure 3: Potential distributions at time 11 ns for different<br />
maximum laser intensities: initial (a), 10² (b) and 10³ (c) W/cm²<br />
The behavior of the p-i-n diode becomes similar to that of<br />
an ordinary p-n junction with prompt and delayed components<br />
of ionizing current. The prompt component repeats the laser<br />
intensity waveform. The delayed component is connected<br />
with the excess carriers collection from regions with low<br />
electric fields. As a result, the ionizing current pulse form<br />
becomes more prolonged and doesn’t repeat the laser pulse<br />
waveform.<br />
Fig. 4 shows the normalized calculated ionizing current<br />
pulse waveforms for different maximum laser intensities. At<br />
relatively low intensity the current pulse waveform repeats<br />
the appropriate laser pulse waveform. At high intensities we<br />
see the delayed components.<br />
This non-linear behavior and prolonged reaction<br />
must be taken into account when a p-i-n diode is used as a<br />
laser intensity and waveform dosimeter.<br />
Figure 4: Normalized calculated ionizing current pulse waveforms<br />
for different maximum laser intensities:10 (1), 10 2 (2) and 10 3 (3)<br />
W/cm 2<br />
III. NUMERICAL TO EXPERIMENTAL COMPARATIVE<br />
RESULTS<br />
The numerical results were confirmed by experimental<br />
measurement of the p-i-n diode ionizing response within a wide<br />
range of laser intensities.<br />
The pulsed laser simulator "RADON-5E" with the 1.06<br />
μm wavelength and 11 ns pulse width was used in the<br />
experiments as a radiation source [4]. The laser pulse<br />
maximum intensity was varied from 6×10² up to 2.1×10⁶<br />
W/cm², with the laser spot size covering the entire chip. This<br />
provides equivalent dose rates in silicon of up to 10¹²<br />
rad(Si)/s. The p-i-n diode ionizing current transient response<br />
was registered with the "Tektronix TDS-220" digital<br />
oscilloscope.<br />
The comparative p-i-n diode ionizing current pulse<br />
amplitude vs laser intensity dependencies under 300V reverse<br />
bias are presented in Fig. 5. The upper limit of laser<br />
intensity is restricted by the possibility of p-i-n diode failure.<br />
One can see that the experimental results confirm the<br />
non-linear behavior of the p-i-n diode at intensities above<br />
10 W/cm². Reducing the reverse voltage increases the<br />
non-linear effects.<br />
The distortion of the pulse waveform at high laser<br />
intensities was also confirmed. The experimental p-i-n<br />
diode current pulse waveforms are shown in Fig. 6. At<br />
a maximum intensity of 1.3 W/cm² the current pulse<br />
waveform repeats that of the laser irradiation. At an intensity of<br />
27 W/cm² we can see prolonged behavior, although this<br />
intensity corresponds to an equivalent dose rate of only<br />
3.4×10⁷ rad(Si)/s.<br />
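As a rough cross-check of the quoted equivalence, the calibration point of 27 W/cm² against 3.4×10⁷ rad(Si)/s, extrapolated linearly, puts the simulator's 2.1×10⁶ W/cm² maximum at a few times 10¹² rad(Si)/s, in line with the stated dose-rate capability. This assumes the linear regime throughout and is only an order-of-magnitude sketch:

```python
# Rough cross-check of the intensity <-> dose-rate equivalence quoted
# in the text: 27 W/cm^2 is said to correspond to 3.4e7 rad(Si)/s.
# Linear extrapolation only; the paper's point is precisely that the
# diode response itself stops being linear well below these levels.

RAD_PER_S_PER_W_CM2 = 3.4e7 / 27.0     # calibration factor from the text

def equivalent_dose_rate(intensity_w_cm2):
    """Equivalent dose rate in rad(Si)/s for a given 1.06 um intensity."""
    return intensity_w_cm2 * RAD_PER_S_PER_W_CM2

print(f"{equivalent_dose_rate(2.1e6):.1e} rad(Si)/s")
```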
Figure 5: Numerical (line) and experimentally determined (dots)<br />
p-i-n diode ionizing current amplitude vs laser intensity<br />
Figure 6: P-i-n diode ionizing current waveforms under laser pulses<br />
with 1.3 (a) and 27 (b) W/cm 2 maximum intensities<br />
IV. CONCLUSION<br />
Simulations and experiments on the p-i-n diode<br />
structure have shown that the linear dependence between pulsed<br />
laser intensity and ionizing current pulse amplitude holds<br />
only at relatively low intensities, up to 10 W/cm². At<br />
higher intensities this dependence becomes non-linear and the<br />
ionizing current increases more slowly than the laser intensity.<br />
The ionizing current pulse form becomes more prolonged<br />
and does not repeat the laser pulse waveform.<br />
This non-linear behavior and prolonged reaction<br />
must be taken into account when a p-i-n diode is used as a<br />
laser intensity and pulse waveform dosimeter in LHC<br />
experiments.<br />
V. REFERENCES<br />
[1]. P.K. Skorobogatov, A.Y. Nikiforov B.A. Ahabaev<br />
“Laser Simulation Adequacy of Dose Rate Induced Latchup”//Proc.<br />
4th European Conf. on Radiations and Its Effects<br />
on Components and Systems (RADECS 97), Sept. 15-19,<br />
1997, Palm Beach, Cannes, France. P. 371-375.<br />
[2]. A.H. Johnston “Charge Generation and Collection in p-n<br />
Junctions Excited with Pulsed Infrared Lasers”//IEEE Trans.<br />
1993. Vol. NS-40, N 6. P. 1694 - 1702.<br />
[3]. The "DIODE-2D" Software Simulator Manual Guide,<br />
SPELS, 1995.<br />
[4]. "RADON-5E" Portable Pulsed Laser Simulator:<br />
Description, Qualification Technique and Results, Dosimetry<br />
Procedure/A.Y. Nikiforov, O.B. Mavritsky, P.K.<br />
Skorobogatov et al.//1996 IEEE Radiation Effects Data<br />
Workshop. P. 49-54.
Estimating induced-activation of SCT barrel-modules<br />
C. Buttar, I. Dawson, A. Moraes<br />
Department of Physics and Astronomy, University of Sheffield,<br />
Hicks Building, Hounsfield Road, Sheffield, S3 7RH, UK<br />
c.m.buttar@sheffield.ac.uk, Ian.Dawson@cern.ch, a.m.moraes@sheffield.ac.uk<br />
Abstract<br />
Operating detector systems in the harsh radiation environment<br />
of the ATLAS inner-detector will result in the production<br />
of radionuclides. This paper presents the findings<br />
of a study in which the radioactivation of SCT barrel modules<br />
has been investigated.<br />
I. INTRODUCTION<br />
One of the consequences of operating detector systems<br />
in the harsh radiation environments of the ATLAS innerdetector<br />
[1] will be the radioactivation of the components.<br />
If the levels of radioactivity and corresponding dose rates<br />
are significant, then there will be implications for any access<br />
or maintenance operations. In addition, maintenance<br />
operations may be required on nearby detector or machine<br />
elements, for example beam-line equipment, so the impact<br />
of the SCT has to be considered. A further motivation<br />
for understanding SCT activation concerns the eventual<br />
storage or disposal of the SCT-modules at the end of the<br />
detector lifetime, in which any radioactive material will<br />
have to be classified.<br />
Radionuclides are produced in the SCT modules via<br />
the inelastic interactions of hadrons with the various target<br />
nuclei comprising the modules. The hadrons originate<br />
from: 1) secondaries from the p-p collisions, dominated<br />
by pions, and 2) backsplash from hadron cascades in the<br />
calorimeters, mainly neutrons. Therefore, the calculation<br />
of radionuclide production requires a detailed inventory of<br />
the target-nuclei comprising the SCT modules and a good<br />
knowledge of the corresponding radiation backgrounds.<br />
II. MODULE DESCRIPTION AND<br />
MATERIAL INVENTORY<br />
In order to make activation estimates, it is necessary<br />
to obtain a detailed inventory of the materials involved in<br />
the construction of a module. The relevant information is<br />
now becoming available as SCT modules go into the production<br />
phase [2]. Of particular importance is knowledge<br />
of elements containing isotopes with high neutron capture<br />
cross sections. For example, ¹⁹⁷Au has a thermal neutron<br />
capture cross section of about 100 b for the (n,γ) process, and even<br />
small quantities of such an isotope can contribute considerably<br />
to, or even dominate, the induced activity.<br />
In the SCT barrel system there are 4 layers of cylinders,<br />
comprising 32, 40, 48 and 56 rows of 12 modules respectively.<br />
Each SCT barrel module comprises silicon sensors,<br />
baseboard with BeO facings, ASICs and a Cu/polyimide<br />
hybrid [3], connected to opto-packages and dog-legs [4].<br />
Each row of modules has associated with it a cooling<br />
pipe [5] and power tape. In total, the module is calculated<br />
to weigh 34.6 g and contains 15 different elements.<br />
Details of the elemental breakdown are given in Table 1.<br />
Unfortunately, material details are not always given<br />
in their elemental form and sometimes have to be derived.<br />
Assumptions are therefore unavoidable and include:<br />
1) Thermal adhesives are assumed to be 70% polyimide and<br />
30% BN; 2) Epoxy films, polyimide layers with adhesives<br />
and PEEK are chemically described by a common C-H-N-O<br />
composition; 3) The solder composition is assumed to be<br />
59% Pb, 40% Sn and 1% Ag; 4) Capacitors are taken as Al₂O₃<br />
and resistors as 20% C and 80% Al₂O₃. Perhaps the most<br />
important assumption concerns the possible use of silver-loaded<br />
conductive glues. Silver is important due to the high thermal<br />
neutron capture cross section and the long half-life of the<br />
resulting Ag radionuclide. In the current study no silver has<br />
been assumed in the glues.<br />
III. RADIONUCLIDE PRODUCTION<br />
Radionuclides are produced in the SCT modules via the<br />
inelastic interactions of hadrons with the various target nuclei<br />
comprising the modules. However, radionuclides with<br />
short half-lives can be neglected as access to the irradiated<br />
SCT material is unlikely for at least several days. In the<br />
current study, radionuclides are considered of radiological<br />
interest only if they have half-lives greater than 1 hour<br />
and less than 30 years.<br />
Radionuclide production can be divided into two categories:
Table 1: Barrel module element masses in grams (element symbols reconstructed where possible; "?" marks symbols that could not be recovered from the source).<br />
<br />
Element | Silicon sensors | Baseboard with BeO facings | ASICs | Hybrid | Cooling pipe with coolant | Power tape | Opto package / dog-leg | Total<br />
H     | 0.001  | 0.008 | 0.004 | 0.034 | -     | 0.012 | 0.035  | 0.094<br />
Be    | -      | 0.597 | -     | -     | -     | -     | -      | 0.597<br />
B     | 0.010  | 0.054 | 0.027 | 0.185 | -     | -     | -      | 0.276<br />
C     | 0.037  | 4.672 | 0.099 | 3.024 | 0.821 | 0.306 | 0.926  | 9.885<br />
N     | 0.017  | 0.092 | 0.045 | 0.311 | -     | 0.032 | 0.324  | 0.821<br />
O     | 0.011  | 1.119 | 0.037 | 0.442 | -     | 0.093 | 0.304  | 2.006<br />
F     | -      | -     | -     | -     | 3.462 | -     | -      | 3.462<br />
Cu    | -      | 0.118 | 0.028 | 0.393 | 1.166 | 0.78  | 0.434  | 2.919<br />
Si    | 10.812 | -     | 0.742 | 0.187 | -     | -     | 0.029  | 11.770<br />
?     | -      | -     | -     | 0.025 | -     | -     | 0.015  | 0.040<br />
?     | -      | -     | -     | 1.501 | -     | -     | 1.071  | 2.572<br />
Ag    | -      | -     | -     | 0.001 | -     | -     | 0.0004 | 0.0014<br />
Sn    | -      | -     | -     | 0.049 | -     | -     | 0.012  | 0.061<br />
?     | -      | -     | -     | 0.011 | -     | -     | -      | 0.011<br />
Pb    | -      | -     | -     | 0.073 | -     | -     | 0.024  | 0.097<br />
Total | 10.888 | 6.660 | 0.982 | 6.236 | 5.449 | 1.223 | 3.174  | 34.612<br />
1. Production via low energy (&lt; 20 MeV) neutron interactions,<br />
e.g. (n,γ), (n,p), (n,α), (n,np) etc. The cross sections<br />
for these neutron interactions are well known<br />
for the target nuclei being considered [6]. The production<br />
probability for each radionuclide of interest,<br />
per target nuclei, can be obtained by convolving the<br />
energy dependent cross sections with neutron energy<br />
spectra. The radiation environment of the SCT environment<br />
has previously been studied [1] and neutron<br />
energy spectra obtained. According to these studies,<br />
thermal neutrons account for more than a half of the<br />
total neutron fluences in and around the SCT barrel<br />
system. In the current study, the radionuclide production<br />
probabilities were evaluated for all the low-energy<br />
neutron interactions and, as expected, the<br />
dominant process was found to be thermal-neutron<br />
capture (n,γ).<br />
Values of radionuclide production per p-p event per<br />
module from (n,γ) interactions are obtained simply<br />
from N σ φ, where N is the number of atoms of a given<br />
isotope in the module, σ is the corresponding<br />
thermal neutron cross section, obtained from [7], and φ is<br />
the thermal neutron flux which, according to [1],<br />
has the value 2 × 10^? n cm⁻² s⁻¹ over the whole SCT<br />
barrel system. It should be noted, however, that the<br />
inner detector thermal neutron rates are likely to be<br />
smaller in reality than those predicted in the original<br />
fluence simulations. This is because elements such as<br />
boron, xenon, silver, gold etc., which have very high<br />
thermal neutron capture cross sections, had not been<br />
included.<br />
2. High energy inelastic interactions, or spallation. Unlike<br />
neutron interactions, radionuclide production<br />
cross sections from spallation are not available for<br />
all the target nuclei, and are often scarce for particles<br />
such as pions. In this study, hadron interaction<br />
models in the Monte Carlo particle transport code<br />
FLUKA are used [8]. The results then depend on the<br />
quality and coverage of the physics models, in particular:<br />
nuclear evaporation, the intranuclear cascade,<br />
nuclear fission and nuclear fragmentation. Of these<br />
the first three are considered to be well modelled<br />
but fragmentation, which is important for the heavier<br />
target nuclei, is not included in the code. The<br />
general features of residual-nuclei production predicted<br />
by FLUKA are in reasonable agreement with<br />
experimental data [9], except for light-nuclei production<br />
from heavy targets where fragmentation effects<br />
start showing up. Fortunately, the bulk of the target<br />
nuclei in SCT modules are in the light to medium<br />
mass range.<br />
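As an illustration, the thermal-capture estimate described above (production per p-p event = N·σ·Φ) is simple enough to sketch numerically. All input values below are assumptions chosen for demonstration (the ~1 mg of silver mentioned in the Conclusions, an assumed partial capture cross section to the metastable state, and an assumed per-event thermal fluence), not the paper's actual inputs.<br />

```python
# Illustrative sketch of the N*sigma*Phi thermal-capture estimate above.
# Every numeric input is an assumption for demonstration only.
N_A = 6.022e23                # Avogadro's number, atoms/mol

mass_ag_g = 1.0e-3            # ~1 mg of silver per module (see Conclusions)
frac_109ag = 0.4816           # natural abundance of 109Ag
sigma_barn = 4.7              # assumed partial capture cross section to 110mAg
sigma_cm2 = sigma_barn * 1.0e-24
phi_per_event = 1.0e-7        # assumed thermal-neutron fluence per p-p event, n/cm^2

n_atoms = mass_ag_g / 107.87 * N_A * frac_109ag       # 109Ag atoms in the module
prod_per_event = n_atoms * sigma_cm2 * phi_per_event  # 110mAg nuclei per p-p event
print(prod_per_event)
```

The same one-line product applies to any isotope in Table 1 of the module material inventory, given its atom count and capture cross section.<br />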
IV. EVALUATING ACTIVITY<br />
Knowledge of the various radionuclide production<br />
rates, along with their half-lives, allows the calculation of<br />
radioactivities, defined as the number of decays per second.<br />
The build-up and decay of activity, for each radionuclide,<br />
is given by:<br />
A = N · R · (1 − e^(−λ·t_i)) · e^(−λ·t_c)   (1)<br />
where t_i and t_c are the irradiation and cooling times<br />
respectively, N is the number of produced radionuclides per p-p<br />
event per module, R is the average number of p-p events<br />
per second and λ is the decay constant. In going from<br />
radionuclide production per p-p event to activity, it is<br />
necessary to make certain assumptions about the p-p event<br />
rates. The design luminosity of the LHC is 10³⁴ cm⁻²s⁻¹,<br />
resulting in a p-p interaction rate of 8×10⁸ s⁻¹ as predicted<br />
by PHOJET [10]. However, the average luminosity<br />
over longer timescales will be less, due to beam lifetimes<br />
etc., and an averaged luminosity value of 5×10³³ cm⁻²s⁻¹<br />
is assumed [11].<br />
Figure 1: Dominant activities produced by (a) (n,γ) interactions and (b) spallation.<br />
Shown in Figure 1 are the dominant contributions to activation<br />
from (n,γ) and spallation reactions. Presented in<br />
Table 2 are the activities obtained 1 day, 1 week and 1 month<br />
after shutdown, for the two cases of one year and ten years<br />
of high-luminosity running. A high-luminosity year is<br />
defined as 180 days of running (assuming the average<br />
beam luminosity of 5×10³³ cm⁻²s⁻¹) followed by 185<br />
days of shutdown.<br />
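The build-up and decay expression (1), with N the production per p-p event, R the average p-p event rate and λ the decay constant, can be sketched directly. The nuclide parameters below are illustrative only (a 2.7-day half-life, roughly that of ¹⁹⁸Au, and an assumed production rate):<br />

```python
from math import exp, log

def activity(n_per_event, rate_pp, half_life_s, t_irr_s, t_cool_s):
    # Equation (1): A = N * R * (1 - exp(-lambda*t_i)) * exp(-lambda*t_c)
    lam = log(2.0) / half_life_s
    return n_per_event * rate_pp * (1.0 - exp(-lam * t_irr_s)) * exp(-lam * t_cool_s)

DAY = 86400.0
R_PP = 8.0e8                  # p-p interactions per second at design luminosity
N_PROD = 1.0e-9               # assumed radionuclides produced per p-p event

a_end = activity(N_PROD, R_PP, 2.7 * DAY, 180 * DAY, 0.0)      # end of a 180-day run
a_week = activity(N_PROD, R_PP, 2.7 * DAY, 180 * DAY, 7 * DAY)
print(a_end, a_week)   # a short-lived nuclide saturates, then decays away quickly
```

This reproduces the qualitative pattern in Table 2: short-lived nuclides saturate within one running year and fall sharply during cooling, while long-lived ones keep building up over ten years.<br />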
V. CALCULATING DOSE RATES<br />
Dose rates are obtained at distances of 10 cm, 30 cm and<br />
100 cm from the centre of a barrel module. The calculations<br />
are facilitated by assuming each module is a point source<br />
of radioactivity. This assumption is good for distances<br />
larger than the dimensions of the module (ie 30 cm and<br />
100 cm) and will be conservative by some ~30 % for the<br />
value obtained at 10 cm.<br />
A. Dose rates from γ-emitters<br />
The dose rate D_γ from a γ-emitting nuclide can be obtained<br />
[12], for a point source, from:<br />
D_γ = M · E / (6 · d²)  µSv/h   (2)<br />
where M is the activity in MBq, E is the sum of energies<br />
in MeV weighted by their emission probabilities and d is<br />
the distance from the source in metres. The above formula<br />
is valid for photons in the range 0.05 to 2 MeV, which is the<br />
range covering most of the emitted γs.<br />
Using the activity values given in Table 2 and the relevant<br />
gamma decay and energy information, the total<br />
gamma dose is obtained by summing the contributions<br />
from each radionuclide calculated using equation 2 above.<br />
The results are given in Table 3.<br />
B. Dose rates from β-emitters<br />
The dose rate D_β from a β-emitting nuclide can be approximated<br />
[12] by:<br />
D_β = M / (100 · d²)  µSv/h   (3)<br />
with M and d as in equation 2. This expression assumes no<br />
absorption of the βs, which can be appreciable depending on<br />
the β energy. For example, the average β energy in ³H decay<br />
is 5.68 keV. The range in air of such βs is a few mm, so they<br />
will not contribute to external β-doses. However, most of the<br />
emitted βs have much higher energies, with ranges in air going<br />
up to several metres. In order to avoid grossly overestimating<br />
the β-doses, the maximum ranges in air of all β-emitters have<br />
been obtained and, if the range is shorter than the distance<br />
at which the dose is calculated, the nuclide is not included in<br />
the total β-dose estimate. The results are given in Table 3.<br />
The above strategy will still overestimate β-doses for two<br />
reasons: 1) the ranges are obtained assuming the maximum<br />
β-energy and 2) self-shielding of the module material has<br />
been neglected. However, the emphasis of the β-dose calculations<br />
is to provide reliable upper limits.
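The γ point-source rule of thumb used here (dose rate in µSv/h ≈ activity in MBq times the summed photon energy in MeV, divided by 6d² with d in metres) is easy to encode. This is the standard rule-of-thumb approximation for photons of roughly 0.05-2 MeV; a minimal sketch:<br />

```python
def gamma_dose_rate(m_mbq, e_mev, d_m):
    # Rule-of-thumb point-source photon dose rate (~0.05-2 MeV photons):
    # D [uSv/h] ~= M[MBq] * E[MeV per decay] / (6 * d[m]^2)
    return m_mbq * e_mev / (6.0 * d_m ** 2)

# Example: 1 MBq emitting 1 MeV of photons per decay, seen from 1 m
print(gamma_dose_rate(1.0, 1.0, 1.0))   # ~0.17 uSv/h
```

Summing this contribution over the radionuclides of Table 2, with their decay energies and emission probabilities, gives the γ columns of Table 3.<br />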
Table 2: Dominant radionuclides contributing to module activity.<br />
                 180 days irradiation              10 years irradiation<br />
                 cooling times                     cooling times<br />
Radionuclide     1 day     1 week    1 month      1 day     1 week    1 month<br />
³H               477.87    477.43    475.74       3759.17   3755.69   3742.42<br />
⁷Be              4139.23   3828.54   2838.78      4175.48   3862.06   2863.64<br />
²²Na             476.32    474.59    466.35       1894.87   1886.59   1855.19<br />
²⁴Na             505.16    0.64      -            505.16    0.64      -<br />
³¹Si             3.78      -         -            3.78      -         -<br />
³²P              13.31     9.95      3.26         13.31     9.95      3.26<br />
⁴⁸V              45.20     34.97     13.07        45.35     35.01     13.11<br />
⁵⁴Mn             114.69    113.18    107.55       206.61    203.88    193.74<br />
⁵⁶Co             106.19    100.73    82.28        110.65    104.96    85.74<br />
⁵⁷Co             153.14    150.81    142.19       252.27    248.43    234.23<br />
⁵⁸Co             541.38    510.49    407.57       557.01    525.23    419.33<br />
⁵⁹Fe             51.79     47.17     32.97        51.97     47.33     33.08<br />
⁶⁰Co             14.24     14.21     14.09        84.72     84.54     83.85<br />
⁶⁴Cu             22114.09  8.54      -            22114.09  8.54      -<br />
¹¹⁰ᵐAg           253.97    249.78    234.35       398.90    392.32    368.07<br />
Sn               9.81      9.64      8.24         11.04     10.65     9.27<br />
Sn               3.10      3.01      2.66         3.61      3.50      3.09<br />
¹²⁵Sn            65.14     42.32     8.09         65.14     42.32     8.09<br />
¹⁹⁸Au            5260.37   1127.36   3.07         5260.37   1127.36   3.07<br />
C. Dose rates from Bremsstrahlung<br />
While most of the energy of electrons or positrons<br />
is lost through ionisation, there will also be some<br />
bremsstrahlung, depending on the energy of the particle<br />
and the atomic number Z of the absorbing medium. According<br />
to [13], the total bremsstrahlung dose rate from a<br />
point source is given by an expression of the form:<br />
D_br ∝ M · Z · E_m² · I · µ / d²   (4)<br />
where M and d are as before, E_m is the maximum β energy<br />
in MeV, I takes into account internal bremsstrahlung<br />
and µ is the mass energy absorption coefficient of x-rays in<br />
cm² g⁻¹. Assuming values of 7, 5 and 0.03 for Z, I and µ<br />
respectively [13] for bremsstrahlung in air, it can be seen<br />
that equation 4 remains several orders of magnitude lower<br />
than equation 2 for all distances and energies. Dose rates<br />
from bremsstrahlung can therefore be neglected.<br />
D. Total dose rates for the whole barrel system<br />
It is also of interest to estimate dose rates for the entire<br />
SCT barrel system. This has been done assuming every<br />
module in the barrel system is a point source of identical<br />
activation. While this assumption is reasonable for the<br />
(n,γ) activation, it will overestimate spallation-related dose<br />
rates, as the relevant particle rates are higher in the first SCT<br />
barrel than in the other three barrels. Two 'access scenarios'<br />
are considered, shown in Figure 2; the corresponding<br />
results are given in Tables 4 and 5.<br />
Table 3: Total γ (β) dose rates (µSv/h) resulting from the activation of a single barrel module.<br />
                   180 days irradiation                              10 years irradiation<br />
Distance           cooling times                                     cooling times<br />
from the source    1 day            1 week           1 month         1 day            1 week           1 month<br />
10 cm              0.161 (28.1)     0.049 (1.3)      0.037 (0.13)    0.215 (28.4)     0.102 (1.4)      0.089 (0.24)<br />
30 cm              0.018 (3.1)      0.005 (0.15)     0.004 (0.015)   0.024 (3.3)      0.011 (0.16)     0.010 (0.027)<br />
1 m                1.61×10⁻³ (0.28) 0.49×10⁻³ (0.013) 0.37×10⁻³ (0.001) 2.15×10⁻³ (0.28) 1.02×10⁻³ (0.013) 0.89×10⁻³ (0.001)<br />
VI. CONCLUSIONS<br />
Concerning (n,γ) activation, the dominant radionuclides<br />
are ⁶⁴Cu, ¹⁹⁸Au and ¹¹⁰ᵐAg (see Figure 1). However,<br />
⁶⁴Cu and ¹⁹⁸Au have half-lives of 12.7 hours and 2.7 days<br />
respectively. Therefore, assuming access cannot be made<br />
to the SCT within 1 week of shutdown, module activity<br />
will be dominated by ¹¹⁰ᵐAg, which has a half-life of<br />
249.9 days.<br />
Figure 2: Considered access scenarios (a) Scenario 1 and (b) Scenario 2.<br />
Considering the small amount of silver assumed<br />
in the module construction (~1 mg), it is clear that<br />
a design using a higher silver content will result in a proportional<br />
increase in activities.<br />
Concerning spallation-induced activation, the dominant<br />
radionuclides are ³H, ⁷Be and ²²Na. However, when<br />
considering external dose rates from activation, ³H can be<br />
neglected, as discussed in Section V.B. Also, ⁷Be decays<br />
only ~10% of the time into gammas, leaving ²²Na as the<br />
most important radionuclide resulting from spallation. Interestingly,<br />
the target nucleus mainly responsible for ²²Na<br />
production is silicon, so this contribution is unavoidable.<br />
Inspection of Table 3 shows that even after 10 years<br />
of LHC running and at a distance of 10 cm, the module<br />
γ-doses will only be about 0.1 µSv/h. The corresponding<br />
β-doses (Table 3) are much higher, but after 1 week<br />
drop to about 1 µSv/h. According to the CERN radiation<br />
safety manual [14], dose rates less than 0.1 µSv/h at 10 cm<br />
are considered non-radioactive, while dose rates above<br />
0.1 µSv/h but less than 10 µSv/h at 10 cm are considered<br />
slightly radioactive. These low dose rates mean that if<br />
extraction were necessary then modules could be stored<br />
simply in supervised areas [14]. After a cool-down period of<br />
one month, γ-dose and β-dose values of ~0.1 µSv/h and<br />
0.24 µSv/h at 10 cm respectively are obtained. These values<br />
are similar to those of natural background activity.<br />
Inspection of Tables 4 and 5 shows that dose rates from<br />
the barrel ensemble will approach 10 µSv/h at 10 cm after a<br />
1-month cool-down time. As the predicted values are considered<br />
upper limits, if barrel extraction were necessary<br />
then it would probably only need to be kept in a supervised<br />
area. If the dose rates were higher than 10 µSv/h but less<br />
than 100 µSv/h, then the barrel ensemble would have to be<br />
stored in a 'controlled' area [14].<br />
Finally, it should be stressed that the current study has<br />
assumed a low-silver module design. If the silver content<br />
is much higher, as would be the case if silver-loaded conductive<br />
glues are used, then the results concerning ¹¹⁰ᵐAg<br />
should be scaled accordingly and the radiological<br />
implications would have to be re-evaluated.<br />
Table 4: Maximum γ (β) doses (µSv/h) for Scenario 1.<br />
                180 days irradiation                  10 years irradiation<br />
Approaching     cooling times                         cooling times<br />
point           1 day       1 week      1 month       1 day       1 week      1 month<br />
10 cm           6.2 (1084)  1.9 (50)    1.4 (4.1)     8.3 (1088)  3.9 (53)    3.4 (7.4)<br />
30 cm           4.0 (695)   1.2 (32)    0.92 (2.4)    5.3 (697)   2.5 (34)    2.2 (4.1)<br />
100 cm          1.3 (184)   0.38 (9.5)  0.29 (0.35)   1.7 (184)   0.79 (9.6)  0.69 (0.47)<br />
Table 5: Maximum γ (β) doses (µSv/h) for Scenario 2.<br />
                180 days irradiation                  10 years irradiation<br />
Approaching     cooling times                         cooling times<br />
point           1 day       1 week      1 month       1 day       1 week      1 month<br />
10 cm           9.2 (1598)  2.8 (74)    2.1 (6.4)     12 (1603)   5.8 (79)    5.1 (12)<br />
30 cm           5.1 (888)   1.6 (41)    1.2 (3.4)     6.8 (891)   3.2 (43)    2.8 (5.7)<br />
100 cm          1.5 (254)   0.45 (11)   0.34 (0.48)   1.99 (254)  0.95 (12)   0.83 (0.66)<br />
VII. REFERENCES<br />
[1] I.Dawson, Review of the Radiation Environment in<br />
the Inner Detector, ATL-INDET-2000-006<br />
[2] SCT Barrel Module, Final Design Reviews: SCT-BM-<br />
FDR-1, SCT-BM-FDR-2, SCT-BM-FDR-3 and SCT-<br />
BM-FDR-4. Available from:<br />
http://atlasinfo.cern.ch/Atlas/GROUPS/INNER<br />
DETECTOR/SCT/module/SCTbarrelmod.html<br />
[3] Table of material for the Cu/Polyimide hybrid,<br />
12 µm copper, version 4. Available from:<br />
http://jsdhp1.kek.jp/~unno/si_hybrid/k4/<br />
KhybridXo01feb26.pdf<br />
[4] Private communication, T.Weidberg<br />
[5] T.Niinikoski, Evaporative Cooling - Conceptual Design<br />
for ATLAS SCT, ATL-INDET-98-214<br />
[6] Evaluated Nuclear Data Files (ENDF), obtained<br />
from: http://www-nds.iaea.org/ndsstart.html<br />
[7] Chart of the Nuclides, produced by Knolls Atomic<br />
Power Laboratory; 14th edition, revised to April<br />
1988. (Using National Nuclear Data Center (NNDC)<br />
files.)<br />
[8] A.Fasso, A.Ferrari, J.Ranft and P.Sala, Full details<br />
can be found at FLUKA official website;<br />
http://fluka.web.cern.ch/fluka/<br />
[9] A.Ferrari and P.Sala, The Physics of High Energy Reactions,<br />
ATLAS Internal note, PHYS-NO-113, 1997<br />
[10] R. Engel and J. Ranft, Hadronic photon-photon interactions<br />
at high energies, Phys. Rev. D 54 (1996) 4244-4262<br />
[11] K.Potter and G.Stevenson, Average Interaction Rates<br />
for Shielding Specification in High Luminosity LHC Experiments,<br />
CERN AC/95-01, CERN/TIS-RP/IR/95-<br />
05.<br />
[12] K.J. Connor and I.S. McLintock, Radiation Protection,<br />
HHSC Handbook No. 14, 1997, ISBN 0-948237-21-X<br />
[13] I.S.McLintock, Bremsstrahlung from Radionuclides,<br />
HHSC Handbook No. 15, 1994, ISBN 0-948237-23-6<br />
[14] Radiation Safety Manual, CERN – TIS/RP, 1996
Development of a DMILL radhard multiplexer for the ATLAS Glink optical link and<br />
radiation test with a custom Bit ERror Tester.<br />
Daniel Dzahini, on behalf of the ATLAS Liquid Argon Collaboration<br />
Abstract<br />
A high speed digital optical data link has been developed for<br />
the front−end readout of the ATLAS electromagnetic<br />
calorimeter. It is based on a commercial serialiser commonly<br />
known as Glink, and a vertical cavity surface emitting laser.<br />
To be compatible with the data interface requirements, the<br />
Glink must be coupled to a radhard multiplexer that has been<br />
designed in DMILL technology to reduce the impact of<br />
neutron and gamma radiation on the link performance. This<br />
multiplexer must meet very severe timing constraints, related<br />
both to the Front-End Board output data and to the Glink<br />
control and input signals. The full link has been successfully<br />
neutron and proton radiation tested by means of a custom Bit<br />
ERror Tester.<br />
I) INTRODUCTION<br />
The Liquid Argon Calorimeter of the ATLAS experiment<br />
at the LHC is a highly segmented particle detector with<br />
approximately 200 000 channels. The signals are digitized<br />
on the front−end board and then transmitted to data<br />
acquisition electronics located 100 m to 200 m away. The<br />
front−end electronics has a high degree of multiplexing<br />
allowing the calorimeter to be read out over 1600 links each<br />
transmitting 32 bits of data at the bunch crossing frequency<br />
of 40.08 MHz. Radiation hardness is a major<br />
consideration in the design of the link, since the emitter side<br />
will be exposed to an integrated fluence of 3×10¹³ n/cm²<br />
(1 MeV Si equivalent) over 10 years of LHC running.<br />
II) OPTICAL LINK DESCRIPTION<br />
The demonstrator link is based on an Agilent<br />
Technologies HDMP1022/1024 serialiser/deserialiser. This<br />
Glink is used in a double frame mode: the incoming 32-bit<br />
digitized data at 40.08 MHz are multiplexed and<br />
sent as two separate 16-bit frame segments at 80.16 MHz with<br />
the use of an external multiplexer. The Glink chip set adds a<br />
4-bit control field to each 16-bit data segment, which results<br />
in a total data transfer rate of 1.6 Gb/s (see figure 1).<br />
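The quoted line rate follows directly from the frame arithmetic: two 16-bit segments per 40.08 MHz word, each extended by the 4-bit control field. A quick check:<br />

```python
word_rate_hz = 40.08e6            # 32-bit words at the LHC bunch-crossing frequency
frame_rate_hz = 2 * word_rate_hz  # two 16-bit frame segments per word (80.16 MHz)
bits_per_frame = 16 + 4           # 16 data bits + 4-bit Glink control field
line_rate_bps = frame_rate_hz * bits_per_frame
print(line_rate_bps)              # 1.6032e9, i.e. the quoted ~1.6 Gb/s
```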
Institut des Sciences Nucléaires<br />
53 avenue des Martyrs,38026 Grenoble Cedex France<br />
Figure 1: A 1.6 Gb/s optical link based on the G-link chipset<br />
(latch+MUX, HDMP-1022 serialiser, Methode transceivers,<br />
HDMP-1024 deserialiser, DeMUX+latch).<br />
The Glink serialiser outputs drive a VCSEL that transforms<br />
the electrical signal into light pulses transmitted over a<br />
Graded Index (GRIN) 50/125 µm multimode fibre to a PIN<br />
diode located on the receiver board. For the link described in<br />
this document the VCSEL and the PIN diode are packaged<br />
together with driving and discriminating circuits as<br />
transceiver modules manufactured by Methode. The PIN<br />
diode output signals are deserialised by the GLINK receiver<br />
chip (HDMP1024); then a programmable logic device<br />
(ALTERA EPM7128) is used for demultiplexing the 16-bit<br />
data into the basic 32-bit format.<br />
III) MULTIPLEXER CHIP<br />
The Glink is used in a double frame mode so that the full<br />
link has the capability to transfer the 32 bit format data<br />
(figure 2).<br />
Figure 2: The Glink chipset in a double frame mode.<br />
Figure 3: The multiplexer block diagram.<br />
This configuration requires an external multiplexer. One<br />
can see in figure 3 the block diagram of the multiplexer<br />
ASIC (MUX). Since this chip must be located on the FEB<br />
board, it must be radiation hard; the design was therefore<br />
done in the radhard DMILL technology.<br />
Figure 4: Schematic of the multiplexer<br />
Figure 5: Transmitter data interface and timing constraints<br />
(t_s = setup time, t_h = hold time, t_strb = STRBIN-to-STRBOUT<br />
delay, t_mux = 2:1 multiplexer delay).<br />
The data signals sent to the multiplexer use the LVDS<br />
logic standard; therefore the first stage of the MUX chip is an<br />
LVDS-to-CMOS level translator. At the second stage the<br />
data are registered, and finally multiplexed. In addition to<br />
the data signals, the FEB sends two validation signals (one<br />
for each 16-bit segment) which go through the MUX chip via<br />
the same logic flow. Hence the output register contains 16<br />
bits (for data) + 1 bit (for validation), plus 1 extra FLAG bit.<br />
In the double frame mode, this FLAG bit is used by the<br />
transmitter and receiver to distinguish the first or second<br />
frame segment. The schematic of the multiplexer chip is<br />
shown in figure 4. Note that the output registers<br />
are synchronized with STRBOUT, which is an 80.16 MHz latch<br />
clock generated by a PLL inside the serialiser. This clock<br />
features a 50% duty cycle, which is the best configuration<br />
for the Glink in a double frame mode. Figure 5 shows the<br />
transmitter data interface that the multiplexer must comply<br />
with. The t_strb delay is defined from the falling edge of<br />
STRBOUT to the corresponding rising or falling edge of<br />
STRBIN. The typical value for this delay is 4 ns.<br />
The data (at the MUX output) must be valid for a set-up<br />
time (t_s) before it is sampled and remain valid for a hold<br />
time (t_h) after it is sampled. The minimum value required in<br />
the data sheets [1] for both t_s and t_h is 2 ns.<br />
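A back-of-the-envelope budget check for these constraints can be sketched as follows. The 2 ns setup/hold minima come from the data sheets quoted above; the multiplexer switching delay used here is an assumed illustrative value, not a measured one:<br />

```python
frame_period_ns = 1.0e9 / 80.16e6   # one 16-bit frame period, ~12.5 ns
t_setup_min_ns = 2.0                # minimum setup time from the data sheets [1]
t_hold_min_ns = 2.0                 # minimum hold time from the data sheets [1]
t_mux_ns = 3.0                      # assumed 2:1 multiplexer switching delay

# Data are stable from t_mux after one STRBOUT edge until the next edge,
# so the stable window must cover the setup + hold requirement.
stable_window_ns = frame_period_ns - t_mux_ns
print(stable_window_ns >= t_setup_min_ns + t_hold_min_ns)   # True, with margin
```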
A double-channel version of the multiplexer has been<br />
designed and tested successfully with the full optical link [2].<br />
For the main 40.08 MHz clock, the link has shown a duty-cycle<br />
tolerance from 32% to 65%. The limits found for the<br />
general delays between the main clock edge and the<br />
incoming data sampling time are 1.2 ns and 13.2 ns. The total<br />
power dissipation is 0.6 W, i.e. 0.3 W per channel.<br />
IV) A CUSTOM BIT ERROR TESTER<br />
The Glink chip set provides link-down and single-error<br />
monitoring through a 4-bit control field which is<br />
appended to each 16-bit data field. The control field has a<br />
master transition (which the receiver uses for frequency<br />
locking) and includes information regarding the data type<br />
(control, data, fill frame). The control bits are analyzed by<br />
the deserialiser to provide two output flags: a 'link-down<br />
flag' occurs when the receiver cannot identify a frame to<br />
lock onto, and a 'single error flag' indicates an illegal<br />
control field in the transmitted frame.<br />
Initially the error detection was done mainly by<br />
monitoring the Glink's inbuilt error flags, as reported in [3],<br />
but later a custom Bit ERror Tester (BERT) was<br />
developed [4]. It helps to refine the testing and in particular<br />
to discriminate between different types of errors in the link.<br />
Besides, it permits several links to be tested simultaneously.<br />
The basic idea was to develop a system capable of sending a<br />
flow of ATLAS-like data in parallel through two different<br />
paths. One path is the reference one and the other follows the<br />
full optical link to be tested as described in figure 1. The<br />
BERT must also be able to synchronise, read and compare<br />
the out−coming data from both paths.<br />
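The comparison step can be pictured as a word-by-word XOR between the reference stream and the link output. A minimal software sketch follows; the real comparison is done in EPLD hardware on the comparison boards, and all names and data words here are illustrative:<br />

```python
def count_bit_errors(reference, received, width=32):
    # XOR matching words from the two paths and count the differing bits.
    mask = (1 << width) - 1
    errors = 0
    for ref_word, rx_word in zip(reference, received):
        errors += bin((ref_word ^ rx_word) & mask).count("1")
    return errors

ref = [0xDEADBEEF, 0x12345678, 0x0000FFFF]
rx  = [0xDEADBEEF, 0x12345679, 0x0000FFFF]   # one flipped bit in the second word
print(count_bit_errors(ref, rx))             # -> 1
```

Classifying the mismatches (single bit, multiple bits, or whole frames lost) is what allows the different error types listed in Section V to be distinguished.<br />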
The BERT system includes EPLD-based boards plugged<br />
into a VME crate. It is coupled to a pseudo-random pattern<br />
generator based on the CompuGen 3250 board from Gage [5],<br />
and provides an interface to a computer for on-line<br />
monitoring. The details of this BERT system can be seen in<br />
figure 6.<br />
Figure 6: Details of the custom BERT set-up (pattern-generator<br />
interface with error injection, control board, transmitters, optical<br />
links, receivers, VME comparison boards and testing computer).<br />
The CONTROL board:<br />
It provides interfaces both to the pattern generator and to<br />
the acquisition and configuration computer. It sends data<br />
simultaneously via the VME bus (reference data) and<br />
through a set of optical links (including the multiplexer) to<br />
be tested in parallel.<br />
The COMPARISON board:<br />
It reads the reference data sent on the VME bus and<br />
performs a bit to bit comparison with the data transmitted<br />
through an optical link. This comparison result is sent to the<br />
CONTROL board via the VME bus.<br />
The slow control of any step, from the data generation to<br />
the comparison result acquisition, is done by a computer.<br />
By means of this BERT system, we have successfully<br />
tested the Glink in our laboratory for many weeks, and also<br />
during the irradiation tests.<br />
V) RADIATION TEST WITH THE BIT ERROR<br />
TESTER<br />
Several link sender boards were exposed to neutron flux<br />
to assess the radiation tolerance of the DMILL MUX, the<br />
Glink serialiser and the Methode transceiver. During the<br />
radiation tests, the behaviour of the link was monitored on−<br />
line by means of the BERT coupled to a pseudo−random<br />
pattern generator [5].<br />
The radiation tolerance of a G-link serialiser coupled to<br />
a Methode transceiver has been proved under neutron<br />
irradiation up to an integrated fluence of 5×10¹³ (1 MeV Si)<br />
neutrons/cm².<br />
However, transient data transmission errors were observed<br />
and identified as Single Event Upsets (SEU).<br />
The interaction of neutrons with silicon produces<br />
secondary charged particles, which can traverse the<br />
active devices of the electronics chip. A fraction of the<br />
charge released along the ionizing particle paths is collected<br />
at one of the circuit nodes; if it is high enough, the resulting<br />
transient current may produce a SEU. In order to estimate<br />
the SEU rate, the main parameters that need to be taken into<br />
account are the sensitive volume of the chip within which<br />
the ionization takes place, and the critical energy that must<br />
be exceeded before an upset is triggered.<br />
Initially the link error detection was performed by<br />
monitoring the G-link's inbuilt error flags, as reported in [3],<br />
and later by means of a custom Bit ERror Tester (BERT)<br />
coupled to a pseudo-random pattern generator [4].<br />
Four different types of errors were identified:<br />
• single bit flip (relative rate 72%)<br />
• n-bit flips (relative rate 9%)<br />
• clock corruption in the transmitted frame for a few 40.08<br />
MHz clock counts (relative rate 9%)<br />
• link-down error, which in addition to a loss of data<br />
leads to a loss of clock information (relative rate 10%)<br />
The experimental data were then interpreted using two<br />
different methods that are described in [3] and [6]. These<br />
methods lead to a predicted ATLAS error rate as high as<br />
0.65 ± 0.30 errors/link/hour.<br />
A test was carried out with a 60 MeV proton beam at the<br />
CRC in Louvain-la-Neuve (Belgium) in June 2001. In this<br />
experiment, the method used to analyse the data recorded<br />
with the BERT system was the one described in Ref. [7].<br />
Good agreement with the results obtained at CERI was found.<br />
In addition, it confirms that the proton flux (as well as the<br />
neutron flux) has very little influence on the DMILL<br />
multiplexer performance: it induces less than 0.1% of<br />
the total SEU error rate.<br />
VI) CONCLUSION<br />
The radiation tolerance of the sender part of the link has<br />
been demonstrated under neutron radiation up to 10¹⁴ n cm⁻².<br />
Transient data transmission errors (Single Event Upsets) were<br />
observed by means of the BERT set-up, but it has been<br />
shown that the contribution of the DMILL MUX to this error<br />
rate is negligible.<br />
VII) REFERENCES<br />
[1] Manufactured by Hewlett Packard, Agilent<br />
Technologies, P.O. Box 10395, Palo Alto, CA 94303,<br />
http://www.semiconductor.agilent.com.<br />
[2] B. Dinkespiller et al., 'Redundancy or GaAs? Two<br />
different approaches to solve the problem of SEU (Single<br />
Event Upset) in a Digital Optical Link', 6th Workshop on<br />
Electronics for LHC Experiments, Krakow, Poland, 11-15<br />
September 2000.<br />
[3] M.-L. Andrieux et al., Nucl. Instr. and Meth. A 456<br />
(2001), p. 342.<br />
[4] M.-L. Andrieux et al., 'Single Event Upset studies under<br />
neutron radiation of a high speed digital optical data link',<br />
Proceedings of the IEEE conference, Lyon, France, October<br />
15-20, 2000.<br />
[5] Gage Applied Sciences, Inc., 2000, 32nd Avenue<br />
Lachine, Montreal, GC Canada H8T 3H7, http://www.gage−<br />
applied.com/.<br />
[6] M. Huhtinen et al., 'Computational method to estimate<br />
Single Event Upset rates in an accelerator environment',<br />
Nucl. Instr. and Meth. A 450 (2000), p. 155-172.<br />
[7] Ph. Farthouat et al., 'ATLAS policy on radiation tolerant<br />
electronics', ATLAS document No. ATC-TE-QA-0001, 21<br />
July 2000.
Direct Study of Neutron Induced Single-Event Effects<br />
Z. Doležal 1,4 , J. Brož 1 , … 2 , D. Chren 2 , … 2 , … 2 ,<br />
P. Kodyš 1 , C. Leroy 3 , S. Pospíšil 2 , B. Sopko 2 , A. Tsvetkov 1 and I. Wilhelm 1<br />
Abstract<br />
A facility for direct study of neutron induced Single Event<br />
Effects (SEE) has been developed in Prague using collimated<br />
and monoenergetic neutron beams available on the Charles<br />
University van de Graaff accelerator. In this project, silicon<br />
diodes and the LHC Voltage Regulator are being irradiated by<br />
neutrons of different energies (60 keV, 3.8 MeV and 15<br />
MeV). Furthermore, the associated particle method is used, in<br />
which 15 MeV neutrons produced in the ³H(d,n)⁴He reaction<br />
are tagged. The measurements in progress should allow an<br />
estimate of the probability of neutron interactions per sensitive<br />
volume of the junction and of the probability of SEE<br />
occurrence in the LHC Voltage Regulator chip.<br />
I. INTRODUCTION<br />
CMOS integrated circuits (CMOS ICs) are widely used in<br />
space, aviation and particle accelerator environments, i.e. in<br />
high-radiation environments. The use of submicron CMOS<br />
processes in these adverse radiation environments requires the<br />
application of special architectural and layout techniques.<br />
Failures can come not only from total dose effects but also<br />
from so-called Single Event Effects (SEE), believed to be<br />
responsible for latch-up that can destroy ICs completely or<br />
render them unusable indefinitely or for various periods of<br />
time. Therefore, there is a need to understand the importance<br />
of SEE in specific operational environments and to find ways<br />
of quantifying the tolerance of the different technologies to<br />
these effects. We have tried to determine the probability of<br />
latch-up phenomena in CMOS ICs caused by fast neutrons.<br />
The experimental tests are aimed at estimating the SEE<br />
provoked by the following specific interactions of neutrons in<br />
the silicon chip: ²⁸Si(n,n')²⁸Si, ²⁸Si(n,n)²⁸Si, ²⁸Si(n,α)²⁵Mg,<br />
²⁸Si(n,p)²⁸Al and ¹⁰B(n,α)⁷Li.<br />
II. STUDY OF ENERGY THRESHOLD BEHAVIOUR OF<br />
IC-FAILURES<br />
The experimental set-up of a collimated, monochromatic<br />
and tagged neutron beam has been realised at the van de<br />
Graaff accelerator of Charles University, in collaboration<br />
with the University of Montreal and the Czech Technical<br />
University in Prague. Neutrons are produced using several<br />
production reactions; an overview of the reactions is given<br />
in Table 1. Beams of<br />
1 Charles University, Prague, Czech Republic<br />
2 Czech Technical University in Prague, Prague, Czech Republic<br />
3 University of Montreal, Montreal, Canada<br />
4 corresponding author, e-mail:Zdenek.Dolezal@mff.cuni.cz<br />
neutrons with energies of 60 keV (with an energy spread of 10<br />
keV), 3.8 MeV (100 keV) and 15 MeV (100 keV) are<br />
available. This neutron beam energy range<br />
allows one to probe the expected energy threshold<br />
behaviour of SEE.<br />
The experimental setup for neutron production is shown in<br />
Fig. 1. A beam of accelerated deuterons or protons strikes a<br />
tritium or deuterium target with a molybdenum backing.<br />
The interaction point of the beam with the target represents a<br />
nearly point-like source of monochromatic neutrons.<br />
Figure 1: Experimental arrangement of neutron production target.<br />
d (p): beam of deuterons (protons) from the van de Graaff<br />
accelerator; D1: deuterium (tritium) target on molybdenum backing;<br />
n: neutron beam<br />
Monitoring of the neutron flux is based on a Bonner sphere<br />
spectrometer. This method allows the neutron dose to be<br />
determined precisely for each individual irradiation at a given<br />
neutron energy. Furthermore, after calibration of its energy<br />
sensitivity, irradiations performed at different energies can be compared.<br />
III. DIRECT SINGLE EVENT EFFECT OBSERVATION<br />
In the case of the ³H(d,n)⁴He reaction, the so-called<br />
associated-particle technique can be applied to obtain a tagged<br />
neutron beam. The technique is based on the spectroscopic<br />
detection of the produced alpha particles by means of a silicon<br />
diode, which provides information about the neutron emission<br />
time with an uncertainty of about 10 ns. The principle of the<br />
technique is shown in Fig. 2. A registered recoil alpha<br />
particle serves as a tag for a neutron moving in a kinematically<br />
determined direction (e.g. towards an IC). The conical<br />
neutron beam is collimated to a diameter of 3-4 mm at a<br />
distance of 15 cm from the target. The intensity of this beam is<br />
around 10⁶ n/s, giving 10⁷ n cm⁻² s⁻¹. A detailed description<br />
of the tagged neutron beam facility is given in [1].
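As a consistency check, the quoted flux follows from simple beam-spot geometry. The sketch below (an illustration; it takes the 3-4 mm diameter at its midpoint) reproduces the figure:<br />

```python
import math

def flux_from_rate(rate_n_per_s, beam_diameter_cm):
    """Convert a collimated-beam neutron rate into a flux (n cm^-2 s^-1)."""
    spot_area = math.pi * (beam_diameter_cm / 2.0) ** 2  # beam spot area, cm^2
    return rate_n_per_s / spot_area

# 10^6 n/s through a ~3.5 mm (0.35 cm) diameter spot
flux = flux_from_rate(1.0e6, 0.35)
```

which gives roughly 1.0×10⁷ n cm⁻² s⁻¹, consistent with the quoted value.<br />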
Figure 2: Sketch of kinematics of the associated particle method.<br />
The upper and lower cones denote outgoing neutrons and alpha<br />
particles, respectively. Aluminium foils are used to absorb elastically<br />
scattered incident deuterons.<br />
[Figure 3 block elements: recoil particle detector, fast<br />
discriminator, coincidence unit, scaler, time-to-digital<br />
converter, PC, IC sample, IC failure event generator]<br />
Figure 3: Block diagram for SEE monitoring using associated<br />
particle method<br />
To estimate the probability of SEE, the neutron beam<br />
intensity is monitored both by the registration of associated<br />
particles and directly by the Bonner sphere neutron spectrometer.<br />
The circuitry for SEE registration is shown in Fig. 3. The IC<br />
failure event generator is connected to the coincidence unit<br />
together with the associated particle detector. This ensures<br />
that only effects correlated with the neutrons are counted. In<br />
addition to the coincidence unit, the time between associated<br />
particle registration and the SEE is measured with a TDC, to<br />
give more information about the particular IC failure.<br />
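The coincidence selection can be sketched in software. The following is an illustrative model only (not the actual DAQ code); the tag and failure timestamps are hypothetical:<br />

```python
import bisect

def coincident_failures(alpha_tags_ns, failure_times_ns, window_ns=10.0):
    """Keep only IC-failure events that fall within `window_ns` of an
    associated-particle (alpha) tag; uncorrelated failures are rejected."""
    tags = sorted(alpha_tags_ns)
    hits = []
    for t in failure_times_ns:
        i = bisect.bisect_left(tags, t)  # nearest tag is tags[i-1] or tags[i]
        nearest = min(
            (abs(t - tags[j]) for j in (i - 1, i) if 0 <= j < len(tags)),
            default=float("inf"),
        )
        if nearest <= window_ns:
            hits.append(t)
    return hits
```

In hardware the same selection is made by the coincidence unit, with the TDC retaining the measured tag-to-failure time difference.<br />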
IV. APPLICATIONS<br />
Tests of two types of devices are currently in progress<br />
using both methods. Silicon diodes of different sizes were<br />
exposed to the neutron beam and their response is being<br />
studied. These measurements should allow an estimate of the<br />
reaction probability per sensitive volume of the junction and<br />
of the amount of energy deposited by the reaction products in<br />
this volume, using pulse-height analysis of the silicon diode signal.<br />
The same measurements are carried out with LHC Voltage<br />
Regulator (RD49 project [2]). The results will be published.<br />
V. REFERENCES<br />
[1] I. Wilhelm, P. Murali and Z. Dolezal, "Production of<br />
monoenergetic neutrons...", Nucl. Instrum. Methods<br />
Phys. Res. A 317 (1992) 553<br />
[2] CERN-LHCC RD49 Project,<br />
http://rd49.web.cern.ch/RD49/<br />
Reaction | T(p,n)³He | D(d,n)³He | T(d,n)⁴He<br />
Neutron energy | 60 keV | 3.8 – 5 MeV (tunable) | 15 MeV<br />
Energy spread | 10 keV | 100 keV | 300 keV<br />
Reaction energy Q | −0.97 MeV | 4.6 MeV | 17.8 MeV<br />
Intensity | 10⁵ n/s | 10⁵ n/s | 10⁶ n/s<br />
Table 1: Overview of neutron production reactions
THE ATLAS READ OUT DATA FLOW CONTROL MODULE<br />
AND THE TTC VME INTERFACE PRODUCTION STATUS<br />
Per Gällnö, CERN, Geneva, Switzerland<br />
(email: per.gallno@cern.ch)<br />
Abstract<br />
The ATLAS detector data flow from the Front<br />
End to the Read Out Drivers (ROD) has to be<br />
controlled in order to prevent the ROD data<br />
buffers from filling up and data from being<br />
lost. This is achieved using a throttling<br />
mechanism that slows down the Central<br />
Trigger Processor (CTP) Level One Accept<br />
rate. The information about the state of the<br />
data buffers from hundreds of ROD modules<br />
is gathered in daisy-chained fan-in ROD-<br />
BUSY modules to produce a single Busy signal<br />
for the CTP. The features and the design of the<br />
ROD-BUSY module are described in this<br />
paper.<br />
The RD-12 TTC system VMEbus interface,<br />
TTCvi, will be produced by an external<br />
electronics manufacturer and will then be<br />
made available to the users via the CERN<br />
EP/ESS group. The status of this project is<br />
given.<br />
INTRODUCTION<br />
Dead-Time Control Review<br />
The data flow in the ATLAS sub-detector<br />
acquisition systems needs to be controlled in<br />
order to prevent information loss in case<br />
the data buffers in the Front End, Read Out<br />
Drivers (ROD) or Read Out Buffers (ROB)<br />
become saturated [1],[2].<br />
Three different mechanisms to control the data<br />
flow will be implemented:<br />
• By back pressure, using an XON/XOFF<br />
protocol on the read-out links between the<br />
RODs and the ROBs.<br />
• By throttling, to slow down the level one<br />
(LVL1) trigger rate from the CTP when<br />
the ROD data buffers are nearly full.<br />
• By prevention, introducing a constant<br />
dead-time combined with one set by a pre-programmed<br />
algorithm in the CTP in order<br />
to avoid buffer overflow in the Front End.<br />
The constant dead-time is chosen to be 4<br />
BCs after each LVL1 and the algorithm,<br />
called "leaky bucket", limits the number of<br />
LVL1 accepts to 8 in any window of 80 µs [3].<br />
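The two prevention rules can be modelled in a few lines. The sketch below is an illustration only (not the CTP implementation): it applies the 4-BC fixed dead-time and the 8-per-80-µs leaky-bucket limit to a list of candidate LVL1 times:<br />

```python
BC_NS = 25                 # LHC bunch-crossing period in ns
SIMPLE_DEADTIME_BC = 4     # fixed dead-time after each accepted LVL1
WINDOW_NS = 80_000         # sliding window of the "leaky bucket" rule
MAX_IN_WINDOW = 8          # at most 8 LVL1 accepts per window

def filter_lvl1(candidate_times_ns):
    """Return the candidate LVL1 times surviving both dead-time rules."""
    issued = []
    for t in candidate_times_ns:
        if issued and t - issued[-1] <= SIMPLE_DEADTIME_BC * BC_NS:
            continue  # vetoed by the 4-BC fixed dead-time
        if len([u for u in issued if t - u < WINDOW_NS]) >= MAX_IN_WINDOW:
            continue  # vetoed by the leaky bucket (window full)
        issued.append(t)
    return issued
```

Fed with candidates every 1 µs, the model accepts the first eight, then blocks further accepts until the window starts to empty 80 µs later.<br />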
The introduction of dead-time by a throttling<br />
mechanism is based on a ROD Busy signalling<br />
scheme that informs the Central Trigger<br />
Processor about the state of the ROD data<br />
buffers, as each ROD is able to produce a<br />
ROD-Busy signal when its buffer is filling up.<br />
The Busy signals from the RODs are summed<br />
and monitored in ROD-Busy modules<br />
connected in a tree structure to finally produce<br />
a veto signal for the CTP. The ROD Busy<br />
signalling scheme and the associated hardware<br />
are described below.<br />
ROD BUSY STATUS SUMMING<br />
AND MONITORING<br />
System Overview<br />
The Read-Out Drivers (ROD), of which there<br />
will be several hundred in the ATLAS<br />
experiment, buffer, process and format the data<br />
from the Front End electronics before being<br />
sent to the Read-Out Buffers (ROB).<br />
If the data buffers in the ROD are close to<br />
getting filled up, the Level-1 trigger rate must be<br />
reduced. A way of achieving this is to send a<br />
busy flag to the CTP to introduce dead-time [4].<br />
[Figure 1 elements: up to 15 Busy inputs from the RODs of each<br />
sub-detector (Sub-detector-1 … Sub-detector-n) feed ROD BUSY<br />
modules at sub-system and sub-detector level, whose outputs are<br />
combined into a single Veto for the CTP]<br />
Figure 1. The ROD-Busy tree structure<br />
Each ROD produces a Busy signal, which is<br />
sent to a ROD-Busy module together with the<br />
Busy signals from other RODs in the same<br />
sub-system. The ROD-Busy module sums the<br />
incoming Busy signals to produce one Busy<br />
signal for the particular sub-system. In turn, the<br />
sub-system Busy signal is summed with other<br />
sub-system Busy signals in another Busy<br />
module to form a sub-detector Busy signal.<br />
Finally, all sub-detector Busy signals are<br />
gathered to form the Busy input to the CTP.<br />
THE ATLAS ROD-BUSY<br />
MODULE FEATURES<br />
Basic Operation<br />
The ROD-Busy module [5] has been designed<br />
to perform the following functionality:<br />
• Collect and make a logical OR of up to 16<br />
Busy input signals.<br />
• Monitor the state of any input Busy signal.<br />
• Mask off any input Busy signal in case<br />
a ROD is generating a false Busy state.<br />
• Measure the integrated duration for which any Busy<br />
input is asserted over a given time period.<br />
• Store a history of the integrated Busy<br />
duration for each input.<br />
• Generate an interrupt if any Busy input is<br />
asserted for longer than a pre-set time<br />
limit.<br />
• Generate a Busy output serving as an<br />
input for a subsequent ROD-Busy module<br />
in the tree structure or as a veto for the<br />
CTP.<br />
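The collect/mask/sum core of these functions can be sketched behaviorally. The class below is an illustration only; the names are ours, not the module's VHDL entities:<br />

```python
class RodBusyModel:
    """Behavioral sketch of the ROD-Busy summing core: 16 Busy inputs,
    a mask register and a global Busy output (names are illustrative)."""

    def __init__(self):
        self.mask = 0x0000        # 1 = input masked off (e.g. a faulty ROD)
        self.force_busy = False   # control bit: global Busy for system tests

    def busy_out(self, inputs: int) -> bool:
        """Logical OR of the unmasked inputs (16-bit bitmap), or forced Busy."""
        return self.force_busy or bool(inputs & ~self.mask & 0xFFFF)
```

Masking one bit suppresses a stuck input without affecting the other fifteen, mirroring the isolation of a faulty ROD described above.<br />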
Software Controlled Mode<br />
In this mode of operation, the resetting and<br />
enabling of the integrating duration counters,<br />
as well as the resetting, writing and reading of<br />
the history FIFO buffers, are done entirely under<br />
program control. The empty and full<br />
status flags of each FIFO are available to the<br />
VMEbus.<br />
Circular Buffer Mode<br />
In this mode of operation, the transfer of data<br />
from the Busy duration counters to the FIFO<br />
buffers is controlled by a timed sequencer. Bits<br />
may be set in a register in order to allow<br />
circular buffer operation, i.e. a word is read out<br />
from the FIFO for each word written when the<br />
FIFO full flag is present. The maximum time<br />
between two consecutive data transfers, from<br />
counter to FIFO, is 6.55 ms. This time may be<br />
adjusted in a 16-bit VME register.<br />
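The 6.55 ms maximum is simply the full range of a 16-bit counter clocked at 10 MHz:<br />

```python
CLOCK_HZ = 10_000_000   # sequencer clock
COUNTER_BITS = 16       # width of the down counter / shadow register

# Longest programmable interval between two counter-to-FIFO transfers
max_period_s = (2 ** COUNTER_BITS) / CLOCK_HZ
print(f"{max_period_s * 1e3:.2f} ms")  # prints 6.55 ms
```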
[Figure 2 blocks: Busy inputs 1–16 with test driver and test register,<br />
monitor latch, mask register, 16-bit counters feeding FIFOs,<br />
sequencer with 10 MHz clock, VME IRQ generator and output driver]<br />
Figure 2. ROD-Busy module block diagram<br />
Additional Features<br />
• Each input path may be tested by setting<br />
bits in a VME test register.<br />
• A status bit reflects the state of the Busy<br />
Out.<br />
• A bit may be set in a control register in<br />
order to turn on a global Busy signal on all<br />
Busy Outputs.<br />
• The Busy Time-Out service requester may<br />
be controlled by software functions, i.e.<br />
enable, disable, set and clear of the service<br />
request.<br />
• The VMEbus interrupter may be tested<br />
with a software function.<br />
• The module may be globally reset by a<br />
software function.<br />
Modular VHDL blocks<br />
The code for the different functional blocks<br />
has been written in VHDL and may be<br />
obtained on request, in case a designer<br />
wants to implement the Busy module functions<br />
directly in a ROD module.<br />
The following VHDL entities are made<br />
available:<br />
1. Input monitoring, masking, stimulating<br />
and summing.<br />
2. Quad 16-bit up-counter.<br />
3. FIFO read/write sequencer.<br />
4. VME slave and interrupter interface.<br />
5. Busy time-out service requester to drive<br />
interrupter.<br />
MODULE DESIGN DESCRIPTION<br />
Input signal receivers and test drivers<br />
The inputs are terminated with a Thévenin<br />
network resulting in a 50 Ω resistive input<br />
impedance, calculated to give a +0.8 V<br />
idle voltage. A Busy TRUE input corresponds<br />
to a 0 V level and a Busy FALSE to a +0.8 V<br />
level. The input voltage threshold is set to<br />
+0.4 V, and the ultra-fast input comparators have<br />
an internal hysteresis circuit producing clean<br />
input signals even when receiving data over<br />
long lines. All inputs may be monitored by<br />
reading a 16-bit input status VME register.<br />
Each input may be tested by being pulled<br />
down by an internal open-collector driver<br />
connected in turn to a 16-bit VME test register.<br />
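The resistor values of such a Thévenin network are not quoted in the text, but they follow from the 50 Ω / +0.8 V constraints. The helper below is an illustration; the +5 V rail is our assumption (the text only quotes +5 V for the output pull-ups):<br />

```python
def thevenin_pair(v_supply, r_th, v_idle):
    """Resistors R1 (to the supply) and R2 (to ground) realising a Thevenin
    termination of resistance r_th and open-circuit voltage v_idle."""
    ratio = v_idle / v_supply          # equals R2 / (R1 + R2)
    return r_th / ratio, r_th / (1.0 - ratio)

# 50 ohm resistive impedance, +0.8 V idle, assumed +5 V rail
r1, r2 = thevenin_pair(5.0, 50.0, 0.8)  # -> R1 = 312.5 ohm, R2 ~ 59.5 ohm
```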
Output signal drivers<br />
The four Busy Out outputs are driven by FAST<br />
TTL open-collector drivers. The outputs have<br />
the following characteristics and usage:<br />
0. Pulled up to +5 V by 10 kΩ; should<br />
be used to drive a following Busy Input or<br />
the CTP Busy Input.<br />
1. Same as 0.<br />
2. Pulled up to +5 V by 510 Ω; should<br />
be used for monitoring purposes, i.e. with an<br />
oscilloscope etc.<br />
3. Same as 2.<br />
Busy Input masking and summing<br />
The cleaned-up input signals drive the Busy<br />
Summing circuit and the Busy Duration<br />
counters. The input signals to the Summing<br />
circuit may be masked off in order to isolate<br />
faulty ROD units. The Summing circuit<br />
produces a global Busy signal, which is fed to<br />
the four Busy Out outputs. A control bit may<br />
be set to produce a global Busy Out for system<br />
test purposes. This block is implemented in a<br />
FPGA named ip_reg_structure.<br />
Duration Counting<br />
The 16 bit duration counters increment at a<br />
speed of 10 MHz as long as there are Busy In<br />
signals on the inputs. There are global counter<br />
enable and reset functions generated by either<br />
accessing VME control bits or by the Buffer<br />
Sequencer. The sixteen counters are<br />
implemented in four FPGAs named<br />
quad_count_struct.<br />
Duration Count Buffering and Read-<br />
Out<br />
The 512-word deep FIFOs buffer the Duration<br />
Counter data until read out via the VMEbus.<br />
Global FIFO write cycles and reset<br />
functions are generated either by accessing VME<br />
control bits or by the Buffer Sequencer. The<br />
FIFO read cycles are done either by the<br />
VMEbus or by the Buffer Sequencer. Control<br />
bits enable the FIFOs to be configured as a<br />
circular buffer, i.e. they always maintain the<br />
history of the last 512 entered Duration Count<br />
values. If not configured as circular buffers,<br />
the FIFOs will only contain the first 512<br />
entered Duration Count values.<br />
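The two buffering behaviors can be sketched as follows; this is an illustrative model, not the firmware:<br />

```python
from collections import deque

def buffered_counts(samples, depth=512, circular=True):
    """FIFO behavior sketch: circular mode keeps the *last* `depth`
    duration counts; otherwise only the *first* `depth` are retained."""
    if circular:
        return list(deque(samples, maxlen=depth))  # oldest word dropped when full
    return list(samples)[:depth]                   # further writes ignored once full
```

With 600 samples, circular mode ends up holding samples 88-599, while the non-circular FIFO holds samples 0-511.<br />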
Duration Counter/Buffer Sequencer<br />
The sequencer, when enabled, handles the<br />
control of the Duration counters and the<br />
FIFOs. A 16-bit down counter with a VME-<br />
programmable shadow register, clocked by the<br />
10 MHz clock, is used to set the rate for<br />
transferring the duration counts to the FIFOs.<br />
This block is implemented in an FPGA named<br />
fifo_sequencer.<br />
Global Busy Time-Out Service<br />
Requester<br />
The Time-Out circuit monitors the duration of<br />
the global busy signal and generates a service<br />
request if a certain time limit is reached. Two<br />
sets of 16 bit counters, magnitude comparators<br />
and VME programmable registers are used for<br />
this monitoring circuitry. An Interval<br />
counter/comparator/register circuit sets the<br />
frequency at which the two counters are reset.<br />
The Limit counter/comparator/register circuit,<br />
whose counter increments during the time the<br />
Busy is true, generates a service request if<br />
the pre-programmed level is attained before<br />
being reset by the Interval circuit. Both<br />
counters are incremented at 10 MHz. The<br />
Time-Out service request may be programmed<br />
to trigger a VMEbus interrupt. This block is<br />
implemented in an FPGA named<br />
sreq_timer_struct.<br />
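The interval/limit scheme can be sketched behaviorally. In this illustration (parameter values hypothetical, not the module's registers) each element of `busy_flags` represents one 10 MHz clock tick:<br />

```python
def timeout_requests(busy_flags, interval=1000, limit=800):
    """Sketch of the time-out requester: a limit counter increments on each
    tick while Busy is true; both counters are cleared every `interval`
    ticks; a service request fires when `limit` is reached first."""
    requests = 0
    interval_cnt = busy_cnt = 0
    for busy in busy_flags:
        if busy:
            busy_cnt += 1
            if busy_cnt == limit:
                requests += 1            # Busy asserted too long in this interval
        interval_cnt += 1
        if interval_cnt == interval:     # periodic reset of both counters
            interval_cnt = busy_cnt = 0
    return requests
```

A Busy that stays asserted across whole intervals fires one request per interval; a Busy asserted for less than the limit in each interval fires none.<br />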
VMEbus Data Bus Interface<br />
The VMEbus slave interface is of<br />
conventional type and accepts only 16-bit word<br />
data cycles (D16). The addressing can be either<br />
standard or short (A24 or A16). Address<br />
pipelining and address-only cycles are<br />
accepted. Four hexadecimal switches are used
for setting the base address of the module. This<br />
block is implemented in a FPGA named<br />
vme_if.<br />
VMEbus Interrupt generator<br />
A VMEbus interrupt can be generated when a<br />
Time-Out service request occurs. The interrupt<br />
generator is controlled by a control register in<br />
which the VME Interrupt Request level is<br />
programmed and the interrupter is enabled.<br />
Another register contains the Status/ID<br />
information in an 8-bit format (D). This<br />
block is also implemented in the FPGA named<br />
vme_if.<br />
Module Configuration EEPROM<br />
Manufacturer/module identification and serial<br />
number, as well as module revision number<br />
should be stored in this non-volatile memory<br />
chip. There are spare locations for storing<br />
supplementary information. A strap must be<br />
installed in order to program this memory chip.<br />
ISP Module Firmware programming<br />
All ALTERA® FPGA chips, except for the<br />
VMEbus interface chip, are programmed with<br />
an In-System Programming scheme, using a<br />
"Byte-Blaster" adapter connected to a PC,<br />
where the ALTERA MAX-PLUS®<br />
programming software is installed.<br />
System Clock generation and<br />
distribution<br />
An internal 10 MHz system clock generator is<br />
implemented and a clock driver fan-out chip is<br />
used to drive the seven impedance matched<br />
clock lines, each terminated with a series RC<br />
network.<br />
STATUS AND DOCUMENTATION<br />
The design of the ROD-Busy module is<br />
finished, and two prototypes have been<br />
debugged and tested; they are now available<br />
for evaluation. The prototypes are packaged as<br />
6U/4TE form factor VME modules. A<br />
conversion kit for implementation on larger<br />
VME boards is also foreseen.<br />
A technical manual has been produced for the<br />
ROD Busy Module, which can be retrieved<br />
from the CERN EDMS system, together with<br />
all the engineering data.[6]<br />
CONCLUSION<br />
The implementation of the ROD-Busy<br />
modules and their associated tree-structured<br />
signal gathering scheme makes it possible to<br />
efficiently control the dead-time in the ATLAS<br />
experiment and to easily detect faulty ROD<br />
modules introducing excessive dead-time.<br />
Figure 3. ATLAS ROD Busy Module<br />
TTCvi PRODUCTION STATUS<br />
The CERN EP/ESS Group will in future<br />
handle the out-sourcing of the fabrication of<br />
the TTCvi modules and then make them<br />
available to users. During the fourth quarter of<br />
2001 a new batch of modules will be produced,<br />
which should be available, once tested by the<br />
EP/ESS group, at the beginning of 2002. The<br />
EP/ESS group will also be responsible for the<br />
service (maintenance, support and spares).<br />
Teams who already have TTCvi MkII version<br />
modules will be able to subscribe to this<br />
service by paying a yearly fee.
REFERENCES<br />
[1] P. Gällnö:"Timing, Trigger and Control<br />
Distribution and Dead-Time Control in<br />
Atlas", LEB 2000 Workshop, Kraków,<br />
Poland, Sept. 2000<br />
http://lebwshop.home.cern.ch/lebwshop/LE<br />
B00_Book/daq/gallno.pdf<br />
[2] Ph. Farthouat; "TTC & Dead-time handling"<br />
2nd ATLAS ROD Workshop, University of<br />
Geneva, Oct. 2000<br />
http://dpnc.unige.ch/atlas/rod00/transp/P_Fa<br />
rthouat2.pdf<br />
[3] R. Spiwoks: "Dead-time Generation in the<br />
Level-1 Central Trigger Processor", ATLAS<br />
Internal Note<br />
[4] ATLAS Level-1 TDR Chapter 20<br />
http://atlasinfo.cern.ch/Atlas/GROUPS/DA<br />
QTRIG/TDR/tdr.html<br />
[5] P. Gällnö:"The ATLAS ROD Busy Module"<br />
1st ATLAS ROD Workshop, University of<br />
Geneva, Nov. 1998<br />
http://mclaren.home.cern.ch/mclaren/atlas/c<br />
onferences/ROD/programme.htm<br />
[6] P. Gällnö:"ATLAS ROD Busy Module -<br />
Technical description and users manual",<br />
EDMS Item no: CERN-0000003935<br />
Optically Based Charge Injection System for Ionization Detectors.<br />
H. Chen 1 , M. Citterio 2 , F. Lanni 1 , M.A.L. Leite 1,* , V. Radeka 1 , S. Rescia 1 and H. Takai 1<br />
(1) Brookhaven National Laboratory, Upton, NY − 11973, USA<br />
(*) leite@bnl.gov<br />
Abstract<br />
An optically coupled charge injection system for<br />
ionization-based radiation detectors has been developed<br />
which allows a test charge to be injected without creating<br />
ground loops. An ionization-like signal from an<br />
external source is brought into the detector through an<br />
optical fiber and injected into the electrodes by means of a<br />
photodiode. As an application example, crosstalk<br />
measurements on the readout electrodes of a liquid Argon<br />
electromagnetic calorimeter were performed.<br />
I. INTRODUCTION<br />
For performance tests of ionization-based radiation<br />
detectors it is desirable to have a system capable of<br />
injecting a charge of known value under conditions as<br />
close as possible to the operating environment, where charge<br />
is locally generated by ionization of the sensitive medium<br />
in the detector. One of the main problems with the<br />
conventional approach of direct injection through an<br />
electrical cable connected to the detector electrodes is the<br />
change of the detector's electrical characteristics [1]. In<br />
particular, the grounding configuration of the system can be<br />
completely modified by the new ground path introduced by<br />
the injection circuit. The use of optically coupled<br />
injection, in which a light-to-current converter is placed on<br />
the electrodes of the detector to generate the ionization<br />
signal, allows for full galvanic isolation between the<br />
detector and the test pulser (Fig. 1).<br />
Figure 1: Principle of the optically coupled injection of a test<br />
signal to an ionization detector. The capacitance added by the<br />
photodiode is negligible.<br />
An optical fiber carries a light pulse, modulated to the<br />
same waveshape of a physical signal, to a photodiode<br />
connected in parallel to the detector. The photodiode is<br />
biased using the same voltage distribution network for the<br />
detector bias, using an appropriate voltage, typically 30 V −<br />
50 V.<br />
(2) INFN, Via Celoria, 16 20100 Milano, Italy<br />
Since no additional electrical connection to the<br />
detector is necessary, the electrical environment of the<br />
detector, including grounding, is left undisturbed. This<br />
makes it possible to study, on a test bench, issues like<br />
crosstalk [1] or electromagnetic interference [2], where<br />
additional ground loops might taint the results of bench tests.<br />
A photodiode installed on the electrodes and biased by<br />
means of the high voltage system achieves the light to<br />
current conversion. It has a capacitance of only a few<br />
picofarads, small if compared with detector capacitances of<br />
the order of nanofarads. The photodiode should also have a<br />
fast time response and low dark current. Size is also an issue,<br />
as this device may need to fit in spaces of only a few<br />
millimeters. The light, generated by a laser diode stimulated<br />
to produce a suitable signal for the detector, is brought to the<br />
photodiode using a multimode optical fiber. Fig. 2 and Fig. 3<br />
show the characteristics of two commercial devices which<br />
can be used for this application. The laser diode is a VCSEL<br />
(Vertical Cavity Surface Emitting Laser) device, capable of<br />
generating up to 3 mW of optical power, with a peak<br />
wavelength of 860 nm, matching the photodiode maximum<br />
sensitivity of 850 nm. The use of a PIN photodiode allows<br />
the generation of signals with a rise time of 2 ns, adequate<br />
for this application.<br />
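For orientation, the charge injected by a triangular current pulse is Q = ½·I_peak·T. Neither the PIN responsivity nor the pulse width is quoted in the text, so the numbers below are assumptions: ~0.5 A/W is typical for a silicon PIN near 850 nm, and 400 ns is a representative drift-like duration:<br />

```python
def injected_charge(peak_power_w, responsivity_a_per_w, pulse_width_s):
    """Charge injected by a triangular optical pulse: Q = 0.5 * I_peak * T."""
    i_peak = responsivity_a_per_w * peak_power_w  # peak photocurrent, A
    return 0.5 * i_peak * pulse_width_s

# 3 mW VCSEL, assumed ~0.5 A/W responsivity, assumed 400 ns triangle
q = injected_charge(3.0e-3, 0.5, 400.0e-9)  # -> 3e-10 C, i.e. 300 pC
```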
Figure 2: PIN diode capacitance. Capacitance of a PIN diode<br />
(Panasonic PNZ334) as a function of the bias voltage. At the<br />
operating voltage (28 V) the capacitance is 3.6 pF.
Figure 3: VCSEL power. Power output as a function of the bias<br />
current for VCSEL laser diode (MITEL 1A444). Eight different<br />
devices (M1 − M8) have been measured.<br />
II. EXAMPLE OF APPLICATION: CROSSTALK STUDY<br />
As an example of this method, the ATLAS<br />
Electromagnetic Liquid Argon Calorimeter[3] test stand at<br />
BNL has been modified by adding a photodiode on a few<br />
calorimeter channels (Fig. 5). The optical signal is created<br />
using the VCSEL modulated to produce the same triangular<br />
shaped pulse generated by an ionization signal. Crosstalk<br />
studies were performed by injecting a signal in one channel<br />
at a time and recording the crosstalk pattern on neighboring<br />
channels. The output signal is processed using an identical<br />
segment of the readout chain of the ATLAS barrel<br />
calorimeter: a transimpedance preamplifier (BNL 25Ω<br />
IO824 [4]) and a CR−RC 2 shaper, using a time constant of<br />
15 ns (Fig. 4).<br />
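The shaping step can be sketched numerically. The code below is an illustration only: it assumes the standard CR−RC² impulse response h(t) = (t/τ²)(1 − t/2τ)e^(−t/τ) and an idealized triangular input, not the actual readout electronics:<br />

```python
import math

def cr_rc2_impulse(t, tau):
    """Impulse response of a CR-RC^2 shaper, h(t) = (t/tau^2)(1 - t/(2 tau)) e^(-t/tau)."""
    if t < 0:
        return 0.0
    return (t / tau ** 2) * (1.0 - t / (2.0 * tau)) * math.exp(-t / tau)

def shape(current, dt, tau):
    """Discrete convolution of an input current pulse with the shaper response."""
    n = len(current)
    h = [cr_rc2_impulse(i * dt, tau) * dt for i in range(n)]
    return [sum(current[j] * h[i - j] for j in range(i + 1)) for i in range(n)]

# Triangular, ionization-like input (400 ns base, unit peak), tau = 15 ns
dt, tau = 1e-9, 15e-9
tri = [max(0.0, 1.0 - i * dt / 400e-9) for i in range(800)]
out = shape(tri, dt, tau)
```

The shaped output peaks within a few tens of ns and then undershoots, as expected for an AC-coupled bipolar shaper.<br />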
Figure 4: Pulse generation and readout signal circuit. The optical<br />
signal is generated by LD1 (MITEL 1A444 VCSEL), biased by a<br />
current source (ILX Lightwave LDX−3630) and driven directly by<br />
an arbitrary signal generator (Analogic Model 2040) programmed<br />
to give out a triangular pulse. The light signal is transmitted by<br />
the optical fiber to PD1 (Panasonic PNZ 334). A current is then<br />
injected in one channel of the calorimeter test module<br />
(capacitance Cd), which is AC coupled by CAC directly to the<br />
readout chain.<br />
Figure 5: Experimental setup for optically coupled charge injection<br />
of the ATLAS barrel liquid Argon electromagnetic calorimeter. A<br />
PIN photodiode is connected (a) within the gap from the absorber<br />
(ground) to the high voltage layer. M1 through M4 and B1, B2 are<br />
readout channels. The signal ground is provided by two points<br />
(GND Spring 1 and 2). The optical fiber is run along the tip of the<br />
bending of an electrode (b), and brought to the side of the module,<br />
where it is coupled to a laser source. The photodiode is mounted on<br />
a carrier PC board (c), soldered to the electrode (anode) and<br />
contacting the absorber (cathode). An indium foil is used to assure<br />
a low resistance electrical connection.<br />
(b)<br />
(c)
Figure 6: Measured crosstalk. A signal is injected in one channel<br />
(Fig. 5a M1 or M2) and the crosstalk is observed in the closest and<br />
in the farthest neighboring channels with ground springs both to<br />
the left (GND spring 2) and to the right (GND spring 1) of the<br />
connector (continuous trace). Removal of the ground springs to the<br />
left of the connector increases the distant crosstalk (dashed traces).<br />
In this example, one of the ground connections between<br />
the electrodes and the absorbers on the left side of the signal<br />
connector (GND Spring 2 in Fig. 5a) was removed, and the<br />
change in the crosstalk pattern observed by injecting a signal<br />
in two different neighboring channels (M1 and M2). The results<br />
(Fig. 6) show that the nearest-neighbor crosstalk is<br />
unchanged. The distant crosstalk (channel M4) is indeed<br />
affected by the removal of the ground springs, but always<br />
remains below 0.2%. The optical injection allows differences<br />
in crosstalk signals of less than 1 mV to be reliably detected,<br />
and rules out any ground loop effect as the cause of<br />
these small differences.<br />
III. CONCLUSIONS<br />
We showed that an ionization−like signal can be injected<br />
by optical means, allowing measurements of small amplitude<br />
signals in large systems without disturbing the grounding<br />
configuration of the detector. Since the capacitance of the<br />
photodiode is much smaller than the detector capacitance,<br />
there is no change in the electrical characteristics of the<br />
system. Most importantly, the undisturbed ground<br />
configuration makes it possible to systematically study small-<br />
amplitude effects that otherwise would be masked by<br />
interference caused by differences in the ground path.<br />
This setup can be expanded for injecting signals in<br />
multiple channels simultaneously. This may be<br />
accomplished by direct current injection of the<br />
laser (Fig. 7) and direct coupling to the fiber without optical<br />
connectors, allowing injection and inter−calibration of<br />
several channels. With this method it is possible, for<br />
example, to reproduce the charge distribution of an EM<br />
shower over many cells of a calorimeter, thus allowing a<br />
more complete study of the crosstalk of the system.<br />
Figure 7: Fast voltage−to−current converter for direct laser<br />
modulation. This simple driver circuit allows many<br />
compact driver−laser−photodiode assemblies to be built, which<br />
can be inter−calibrated before installation on the electrodes. The optical<br />
connections could be simply made with optical glue, thus avoiding<br />
optical connectors and the variation in attenuation inherent in the<br />
mating of optical connectors.<br />
IV. ACKNOWLEDGEMENTS<br />
We would like to express our gratitude to Dr. T. Tsang,<br />
from the Instrumentation Division of the Brookhaven<br />
National Laboratory, for the many helpful discussions about<br />
the optical setup.<br />
V. REFERENCES<br />
[1] M. Citterio, M. Delmastro, M. Fanti. A study of the<br />
electrical properties and of the signal shapes in the ATLAS<br />
liquid Argon accordion calorimeter using a hardware model.<br />
ATLAS Larg Internal Note, May 2001.<br />
[2] B. Chase, M. Citterio, F. Lanni, D. Makowiecki, V.<br />
Radeka, S. Rescia, H. Takai, J. Bán, J. Parsons, W.<br />
Sippach. Characterization of the coherent noise,<br />
electromagnetic compatibility and electromagnetic<br />
interference of the ATLAS EM calorimeter Front End Board.<br />
5th Conference on Electronics for LHC Experiments,<br />
Snowmass, CO, USA, 20 − 24 Sep 1999. CERN−99−09 ;<br />
CERN−LHCC−99−033 − pp.222−226.<br />
[3] ATLAS Collaboration. ATLAS liquid argon<br />
calorimeter technical design report. CERN/LHCC/96−41,<br />
CERN (1996).<br />
[4] R. L. Chase and S. Rescia. A linear low power<br />
remote preamplifier for the ATLAS liquid argon EM<br />
calorimeter. IEEE Trans. Nucl. Sci., 44:1028, 1997.
An Emulator of Timing, Trigger and Control (TTC) System<br />
for the ATLAS End cap Muon Trigger Electronics<br />
Y. Ishida, C. Fukunaga, K. Tanaka and N. Takahata<br />
Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji, Tokyo, 192-0397 Japan<br />
Abstract<br />
We have developed a stand-alone TTC emulator system.<br />
This system simulates relevant TTC signals and their<br />
sequences needed for the ATLAS TGC electronics. Almost<br />
all functionalities are packed into an FPGA chip, which is<br />
mounted on a board of the same form as the TTCrx test board<br />
developed by the CERN EP/mic group. The signal pin<br />
allocation is also the same as that of the TTCrx test board.<br />
Hence, if the emulator board is mounted instead of the test<br />
board, TTC signals are generated and distributed consistently<br />
with this board, without any modification of the mother board<br />
electronics system.<br />
I. INTRODUCTION<br />
In general, a facility for TTC signal generation and<br />
distribution is indispensable for the development of<br />
electronics for an LHC experiment [1]. Two VME<br />
modules [2,3], a TTCrx chip [4], a fiber optic/electrical<br />
converter, and software are needed in order to implement the<br />
full functionality of the TTC signal distribution system.<br />
Electronics development for a particular sub-detector of<br />
an LHC experiment is carried out in collaboration among<br />
several university institutes and laboratories. Although each<br />
institute will need TTC signals to some extent, providing<br />
such a TTC signal distribution system at every institute is<br />
uneconomical and inefficient.<br />
The ATLAS Thin Gap Chamber (TGC) electronics development team consists of seven institutes from two countries [5]. Nearly all of the institutes need at least a restricted set of TTC signals for their electronics development.

The group has therefore developed a TTC emulator. The emulator reproduces the relevant TTC signals that the TGC electronics system will use; the timing sequences among the signals are also emulated. In the final TGC system, a TTCrx chip will be mounted as a mezzanine card.
As we have fitted the emulator system into a small circuit mountable on a daughter board of the same size, physical compatibility with the final TTCrx board is also satisfied. We discuss the structure of the emulator in Section II, show the emulated behavior of several signals in Section III, and summarize the results in Section IV.
II. STRUCTURE AND FUNCTIONALITY<br />
A. Hardware structure<br />
The main functionality of the emulator is wholly packed into a Xilinx SPARTAN-II FPGA (XC2S150). Besides the FPGA, a 40.08 MHz clock generator, a PROM for storage of the FPGA firmware, a variable delay, several jumpers, a DIP switch, Lemo connectors for external signal inputs and a JTAG connector for FPGA configuration are mounted on the PC board. The size and the footprint of this PC board are the same as those of the TTCrx carrier test board (TTCrx test board) developed by the CERN EP/MIC group [4]. The top face of the emulator board is shown in Fig. 1.
Figure 1: TTC emulator board<br />
We have used about 61% of the 150 000 gates available in the FPGA. In order to extend the emulation behavior beyond the firmware contents, some signals can be input through the surface-mounted Lemo connectors; one can thus operate one's own electronics with specially adjusted TTC signals by injecting them through these connectors. The jumpers and the DIP switch are used to switch the signal sources (external or internal) and the emulation mode.
The 40.08 MHz quartz oscillator supplies the default clock. The variable delay is used to emulate the signal called Clock40Des1: the skew of the clock signal is adjusted with this variable delay.
B. Emulation Firmware<br />
The emulation status is summarized in Table 1. Since the pin assignment of the emulator board is the same as that of the TTCrx test board, the emulation status is listed pin by pin in the same way as for the TTCrx board. Even the pins that are not emulated are connected to the FPGA, so these pins can be utilized in future firmware upgrades. In this subsection we discuss how the firmware emulates and handles the signals, and give the emulation recipes for some of them.
Table 1: Emulation status for the TTCrx board signals.
The emulated signals are indicated with an "X" in the Emulation columns.
Connector J1 Connector J2<br />
Pin# Name Emulation Pin# Name Emulation<br />
1 Clock40 - 1 BrcstStr2 X<br />
2 Clock40Des1 X 2 ClockL1A -<br />
3 Brcst X 3:4 Brcst X<br />
4:6 Brcst - 5 EvCntRes X<br />
7 Clock40Des2 - 6 L1Accept X<br />
8 Brcststr1 X 7 EvCntLStr X<br />
9 DbErrstr - 8 EvCntHStr X<br />
10 SinErrStr - 9 BcntRes X<br />
11:12 Subaddr X 10 GND X<br />
13:18 Subaddr X 11:22 Bcnt X<br />
19:22 DQ X 23:26 JTAG -<br />
23 DoutStr X 27 I2C -<br />
24 GND X 28 JTAG -<br />
25:32 Dout X 29 BCntStr X<br />
33 Reset_b X 30 Serial_B -<br />
34 TTCReady X 31:34 GND X<br />
35:50 GND X 35:38 PIN Vcc -<br />
39 N.C. X<br />
40 I2C -<br />
41:42 GND X<br />
43:46 TTCrx Vdd X<br />
47:50 GND X<br />
The bunch-crossing signal (BX), although not output from the emulator, is simulated in the firmware. The emulator generates the timing structure of BX exactly as the actual LHC will supply it [6]. The Level-1 Accept (L1A) signal must be made coincident with BX even when it is supplied externally.
L1A is generated in the firmware or can be input externally through a Lemo connector; one of the jumpers mounted on the board selects the trigger source. For internal generation, seven modes are provided: random modes with average frequencies of 100 kHz, 10 kHz, 1 kHz, 100 Hz and 1 Hz, and regular modes of 75 kHz and 1 Hz. In the random modes, L1A is generated with Poisson-distributed time intervals at the given average rate. The bunch crossing ID (BCID) is output together with L1A, and one (two) clocks after L1A the low (high) part of the event identification number EVID is output. Furthermore, the Trigger Type and the three bytes of EVID are output on Dout on successive clocks, 4.4 µs after the L1A generation. SubAddr is used to distinguish which data fragment is on the Dout bus.
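As a rough illustration of the random mode, the sketch below (Python; the function name, seed and trigger count are our own choices, not part of the emulator firmware) draws exponentially distributed gaps at a 100 kHz mean rate and snaps each L1A onto the 40.08 MHz bunch-crossing grid, since L1A must coincide with BX:

```python
import random

BX_NS = 25.0  # one bunch-crossing period at 40.08 MHz, approximately 25 ns

def random_l1a_times(mean_rate_hz, n, seed=0):
    """Generate n L1A times (in ns) with exponentially distributed
    spacings (Poisson arrivals), snapped to the BX clock grid."""
    rng = random.Random(seed)
    mean_gap_ns = 1e9 / mean_rate_hz
    t, out = 0.0, []
    for _ in range(n):
        t += rng.expovariate(1.0 / mean_gap_ns)
        out.append(round(t / BX_NS) * BX_NS)  # coincide with a bunch crossing
    return out

times = random_l1a_times(100e3, 10000)
gaps = [b - a for a, b in zip(times, times[1:])]
mean_gap_us = sum(gaps) / len(gaps) / 1000.0
```

With 10 000 triggers the measured mean gap comes out close to the nominal 10 µs.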
Although Clock40Des1 is one of three 40.08 MHz LHC clock signals on the actual TTCrx, it is the only one that the emulator outputs. The clock deskew is emulated through the on-board variable delay, with either the internal or the external clock signal. The variable delay can adjust the timing over a maximum of 20 ns in 40 steps. As we foresee the need for a variable delay of at most 25 ns, we will implement such a delay in the future. The clock signal can be generated by the 40.08 MHz oscillator installed on the board or input externally through a Lemo connector; the jumper connection selects the internal clock or the external input. A clock of a frequency other than 40.08 MHz can be supplied externally, but in this case the BX timing signal scales with the input frequency.
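With the 40-step, 20 ns delay chip the step size is 0.5 ns; a helper like the following (our own illustration, not the actual firmware interface) maps a requested skew to the nearest delay setting:

```python
DELAY_STEPS = 40
MAX_DELAY_NS = 20.0
STEP_NS = MAX_DELAY_NS / DELAY_STEPS  # 0.5 ns per step

def deskew_setting(requested_ns):
    """Return the delay-chip step count closest to the requested skew,
    clamped to the available 0..20 ns range."""
    steps = round(requested_ns / STEP_NS)
    return max(0, min(DELAY_STEPS, steps))
```

For example, a requested skew of 7.3 ns maps to 15 steps (7.5 ns); requests outside the range saturate at 0 or 40 steps.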
Since the emulator does not emulate Clock40 and Clock40Des2, the Brcst signals are synchronized only with Clock40Des1, whereas the actual TTCrx can optionally synchronize Brcst with Clock40Des2.

Among the Brcst signals, the emulator simulates five broadcast commands. All five signals are input through the Lemo connectors; one Lemo input, called RST, is shared between the two broadcast reset commands. We have two kinds of reset signal: one for the whole electronics system except the DCS (Detector Control System), and the other for the DCS reset. Although we should prepare two different Lemo connectors for these two reset signals, there is no room to install one more connector on the board. It is therefore foreseen that the DCS reset signal can hardly be emulated with this board.
III. PERFORMANCE
Figure 2 shows L1A signals randomly generated by the emulator at a frequency of 100 kHz. In the scope image of this figure, one division of the horizontal axis corresponds to 10 µs; the time intervals between the signals are Poisson distributed with an average interval of 10 µs.
Figure 2: Internal L1A signal generation (100KHz random mode)<br />
Figure 3 shows an externally supplied L1A with an original pulse width of 350 ns, together with the output pulse produced by the emulator. If an externally supplied L1A pulse is longer or shorter than 25 ns, the emulator adjusts its width to 25 ns. The latency in the emulator from the input to the output pulse for this width modification is predicted to be 53 ns by the FPGA simulation. In Fig. 3, the wider pulse in the lower part is the input, the narrower one in the upper part is the output, and the horizontal division is 40 ns. One finds that the width of the output is 25 ns and that the interval between the leading edges of input and output is measured as 53 ns, although the actual value has an uncertainty of a few tens of ns due to the measurement setup. The emulator thus works as the simulation indicated.
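The width-normalization behavior can be modeled as follows (an illustrative sketch; the function and its interface are ours, not the firmware's): every external leading edge yields one output pulse of fixed 25 ns width, delayed by the fixed 53 ns latency.

```python
BX_NS = 25       # output pulse width: one bunch-crossing period
LATENCY_NS = 53  # input-to-output latency predicted by the FPGA simulation

def reshape_l1a(edges_ns):
    """Given leading-edge times of external L1A pulses (ns), return
    (start, width) of the reshaped output pulses: fixed 25 ns width,
    delayed by the fixed 53 ns pipeline latency. The input pulse width
    (e.g. 350 ns) does not matter, only the leading edge does."""
    return [(t + LATENCY_NS, BX_NS) for t in edges_ns]

pulses = reshape_l1a([0, 1000])
```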
Figure 3: External L1A signal Input (Lower) and<br />
Output (Upper) through the emulator.<br />
After the L1A signal, BCID and the low and high parts of EVID are loaded on the BCnt lines in turn with the clock. BCntStr is generated when BCID is loaded on BCnt; EvCntLStr and EvCntHStr are generated when the low and high parts of EVID are loaded on BCnt, respectively. The sequence of the signal generation has been observed with a logic analyzer, and its output is shown in Fig. 4, where a typical timing sequence of the relevant signals is shown for three L1A generations. From this figure we can confirm that the emulation of BCID and EVID data loading on BCnt, with three-clock intervals, works correctly.
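Assuming the 24-bit EVID is split into two 12-bit halves to fit the twelve BCnt lines, the strobe/data sequence can be sketched as follows (an illustration; the helper name is ours):

```python
def bcnt_sequence(bcid, evid):
    """Frames loaded on the BCnt lines after an L1A: BCID with BCntStr,
    then the low and high halves of the 24-bit EVID with EvCntLStr and
    EvCntHStr respectively (12 bits per frame, one per strobe)."""
    return [
        ("BCntStr",   bcid & 0xFFF),
        ("EvCntLStr", evid & 0xFFF),          # low 12 bits
        ("EvCntHStr", (evid >> 12) & 0xFFF),  # high 12 bits
    ]

frames = bcnt_sequence(0x4A3, 0xABCDE)
```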
Figure 4: Data loading sequence on Bcnt<br />
There is another 24-bit event/orbit counter, which is implemented in the TTCvi module. Together with the trigger type parameter (8 bits), which is received from the Central Trigger Processor and stored in the module, these data are broadcast by the module through the B-channel of the TTC network to the individual TTCrx chips approximately 4.4 µs after the L1A generation. The emulator also emulates this sequence, because it will be used for trigger and timing generation during the development of the ROD module.
Figure 5 shows the Trigger Type and event/orbit counter output sequence observed with a logic analyzer. The contents are loaded on the Dout bus byte by byte, together with DoutStr and SubAddr. If SubAddr is 00, the Trigger Type is loaded on Dout; the next three values of these two SubAddr bits indicate the loading on Dout of the three bytes of the event/orbit counter, from the most significant to the least significant byte.
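The byte sequence on Dout described above can be sketched as follows (illustrative Python; the helper and argument names are ours):

```python
def dout_sequence(trigger_type, event_orbit_count):
    """Byte sequence broadcast on Dout after L1A: SubAddr 00 carries the
    8-bit Trigger Type, SubAddr 01..11 the three bytes of the 24-bit
    event/orbit counter, most significant byte first."""
    frames = [(0b00, trigger_type & 0xFF)]
    for shift, sub in ((16, 0b01), (8, 0b10), (0, 0b11)):
        frames.append((sub, (event_orbit_count >> shift) & 0xFF))
    return frames

seq = dout_sequence(0x5A, 0x123456)
```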
In principle, in the original system the event counter loaded on BCnt and the event/orbit counter loaded on Dout are different counters. In this emulator the same counter is used for both operations; hence the content of the event/orbit counter is also reset by the EvCntRes signal.
Figure 5: Data loading sequence on Dout<br />
IV. SUMMARY<br />
We have built a TTC emulator and implemented it on a board with the same dimensions and the same footprint as the TTCrx test board. To achieve this, most of the emulation logic is installed in an FPGA. With this emulator, all the signals needed for the operation of the TGC electronics are generated; the successfully emulated signals are listed in Table 1.
Consequently, we do not need a complete TTC generation system at every development site. As TTCrx boards are always used as mezzanine boards in the current TGC electronics wherever TTC signals are required, we can obtain all the necessary TTC signals by mounting the emulator in place of the TTCrx board. With this emulator we save the money, time and manpower needed to prepare and maintain the whole TTC system. Although the emulator has been customized for the TGC electronics development, it can easily be adopted by any other electronics system that uses the TTCrx test board as a daughter board.
As an all-in-one emulator system like the one discussed here is easy to handle and makes complicated signal sequences easy to realize, it will be useful not only at the development stage but also at the production stage in the various workshops.
V. ACKNOWLEDGEMENT
This work was done within the framework of the Japanese contribution to the ATLAS experimental project. We would like to acknowledge all of the Japanese ATLAS TGC group as well as the KEK electronics group for their support and discussions throughout the work. We would like to express our gratitude to Prof. T. Kondo of KEK for his support and encouragement.
VI. REFERENCES
[1] B.G. Taylor: “RD12 Timing, Trigger and Control (TTC) Systems for LHC Detectors”, web page, http://www.cern.ch/TTC/intro.html
[2] Ph. Farthouat and P. Gallno: “TTC-VMEbus Interface (TTCvi-MkII)”, http://www.cern.ch/TTC/TTCviSpec.pdf
[3] P. Gallno: “TTCvx Technical Description and Users Manual”, http://www.cern.ch/TTC/TTCvxManual1a.pdf
[4] J. Christiansen et al.: “TTCrx Reference Manual ver. 3.2”, http://www.cern.ch/TTC/TTCrx_Manual3.2.pdf
[5] K. Hasuko et al.: “First-Level Endcap Muon Trigger System for ATLAS”, Proceedings of LEB2000, Cracow, Poland, Sept. 2000, pp. 328.
[6] P. Collier: “SPS Cycles for LHC Injection”, http://sl.web.cern.ch/SL/sli/Cycles.htm
First Commissioning Results for the HERA-B First Level Trigger
M. Bruinsma, J. Flammer, H. Fleckenstein, J. Gläss, R. Männer, M. Nörenberg, R. Pernack, D. Ressing, B. Schwingenheuer, A. Somov, U. Uwer and A. Wurz
NIKHEF, Amsterdam, The Netherlands
DESY, Hamburg, Germany
Computer Science V, University of Mannheim, Mannheim, Germany
Faculty of Physics, University of Rostock, Rostock, Germany
Max-Planck-Institut für Kernphysik, Heidelberg, Germany
Physikalisches Institut, University of Heidelberg, Heidelberg, Germany
Abstract
In order to filter rare B decays out of the immense data stream produced by the 920 GeV proton-nucleon interactions at 10 MHz in HERA-B, we developed a very fast track finder. Being inspired by Kalman filtering, it searches possible particle tracks through up to seven layers of tracking chambers. The trigger decision is based on the momenta and masses of track pairs. The implementation is a parallel and pipelined system of custom-made hardware processors. This paper describes its first commissioning during the past year. As a result of the commissioning run, the functionality could be verified and the performance was estimated.
I. Requirements
HERA-B has been designed with the main goal of measuring CP violation in the B system [1]. The B mesons are produced in reactions of 920 GeV protons with a fixed target, along with a copious background. In order to obtain a sufficient number of Bs, several interactions take place on average at a bunch crossing rate of 10 MHz. Since interesting events are rare (of order Hz), a selective trigger system is a key element of HERA-B. The rate of events transmitted from the front end to the readout boards is limited to an average of 50 kHz; hence the First Level Trigger (FLT) has to reduce the rate by three orders of magnitude. The length of the front-end pipelines is fixed, which allows for a limited decision time; the rest of the pipeline is needed for the control systems. In order to limit the amount of data, the occupancies of the detector channels are allowed to reach only a limited level at the maximal interaction rate, which is the major challenge for the trigger and the other reconstruction systems.

The main event selection criteria used in the FLT are:
• leptons with large transverse momentum pT,
• pairs of leptons which form a large invariant mass.
In order to minimize the deadtime of the experiment, the setup of the system cannot exceed a few minutes. Possible malfunctions must be detected during setup, and problems must be traceable quickly. An overview of the electronics involved in HERA-B is given at this conference [2]; a more extensive overview of the First Level Trigger system can be obtained from [3].
II. Trigger Strategy
A fast and efficient way to implement the pattern recognition specified above is a Kalman filter. It allows iterative processing, and hence the data required for the processing can be of local nature. Our specific implementation makes a few simplifications:
• no drift time, but only hit information is used;
• multiple scattering estimates for the track extrapolation are used only in the muon system;
• the tracks established during the iterative pattern recognition are not refit.
The algorithm is sketched in Fig. 1 and proceeds as follows:
1. An initial search region, a so-called Region Of Interest (ROI), is formed by a processor using particle ID information from the muon system [5], the electromagnetic calorimeter [6] or the high-pT pad chambers.
2. The search region is extrapolated to the next upstream chamber.
3. The ROI is projected simultaneously onto all stereo views of the chamber and the appropriate hits are extracted.
4. The hit patterns are overlaid and coincidences are found.
5. The coordinates of the found coincidences, together with the so far best known point on the track, are used to establish an updated ROI.
6. The updated ROI is sent to the next upstream chamber and the process is iterated from step 3 until the last chamber is reached.
7. The kinematics of the found tracks is calculated assuming an origin at the main interaction point, and track-level cuts are applied.
8. All successful tracks are stored and all possible track pairs are processed in order to find the number of isolated tracks and the invariant masses of the pairs.
9. If a sufficient number of isolated tracks of a certain class, or a track pair with the proper ID, charge and mass, is found, a positive trigger decision is forwarded to the HERA-B Fast Control System, which is responsible for coordinating the readout of the experiment.
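A toy, purely illustrative rendering of this iterative ROI update in Python (all names and numbers are invented; the real system runs pipelined in hardware and works on several stereo views per chamber):

```python
def find_track(roi_center, roi_half_width, chambers):
    """Kalman-inspired iterative track following (illustrative sketch).
    chambers: list of hit-coordinate lists, ordered as visited.
    The ROI is extrapolated chamber by chamber; hits inside the ROI
    update the best known point, and the search window shrinks as the
    track estimate improves."""
    best = roi_center
    for hits in chambers:
        in_roi = [h for h in hits if abs(h - best) <= roi_half_width]
        if not in_roi:
            return None  # track candidate lost
        best = min(in_roi, key=lambda h: abs(h - best))  # closest coincidence
        roi_half_width *= 0.7  # narrow the window for the next chamber
    return best

track = find_track(0.0, 5.0, [[0.4, 9.0], [-3.0, 0.6], [0.5, 4.0]])
```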
Figure 1: Schematic of the track finding algorithm

III. Track Finder Implementation
A. Hardware
Dedicated custom hardware processors, based on massive use of CPLDs, ASICs and look-up tables, are the core of the system. The processors operate fully pipelined and in parallel. They are interconnected with fast links [4] in order to communicate the process data, and the detector data are received over many optical fibers. All processors contain a CPU as controller and are accessible by way of VME. All hardware is housed in 9U VME crates, which are controlled by PowerPC computers running LynxOS. The setup and control processes are executed on PCs running Linux.

B. Control software architecture
The Unix-based processes communicate by way of a message system based upon UDP/IP. The same message system is used to reach the controllers on the VME processors, there based on a mailbox protocol over VME. On top of the communication layer a standard set of control and monitor methods is implemented, allowing high-level access to the processors. These methods are used by the run setup and control processes as well as by dedicated test procedures.
C. System Verification
Initially every system component, typically a board, is completely verified in a stand-alone mode prior to installation. For example, each of the track finding processors undergoes a week-long test which verifies every single off-chip signal in every process cycle in a dynamic fashion. This feature clearly had to be carefully designed into the boards.

The most important goal of the system setup is to verify the entire system functionality. This is achieved by comparing every intermediate processing result at each pipeline step of the hardware processors with the prediction of a simulation. The most important part is the operation of all processors in a synchronous single-step mode, at a sufficient rate to map out the required parameter space of all tracks. For speed reasons a simple serial operation of all processors is excluded. We implemented a system where the local controllers operate their processors in parallel to the others; inter-processor synchronization is ensured by dedicated messages locking the system. In this way we are able to process events at a rate allowing the emulation of a real-time second in about one day. Detector data are inserted on the fly; the data can originate from previously recorded detector data or from Monte Carlo generators.

In this way we were able to verify the functionality of the system except for the data input links, since we are not able to write event data back to the front end, a capability which we are clearly missing. The data inputs can only be checked by independent, more static test procedures.
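The step-by-step comparison of the hardware against the simulation can be pictured with a toy helper (illustrative only; the real system compares every off-chip signal per pipeline step):

```python
def first_mismatch(hardware_trace, simulated_trace):
    """Compare every intermediate result at each pipeline step with the
    simulation's prediction; return (step, hw, sim) for the first
    disagreement, or None if the traces match."""
    for step, (hw, sim) in enumerate(zip(hardware_trace, simulated_trace)):
        if hw != sim:
            return (step, hw, sim)
    return None
```

Such a check localizes a fault to the exact pipeline step where hardware and simulation diverge.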
IV. Results
As a result of the commissioning run we could verify the functionality of the FLT as a track finder.

Figure 2: Hit-track residuals (all x-residuals in superlayer PC2, in cm)
Figure 2 shows the distribution of the residuals between tracks found by the FLT and hits in a superlayer of the tracker which is not used in the FLT. A clear peak with the expected width is observed. The same plot can be produced using the simulation of the trigger algorithm on the recorded data. The non-flat background close to the peak can be explained by correlated hits in the tracking device. The flat underlying background is estimated using tracks from one event and hits from another event; the background level is consistent with the observed ghost rate.
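The event-mixing estimate of the flat combinatorial background can be illustrated with a toy Monte Carlo (all numbers and names here are invented): pairing tracks from one event with hits from a different event destroys any true correlation, leaving only the combinatorial component.

```python
import random

def mixed_event_residuals(events, n_pairs, window, seed=1):
    """Estimate the combinatorial background of track-hit residuals by
    pairing tracks from one event with hits from a *different* event."""
    rng = random.Random(seed)
    residuals = []
    while len(residuals) < n_pairs:
        (tracks, _), (_, hits) = rng.sample(events, 2)  # two distinct events
        t, h = rng.choice(tracks), rng.choice(hits)
        if abs(h - t) < window:
            residuals.append(h - t)
    return residuals

# toy events: (track x-positions, hit x-positions) in cm
events = [([random.Random(i).uniform(-10, 10)],
           [random.Random(i + 99).uniform(-10, 10)]) for i in range(50)]
res = mixed_event_residuals(events, 200, 10.0)
```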
Figure 3: Track efficiency per superlayer (●: realistic MC, ▲: data; stages TDU, TPU, PC 1, PC 4, TC 1, TC 2, MU 1, MU 3, MU 4)
If one follows the tracking in the simulation step by step, one can estimate the efficiency of the algorithm for each update. As a reference sample, tracks which are decay products of identified J/ψ's are used. Figure 3 represents this efficiency both for recorded data and for a simulation based on a realistic detector Monte Carlo. The two plots clearly agree. A decomposition of the efficiencies attributes about half of the inefficiency to detector cell inefficiencies and the other half to shortcomings in the algorithm of the FLT; the latter are currently being improved by a more careful adaptation of the look-up table coding to the existing detector geometry.
V. Conclusion
We have designed, constructed and commissioned a deadtime-less level-one track finder which concludes within a few microseconds. A key feature of all the hardware is built-in self-testing capabilities which allow for in-situ system verification. An intense commissioning run of almost one year was still required to set up all the control and monitoring structures such that the system functionality could be verified, clearly to a large extent complicated by the ongoing commissioning of the tracking devices themselves. This is the first time a trigger system of a complexity comparable to the systems designed for the LHC experiments has been operated.
References
[1] E. Hartouni et al.: HERA-B Design Report, DESY-PRC 95/01.
[2] B. Schwingenheuer: these proceedings.
[3] T. Fuljahn et al.: Concept of the First Level Trigger for HERA-B, IEEE Trans. Nucl. Sci. NS-45 (1998).
[4] J. Gläss et al.: Terabit per Second Data Transfer for the HERA-B First Level Trigger, Proceedings of the IEEE Conference on Real-Time Systems, Valencia, Spain.
[5] M. Böcker et al.: The Muon Pretrigger System of the HERA-B Experiment, IEEE Trans. Nucl. Sci.
[6] The HERA-B Electron Pretrigger System, Nucl. Instr. Meth. A.
HIGH-SPEED MULTICHANNEL ICs FOR FRONT-END ELECTRONIC<br />
SYSTEMS<br />
A.Goldsher*, Yu.Dokuchaev*, E.Atkin**, Yu.Volkov**<br />
*-- State unitary enterprise “Science and Technology Enterprise “Pulsar”, Russia, 105187,<br />
Moscow, Okruzhnoy proezd, 27.<br />
** -- Moscow State Engineering Physics Institute (Technical University), Russia, 115409,<br />
Moscow, Kashirskoe shosse, 31.<br />
Abstract
The basic set of high-speed multichannel analog ICs for front-end electronic systems, designed and put into production in Russia, is considered. It is implemented as a number of application-specific ICs (ASICs) and of ICs based on an application-specific semicustom array (ASSA) containing eight channels of analog signal collection and preliminary processing.
By their electrical parameters the created ICs are on a par with their foreign functional analogs.
The prospects of the further development of analog front-end ICs in Russia are expounded.
Introduction<br />
Contemporary front-end electronic systems, intended particularly for research in high-energy and elementary-particle physics, show a demand for an increase in the number of data collection and processing channels, nowadays reaching some hundreds of thousands and expected to reach several millions within the next 5 to 7 years.
In its turn, that demands a radical change in the approach to the creation of front-end electronics, which should simultaneously provide high speed, wide dynamic range, relatively low power consumption and high sensitivity, as well as increased radiation hardness, since the electronics is placed either directly on the radiation detectors or in their vicinity. Besides that, factors such as dimensions, cost, and the duration of design and manufacture of printed circuit units acquire especial importance.
The multichannel equipment of physical experiments, of environment monitoring, of nuclear reactor safety and of fissionable material supervision is united by common algorithms of analog signal processing. That allowed the specialists of certain Russian enterprises to create, in a short time, a basic set of analog ICs for front-end electronic systems. It was implemented within the framework of a project financed by the International Science and Technology Center, in the form of both application-specific ICs (ASICs) and an application-specific semicustom array (ASSA).
The generalized structural diagram of a front-end electronic channel, considered in [1], is shown in the figure.

Fig.: Generalized structural diagram of a channel of front-end electronics (low-noise preamplifier, linear shaper, baseline restorer, voltage stabilizer, high-speed comparator, output stages/drivers)
Types of ICs and basic peculiarities of their design and technology
At present the basic set of ICs includes:
• an ASIC containing a high-speed comparator and a D-trigger, intended for
application in fast-timing (fractions of a nanosecond) circuits – A1181;
• a 4-channel ASIC amplifier-shaper with differential input and output, intended for amplitude data processing (amplification, filtration) – A1182;
• a 4-channel ASIC differential low-power comparator of the nanosecond range, intended for analog signal discrimination – A1183A;
• a 4-channel ASIC differential comparator of the nanosecond range, having an output stage with open collector and implementing a four-input OR function – A1183B;
• the LSIC A1184, implemented with the analog ASSA.
The use of the ASSA allows, by changing only the plating layers, the creation of ASICs whose circuitry takes into consideration the peculiarities of specific physical experiments, within exceptionally short times and at low cost.
The circuitry of the ICs and the contents of the ASSA functional modules have been elaborated by the specialists of the Electronics Department of the Moscow State Engineering Physics Institute (Technical University), namely E. Atkin, Yu. Volkov, I. Ilyushchenko, S. Kondratenko, Yu. Mishin and A. Pleshko.
The layouts of all ICs are protected by copyright documents, which witnesses their novelty and originality.
The set of active devices of the ICs contains NPN transistor structures, including ones with Schottky diodes, and vertical PNP transistors with collectors in the substrate [2]. The set of passive devices contains high- and low-resistance resistors and capacitors based on MOS structures. The range of working currents of the standard (library) elements extends from fractions of a mA to 3…5 mA, and the values of the resistors from tens of Ohms to tens of kOhms.
The employment of such an element basis simultaneously provides high electrical parameters of the ICs, resistance to external disturbing factors, ease of IC manufacture, high yield and, hence, low cost.

The technological process of IC manufacture is based on planar-epitaxial technology with element insulation by a reverse-biased p-n junction. The insulating boron diffusion is conducted by using highly doped p+ layers, with retention of the boron-silicate glass before the second stage of diffusion. Compared with the usually applied modes, that allowed the stray capacitances of the insulating layers to be reduced by a factor of 1.5 [3].
One should also regard the following technological peculiarities of the ICs as basic:
• small depths of p-n junction embedding, equalling fractions of a µm (base width hB ~ 0.1 µm), which ensures (at appropriate element layouts and manufacturing processes) a unity-gain frequency of about 7 GHz for the n-p-n transistor structures;
• the use of antimony, boron and arsenic ion doping, which ensures high reproducibility of the electrophysical parameters of the layers over a wide range of dopant concentrations;
• small sizes of the elements that influence the IC speed to the greatest extent, particularly the use of the so-called full emitter; the minimal emitter size is restricted, in its turn, by the requirements on the collector bulk resistance rC of the transistor structures;
• the use of molybdenum as the barrier metal in the Schottky diodes; molybdenum, compared with aluminum or platinum silicide, has a considerably lower potential barrier φB, which allowed the specified direct voltage drops UD and a low value of the stray capacitances C to be provided simultaneously;
• the use of double-level plating based on aluminum doped with silicon (about 1%); as the interlayer dielectric, a silicon dioxide film partially doped with phosphorus was used.
Parameters of ICs<br />
IC A1181<br />
Functional destination: the IC is intended for use<br />
in arrangements of nanosecond pulse processing.<br />
The basic parameters of functional modules:<br />
Parameters of the comparator<br />
Name of parameter, literal designation, unit of measurement — Xmin / Xmax<br />
Input offset voltage, UOF, mV: -3.0 / +3.0<br />
Input bias current, IBI, uA: 25.0<br />
Difference of input bias currents, IDBI, uA: 5.0<br />
Range of input voltages, ΔUIN, V: -2.5 / +2.5<br />
Current consumption from first supply source, ICON.1, mA: 21.0<br />
Current consumption from second supply source, ICON.2, mA: 30.0<br />
At ECL output:<br />
Output voltage high, U1OUT, V: -1.00 / -0.73<br />
Output voltage low, U0OUT, V: -1.60 / -1.00<br />
Switching time, tSW, ns: 2.0<br />
Transition time at switch-on (switch-off), t1,0 (t0,1), ns: 2.0 (2.0)<br />
At TTL output:<br />
Output voltage high, U1OUT, V: 2.4 / 4.0<br />
Output voltage low, U0OUT, V: 0.2 / 0.6<br />
Output current, IOUT, mA: 4 / 10<br />
Switching time, tSW, ns: 15<br />
Transition time at switch-on (switch-off), t1,0 (t0,1), ns: 15 (15)<br />
Parameters of the ECL D-trigger<br />
Name of parameter, literal designation, unit of measurement — Xmin / Xmax<br />
Input current low at the D, C, R inputs, I0IN, uA: 0.5<br />
Input current high at the D, C, R inputs, I1IN, uA: 0.8<br />
Input voltage low, U0IN, V: -2.0 / -1.5<br />
Input voltage high, U1IN, V: -1.1 / -0.73<br />
Output voltage high, U1OUT, V: -1.0 / -0.73<br />
Output voltage low, U0OUT, V: -1.95 / -1.60<br />
Output current from output Q, IQOUT, mA: 50<br />
Propagation delay of signal at inputs C and R, tPR.D, ns: 0.8 / 1.5<br />
Transition time at switch-on (switch-off), t1,0 (t0,1), ns: 0.6 (0.6) / 1.3 (1.3)<br />
Current consumption, ICON, mA: 60<br />
Parameters of the ECL-NIM converter<br />
Name of parameter, literal designation, unit of measurement — Xmin / Xmax<br />
Input current low (in statics), I0IN, uA: 10<br />
Input current high (in statics), I1IN, uA: 7 / 10<br />
Propagation delay of signal, tPR.D, ns: 0.8<br />
Transition time at switch-on (switch-off), t1,0 (t0,1), ns: 2.0 (2.0)<br />
The IC has two supply sources, with voltages<br />
USS1 = 3V and USS2 = -3V. The permissible spread of the<br />
supply voltages is ±5%.<br />
The IC is housed in the H06-24-2b case. It is<br />
also available in a caseless version (modification 4).<br />
IC A1182<br />
Functional purpose: the IC is intended to<br />
amplify and shape signals coming from high-resistance<br />
radiation detectors.<br />
Basic parameters of the four-channel amplifier-shaper IC (typical values)<br />
Transimpedance, RT, kOhm: 10<br />
Input impedance, RIN, Ohm: 160<br />
Output rise-time, tR, ns: 5<br />
Output pulse duration at base level, tP, ns: 40<br />
Output signal full swing, U, V: 0.8<br />
Range of input currents, IIN, uA: 1÷100<br />
Crosstalk between neighboring channels, %: ≤1<br />
Power consumption per channel, PCON, mW: 15<br />
The IC has two supply sources, with voltages<br />
USS1 = 3V and USS2 = -3V. The permissible spread of the<br />
supply voltages is ±5%.<br />
The IC is housed in the H09-28-1b case. It is<br />
also available in a caseless version (modification 4).<br />
IC A1183<br />
Functional purpose: the IC is intended for<br />
analog-to-digital signal conversion.<br />
Basic parameters of the four-channel comparator IC<br />
Input currents, IIN, uA: 8<br />
Input offset voltage, UOF, mV: 5<br />
Comparator threshold, UTH, mV: 10÷180*<br />
Dynamic range, DR, V: 3.0<br />
Rise-time, tR, ns: 3<br />
Fall-time, tF, ns: 3<br />
Propagation delay of signal, tPR.D, ns: 6<br />
Maximal common-mode voltage, UCM, V: ±1.1<br />
Power consumption per channel, PCON, mW: 18<br />
Output signal logic: GTL** (TTL)<br />
Remarks:<br />
* - the threshold voltage is given for the case of<br />
using the comparator together with the<br />
amplifier-shaper A1182;<br />
** - GTL is a low-level CMOS logic (logic one:<br />
+1.2V, logic zero: +0.4V).<br />
The IC has two supply sources, with voltages<br />
USS1 = 3V and USS2 = -3V. The permissible spread of the<br />
supply voltages is ±5%.<br />
The IC is housed in the H06-24-2b case. It is<br />
also available in a caseless version (modification 4).<br />
LSIC A1184<br />
The LSIC contains 8 channels for collection and<br />
preliminary processing of the analog signals from<br />
tracking detectors. Each channel contains a low-noise<br />
preamplifier, a linear shaper, a comparator and an output<br />
driver. The LSIC also contains one OR-circuit,<br />
common to the 8 channels. It is implemented on<br />
the basis of an ASSA containing 7000 elements,<br />
including approximately 1400 n-p-n transistor<br />
structures with a unity-gain frequency of 7GHz [3].<br />
The printed circuit units (PCUs) created in<br />
the Research Institute of Pulse Technology<br />
(Moscow) on the basis of the ASICs and LSIC<br />
described above are, in their electrical characteristics,<br />
on a par with the PCUs implemented with the<br />
ASD/BLR LSIC designed at the University of<br />
Pennsylvania (USA) and widely used by research<br />
centers worldwide [4].<br />
As noted earlier, the active devices of the<br />
ICs A1181…A1184 are n-p-n transistor<br />
structures. It must be acknowledged that the<br />
absence of complementary high-frequency<br />
transistor structures entails more complex<br />
circuitry, degrades the performance of the ICs, and in<br />
some cases practically rules out implementing some<br />
important units in a differential version. Further<br />
improvement of the ICs for front-end electronics<br />
therefore depends on the creation of LSICs<br />
containing n-p-n and p-n-p high-frequency<br />
transistor structures with unity-gain frequencies<br />
of at least (2…3)GHz, sufficiently close values of<br />
their basic electrical parameters, and dielectric<br />
insulation. In particular, that will make it<br />
possible to:<br />
• considerably increase the ratio of speed to<br />
power consumption – the major quality<br />
index of multichannel ICs; this is<br />
especially important for amplifiers and<br />
comparators driving a large capacitive<br />
load;<br />
• substantially increase (1.5 – 2 times) the<br />
integration scale by using p-n-p<br />
transistor structures as current-setting<br />
elements instead of resistors of high<br />
nominal value;<br />
• practically eliminate the mutual coupling of<br />
IC channels through the supply sources by<br />
introducing internal voltage stabilizers<br />
built with complementary transistors; such<br />
stabilizers must meet stringent requirements<br />
on the voltage drop (not exceeding<br />
100 – 300mV) and efficiency (not less than<br />
90%), whereas practice shows that a voltage<br />
stabilizer built with n-p-n transistors only<br />
cannot have a voltage drop of less than<br />
(1…1.5)V or an efficiency greater than<br />
(60…70)%.<br />
The development of an ASSA based on n-p-n<br />
and p-n-p high-frequency transistor structures is<br />
currently under way at a number of Russian<br />
enterprises.<br />
The created set of high-speed analog ICs<br />
has been successfully field-tested in the equipment<br />
of leading Russian and international research<br />
centers. It can find application not only in front-end<br />
electronic systems, but also in ecological and<br />
radiation monitoring, spectrometric material<br />
analysis, and space and medical research.<br />
Conclusion<br />
1. At present in Russia the following ASICs and<br />
LSIC for front-end electronic systems have been<br />
designed and put into production:<br />
• an ASIC containing a high-speed<br />
comparator and D-trigger, intended for<br />
application in fast timing (fractions of a<br />
nanosecond) circuits;<br />
• a 4-channel ASIC of an amplifier-shaper<br />
with differential input and output, intended<br />
for amplitude data processing<br />
(amplification, filtration);<br />
• a 4-channel ASIC of a differential low<br />
power comparator of the nanosecond<br />
range, intended for analog signal<br />
discrimination;<br />
• a 4-channel ASIC of a differential<br />
comparator of the nanosecond range,<br />
having an output stage with open collector<br />
and implementing the OR-function with<br />
four inputs;<br />
• the LSIC, implemented with the analog<br />
ASSA.<br />
2. The process of IC manufacture is based<br />
on the planar-epitaxial technology with<br />
insulation of elements by a reverse-biased<br />
p-n junction. Among the design and<br />
technology features the following should<br />
be regarded as basic:<br />
• the use of highly doped p+-layers<br />
(NS ~ 4*10^20 cm^-3) for element insulation,<br />
which in combination with the retention of<br />
the boron-silicate glass before the second<br />
stage of diffusion reduced the stray<br />
capacitances of the insulating junctions<br />
1.5-fold compared with the commonly used<br />
modes (NS ~ 4*10^18 cm^-3);<br />
• shallow p-n junctions, with depths of fractions<br />
of a um (base width hB ~ 0.1um), which ensures<br />
(with appropriate element layouts and<br />
manufacturing processes) a unity-gain frequency<br />
of about 7GHz for n-p-n transistor structures;<br />
• the use of antimony, boron and arsenic ion<br />
doping, which ensures high reproducibility<br />
of the electrophysical parameters of the layers<br />
over a wide range of dopant concentrations;<br />
• small sizes for the elements that most strongly<br />
influence the IC speed, in particular the use<br />
of the so-called full emitter; the minimal<br />
emitter size is limited, in its turn, by the<br />
requirements on the collector bulk resistance rC<br />
of the transistor structures;<br />
• the use of molybdenum as the barrier metal<br />
in Schottky diodes; compared with aluminum<br />
or platinum silicide, molybdenum has a<br />
considerably lower potential barrier φB, which<br />
made it possible to achieve simultaneously the<br />
specified forward voltage drops UD and low<br />
stray capacitances C;<br />
• the use of a double-level metallization based on<br />
aluminum doped with silicon (about 1%); a<br />
silicon dioxide film, partially doped with<br />
phosphorus, was used as the interlayer dielectric.<br />
3. On the basis of the designed ASICs and LSIC,<br />
PCUs have been implemented whose electrical<br />
parameters are on a par with those implemented<br />
with the ASD/BLR LSIC created at the University<br />
of Pennsylvania (USA) and widely used by<br />
research centers worldwide.
References<br />
1. E.Atkin, Yu.Volkov, S.Kondratenko, Yu.Mishin,<br />
A.Pleshko, "Analog signal preliminary processing<br />
ICs, based on bipolar semicustom arrays",<br />
Scientific Instruments, 1999, #5, pp.54-57.<br />
2. A.Goldsher, V.Kucherskiy, V.Mashkova,<br />
"Components of Fast Analog Integrated<br />
Microcircuits for Front-end Electronic Systems",<br />
Fourth Workshop on Electronics for LHC<br />
Experiments, Rome, September 21-25, 1998,<br />
pp.545-549.<br />
3. A.Goldsher, V.Kucherskiy, V.Mashkova,<br />
"A Semicustom Array Chip for Creating<br />
High-speed Front-end LSICs", Third Workshop<br />
on Electronics for LHC Experiments, London,<br />
September 22-26, 1997, pp.257-259.<br />
4. E.Atkin, Yu.Volkov, Yu.Mishin, V.Subbotin,<br />
V.Chernikov, "Electronic units for the collection<br />
and preliminary processing of multiwire chamber<br />
signals", Scientific Instruments, 1999, #5,<br />
pp.58-62.
New building blocks for the ALICE SDD readout and detector control system in a<br />
commercial 0.25µm CMOS technology<br />
Abstract<br />
This paper describes building blocks designed for the<br />
readout and detector control systems of the Silicon Drift<br />
Detectors (SDDs) of ALICE. In particular, the paper<br />
focuses on a low−dropout regulator, a charge redistribution<br />
ADC and a current steering DAC. All the parts have been<br />
developed in a commercial 0.25µm CMOS technology, using<br />
radiation tolerant layout practices.<br />
I. INTRODUCTION<br />
Silicon Drift Detectors will be used in the third and fourth<br />
layer of the Inner Tracking System of the ALICE experiment<br />
[1]. The specifications and the implementation of the<br />
electronics for the SDDs are described in detail in [2], [3],<br />
[4] and will not be covered here. For the purpose of this work<br />
it is sufficient to remember that the SDD system requires the<br />
development of four full−custom ASICs, which are used both<br />
for data processing and control tasks. Two of these chips are<br />
purely digital and two are mixed−mode designs. The<br />
building blocks presented in this paper will be integrated in<br />
the mixed−mode circuits.<br />
The more complex mixed−signal component is the<br />
front−end chip (named PASCAL), which performs the<br />
amplification, filtering and analogue−to−digital conversion<br />
of the detector signals. In its final implementation, the ASIC<br />
hosts 64 amplifiers, each coupled with a row of a switched<br />
capacitor array (SCA) having 256 cells. When a trigger<br />
signal is received, 32 on board ADCs convert the analogue<br />
data stored in the pipeline to a 10 bit digital code. A single<br />
ADC digitises the content of two adjacent rows of the SCA.<br />
The converter, described in detail in [5], is based on the<br />
successive approximation technique [6], which offers a very<br />
good trade−off between speed and power consumption. Due<br />
to the severe space constraints on the front−end board, the<br />
reference voltage for the ADCs must be provided on chip.<br />
The generation of this reference is an issue, because the<br />
voltage source will be heavily loaded by the ADCs, which<br />
operate in parallel. For these reasons, a dedicated low dropout<br />
regulator had to be developed. The design of this block as<br />
well as the first results of the experimental measurements are<br />
discussed in section II.<br />
A. Rivetti, G. Mazza, M. Idzik and F. Rotondo<br />
for the ALICE collaboration<br />
INFN, Sezione di Torino, Via Pietro Giuria 1 − 10125 Torino − Italy<br />
rivetti@to.infn.it<br />
The chip for the Detector Control System (DCS)<br />
performs the monitoring of vital parameters of the system<br />
and the control of some critical voltages and currents. For<br />
instance, this chip is used to measure the temperature of the<br />
electronics board. Additionally, it has to generate the<br />
amplitude−controlled signal which is needed to trigger the<br />
calibration circuitry of the detector. Therefore, an ADC and<br />
a DAC are the basic elements of the DCS ASIC. These<br />
circuits need to provide a medium resolution (8 bits), while<br />
minimising area and power consumption. They are<br />
described in section III and in section IV of this paper,<br />
respectively.<br />
II. THE LOW DROPOUT REGULATOR<br />
The low dropout regulator provides a stable DC voltage<br />
of 1.9V that is needed by the ADCs of the front−end chip.<br />
Depicted in Figure 1, the circuit uses a linear scheme [7],<br />
whose key elements are the error amplifier A1, the pass<br />
transistor P0 and the resistive feedback network formed by R1<br />
and R2. All these components are integrated on chip. The<br />
output voltage is derived from the reference voltage VBG,<br />
provided by a bandgap circuit.<br />
The capacitor C0 serves to filter the current spikes and<br />
provides the frequency compensation of the loop. Given the<br />
high value required for C0 (1µF), an external discrete<br />
component must be used. In fact, it is very important to<br />
assure that the regulator works in all conditions with an<br />
adequate phase margin. The value chosen for C0 guarantees<br />
that the feedback loop has always a phase margin of 76<br />
degrees.<br />
Figure 1 : LDO simplified scheme.
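In this linear scheme the output is fixed by the feedback divider, Vout = VBG · (1 + R1/R2), taking R2 as the grounded resistor. The paper does not quote VBG or the resistor values, so the numbers in the sketch below are assumptions chosen to reproduce the 1.9V target:<br />

```python
# Linear-regulator output from the feedback divider:
#   Vout = VBG * (1 + R1 / R2)
# VBG and the resistor ratio are assumed values (not quoted in the
# paper); they are chosen so the sketch reproduces the 1.9 V target.
VBG = 1.25           # V, a typical bandgap voltage (assumption)
R1_over_R2 = 0.52    # assumption, gives Vout = 1.9 V

vout = VBG * (1 + R1_over_R2)
print(round(vout, 2))  # 1.9
```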
The LDO can deliver to the load a maximum current of<br />
100mA. The static power consumption of the circuit is<br />
2.5mW and it occupies an area of 230 x 150 µm2 (excluding<br />
the bandgap reference, which will be shared by several<br />
circuits on the front−end chip).<br />
In order to check the functionality of the device before its<br />
integration in the final version of the front−end ASIC, a<br />
dedicated test chip has been produced and measured. The test<br />
results show a good agreement with the computer<br />
simulations. The circuit provides the required reference<br />
voltage with an overall accuracy of 1%. As an example of<br />
the performance of the circuit, Figure 2 reports the plot<br />
obtained for the load regulation. The current driven by the<br />
load is changed from zero to the maximum value of 100mA.<br />
The maximum change in the output voltage is 13 mV,<br />
resulting in a load regulation figure (defined as ∆V0/∆I0) of<br />
0.13 Ω.<br />
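The quoted figure follows directly from the measured numbers:<br />

```python
# Load regulation of the LDO, defined in the text as dV0/dI0:
delta_v = 13e-3     # V, maximum output-voltage change
delta_i = 100e-3    # A, load step from zero to full load
load_reg = delta_v / delta_i
print(round(load_reg, 2))  # 0.13 ohm
```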
Figure 2 : LDO load regulation performance (output voltage,<br />
1.905 – 1.920 V, versus time, 0 – 3000 ns, as the load current<br />
steps from zero to 100mA)<br />
The line regulation has been checked for Vdd ranging<br />
from 2.2 to 2.7 V, corresponding to the maximum power<br />
supply variation acceptable for the front−end chip. In this<br />
interval, the output voltage changes by 1mV giving a line<br />
regulation (∆V0/∆Vi) of 0.002. The measured noise at the<br />
output of the LDO is 250µV rms, but the accuracy of this<br />
measurement is limited by the experimental set−up.<br />
III. THE ANALOGUE TO DIGITAL CONVERTER<br />
The ADC is an evolution of the converter embedded in<br />
the front−end chip. In order to profit as much as possible<br />
from existing blocks, the same successive approximation<br />
topology used in PASCAL has been chosen. The main<br />
modification occurs in the DAC, in which the resolution has<br />
been traded for area. Figure 3 shows the block diagram of a<br />
typical implementation of an 8 bit charge redistribution ADC.<br />
The two basic elements of the circuit are a voltage<br />
comparator and a DAC made of binary weighted capacitors.<br />
The conversion is performed in two steps:<br />
• In the acquisition phase, all the capacitors are<br />
connected in parallel and are used to sample the input<br />
signal.<br />
• In the redistribution phase, the capacitors are used<br />
individually to generate binary fractions of the<br />
reference voltage, which are compared with the sample<br />
previously stored.<br />
Figure 3 : Schematic of a generic 8 bit ADC (comparator plus<br />
a binary weighted capacitor array C, C, 2C, 4C, … 128C,<br />
switched between VIN, VREF and GND)<br />
The conversion algorithm is described in detail in [6]. For<br />
the purpose of our discussion, it is sufficient to observe that<br />
if the DAC is implemented exactly with the scheme of<br />
Figure 3, its area doubles for every extra bit of resolution<br />
required. For instance, an 8 bit DAC occupies twice the area<br />
needed by a 7 bit one.<br />
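The successive approximation search that the charge redistribution hardware implements can be sketched in a few lines; this is an idealised behavioural model (no capacitor mismatch, offset or noise), not the actual PASCAL circuit:<br />

```python
def sar_convert(vin, vref=1.0, bits=8):
    """Idealised successive-approximation search, as performed by a
    charge redistribution ADC: each trial compares the stored sample
    with a binary fraction of the reference (behavioural sketch)."""
    code = 0
    for i in range(bits - 1, -1, -1):
        trial = code | (1 << i)                  # tentatively set bit i
        if trial * vref / (1 << bits) <= vin:    # DAC level vs. sample
            code = trial                         # keep the bit
    return code

# Mid-scale input on a 1 V full scale lands on the mid code:
print(sar_convert(0.5))  # 128
```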
In order to reduce the surface of the DAC, the scheme of<br />
Figure 4 has been used. In this approach, the DAC has been<br />
split into two blocks, each with a 5 bit capability.<br />
Figure 4 : Schematic of the implemented ADC with<br />
segmented DAC (two 5 bit capacitor banks, C to 16C,<br />
capacitively coupled)<br />
The first DAC also samples the input signal and is<br />
capacitively coupled to the second DAC, which is used only<br />
during the redistribution phase.<br />
In this way, two advantages are achieved:<br />
• The total area is equivalent to the area of a 6 bit DAC,<br />
which is four times smaller than the area of a direct 8 bit<br />
DAC.<br />
• Since the sampling is performed only by the main DAC,<br />
the input capacitance of the ADC is reduced by a factor of<br />
eight with respect to a straightforward 8 bit design.<br />
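Counting unit capacitors (the extra terminating C plus the binary-weighted bank, as in Figures 3 and 4) confirms both advantages:<br />

```python
# Unit-capacitor count of a binary-weighted bank: a terminating C
# plus C, 2C, ..., 2^(bits-1)*C (a sketch of the area argument).
def units(bits):
    return 1 + sum(2 ** k for k in range(bits))

direct_8    = units(8)            # straightforward 8 bit DAC: 256 units
main_bank   = units(5)            # 32 units, also used for sampling
second_bank = units(5)            # 32 units, redistribution only
segmented   = main_bank + second_bank

print(segmented == units(6))      # True: same area as a 6 bit DAC
print(direct_8 // segmented)      # 4: area saving
print(direct_8 // main_bank)      # 8: input-capacitance reduction
```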
The matching between the capacitors is critical to obtain<br />
a good linearity in the ADC. To improve the matching, in<br />
the layout only a fundamental cell is used and the bigger<br />
capacitors are built by connecting a suitable number of<br />
elementary cells in parallel. Actually, the structure of Figure<br />
4 is more sensitive than the architecture of Figure 3 to the<br />
effect of random mismatches. This point can be understood<br />
by noting that in the ADC of Figure 4 the capacitor which<br />
determines the MSB is formed by 16 unit elements, whereas<br />
in Figure 3 it is composed of 128 individual cells. In the<br />
latter case, the MSB capacitor will be more precise, because<br />
it uses more elements in parallel and the random errors<br />
affecting each one tend to average out. For this reason, the<br />
ADC is built using two 5 bit DACs instead of two 4 bit<br />
DACs, which in principle would be sufficient to reach a total<br />
8 bit resolution.<br />
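The averaging argument can be illustrated with a small Monte Carlo experiment; the 1% unit-cell mismatch and the Gaussian error model are illustrative assumptions, not measured values:<br />

```python
import random

# Monte Carlo sketch of the averaging argument: a capacitor built from
# N nominally identical unit cells has a relative error that shrinks
# roughly as 1/sqrt(N). The 1% unit-cell mismatch (sigma) and the
# Gaussian error model are assumptions for illustration.
def msb_relative_sigma(n_units, sigma=0.01, trials=4000, seed=7):
    rng = random.Random(seed)
    errs = []
    for _ in range(trials):
        total = sum(1 + rng.gauss(0, sigma) for _ in range(n_units))
        errs.append(total / n_units - 1)          # relative deviation
    mean = sum(errs) / trials
    return (sum((e - mean) ** 2 for e in errs) / trials) ** 0.5

# MSB from 128 units (Figure 3 style) vs. 16 units (Figure 4 style):
print(msb_relative_sigma(128) < msb_relative_sigma(16))  # True
```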
It must also be observed that the circuit of Figure 3 could<br />
be used to implement a 10 bit ADC. However, for the<br />
matching reasons mentioned above this resolution cannot be<br />
assured if the individual capacitor is implemented with the<br />
minimum size allowed by the technology. In our application<br />
a 10 bit resolution is not required, whereas the area of the<br />
circuit is important. Therefore, the minimum size<br />
capacitance has been used, targeting 8 bit performance.<br />
Nevertheless, two extra bits have been made available for<br />
test purposes.<br />
The layout of the circuit has an area of 300 x 800 µm2,<br />
which is about 60% of what would be required by the direct<br />
implementation of Figure 3.<br />
A test chip containing three identical converters has<br />
been designed and fabricated. The ADC is tested by sending<br />
to the circuit a sinusoidal input signal and reading the digital<br />
output codes with a logic state analyser. The data are then<br />
processed with dedicated software to calculate the DNL,<br />
the INL, and the FFT. In the measurements, the ADC is<br />
operated with a full scale range of 1V, which results in a<br />
LSB of 4mV.<br />
During the test, the circuit is running with a clock of<br />
40MHz. Since two clock cycles are left to sample the input<br />
signal and to compensate the offset in the comparator [5], 10<br />
clock periods are required for a complete 8 bit conversion.<br />
The sampling frequency is hence 4Msample/sec. The circuit<br />
is powered with a single rail power supply of 2.5V and<br />
dissipates 3mW.<br />
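The sampling rate follows from the cycle count:<br />

```python
# ADC conversion timing: one clock period per bit trial, plus two
# cycles to sample the input and compensate the comparator offset.
clock_hz = 40e6
cycles_per_conversion = 8 + 2
sample_rate = clock_hz / cycles_per_conversion
print(int(sample_rate))  # 4000000, i.e. 4 Msample/s
```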
Figure 5 shows the result of a typical FFT obtained<br />
in the measurements.<br />
Figure 5 : FFT measured for the ADC.<br />
As can be seen from the figure, the second harmonic is more<br />
than 48dB below the fundamental, so the ADC has very low distortion.<br />
The converter has a maximum DNL of 0.8 LSB, which is<br />
sufficient to guarantee an 8 bit resolution without missing<br />
codes. To give a more intuitive view of the performance of<br />
the circuit, Figure 6 shows the result of the conversion of a<br />
full scale ramp.<br />
Figure 6 : Digitization of a full scale ramp.<br />
IV. THE DIGITAL TO ANALOGUE CONVERTER<br />
The 8 bit DAC is the other analogue building block for<br />
the DCS chip. It must be observed that the switched capacitor<br />
DAC used in the ADC is very compact, but it suffers from a<br />
number of drawbacks, which make it inadequate<br />
to work as a stand−alone component. For instance, the<br />
parasitic capacitance on the top plate attenuates the voltage<br />
steps generated by the DAC. This and other phenomena are<br />
of minor concern in an ADC, since they affect in the same<br />
way both the signal and the partitions of the reference<br />
voltage that are used in the conversion. However, they can<br />
determine severe limitations when the DAC is used just to<br />
convert a digital code to an analogue level. For these reasons<br />
it has been necessary to develop a new circuit, using a<br />
different approach.<br />
The circuit is based on a current steering configuration<br />
and is made of an array of current sources and a digital<br />
control logic. In order to improve as much as possible the<br />
matching between the sources and hence the linearity, a fully<br />
thermometric segmentation has been chosen. Therefore, the<br />
DAC is formed by 256 identical cells, which are organised in<br />
a 16x16 matrix.<br />
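In a fully thermometric DAC the control logic simply enables as many identical cells as the input code, so the transfer characteristic is monotonic by construction. The sketch below models this; the 5µA cell current is from the paper, while the load resistor is a hypothetical value (the paper offers either a resistor or a transimpedance amplifier):<br />

```python
# Thermometric current-steering DAC model: code N switches on N
# identical 5 uA cells; the summed current is converted to a voltage
# by a load resistor (R_LOAD is a hypothetical value).
I_UNIT = 5e-6       # A per unit cell (from the paper)
R_LOAD = 1000.0     # ohm, illustrative assumption

def dac_out(code):
    assert 0 <= code <= 255
    return code * I_UNIT * R_LOAD   # one enabled cell per LSB

# Monotonic by construction: each code step only adds one source.
print(round(dac_out(255), 3))  # 1.275
```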
The schematic of the elementary bit cell is shown in<br />
Figure 7. The cell is essentially a cascode current mirror,<br />
which copies a reference current. The sizes of the transistor<br />
are dictated by the need of having an adequate area and<br />
overdrive voltage, in order to get an appropriate matching. If<br />
the cell is selected by the control logic, the switch driven by<br />
sel_b is closed and the current is directed towards the output<br />
node. At the output node, all the currents generated by the<br />
individual mirrors are summed and converted to a voltage.<br />
The current−to−voltage conversion can be carried out either<br />
by a simple resistor or by a transimpedance amplifier. Both
options are available in the DAC.<br />
Each elementary cell is biased with a tail current of 5µA,<br />
resulting in a static power consumption of 3mW for the whole<br />
analogue part of the DAC.<br />
The circuit has been optimised to reach a conversion<br />
speed of at least 50Msamples/sec.<br />
Figure 7 : Schematic of the unit cell of the DAC.<br />
Shown in Figure 8, the layout of the complete DAC has<br />
an area of 1 x 0.6 mm2. The test chip is in production and no<br />
experimental result is available at the time of writing.<br />
Figure 8 : Layout of the full DAC<br />
V. CONCLUSIONS<br />
The front−end and detector control electronics of the<br />
SDD of ALICE requires complex analogue building blocks.<br />
The most critical circuits have been prototyped, in order to<br />
check their functionality before their integration in the final<br />
ASICs. These circuits are a low dropout regulator, an ADC<br />
and a DAC with moderate resolution (8 bits) but reduced<br />
area.<br />
The low dropout regulator is needed to generate on board<br />
the reference voltage for the ADC of the front−end chip. The<br />
circuit integrates all the critical parts except the filtering<br />
capacitor and has shown characteristics which are adequate<br />
for the application. The output voltage changes by 1mV for a<br />
change of 0.5V in the power supply. The regulator is able to<br />
provide a maximum current of 100mA with an output<br />
voltage drop of 13mV. The noise is less than 200µV rms.<br />
The analogue to digital converter uses a segmented<br />
architecture in the DAC, in order to minimise the area as<br />
much as possible. The area of the ADC is 0.24mm2. The<br />
circuit has a resolution of 8 bits over a full scale range of 1V<br />
and operates at a speed of 4Msamples/sec, dissipating 3mW<br />
from a 2.5V supply.<br />
The last building block that has been developed is a<br />
current steering digital to analogue converter. This circuit is<br />
still in production and it will be tested in the forthcoming<br />
months.<br />
All three designs discussed in this paper have been<br />
implemented in a 0.25µm CMOS technology, using enclosed<br />
layout transistors and guardrings to prevent the damage from<br />
ionising radiation.<br />
VI. REFERENCES<br />
[1] The ALICE collaboration, "Technical proposal of the<br />
ALICE experiment", CERN/LHCC 95−71, Dec. 1995.<br />
[2] The ALICE collaboration, "Technical design report of<br />
the Inner Tracking System", CERN/LHCC 99−12, June<br />
1999, pp 83−173.<br />
[3] A. Rivetti, G. Anelli, F. Anghinolfi, G. Mazza, P.<br />
Jarron, "A Mixed−Signal ASIC for the Silicon Drift<br />
Detectors of the ALICE Experiment in a 0.25µm CMOS",<br />
Proceedings of the Sixth Workshop on Electronics for LHC<br />
Experiments CERN, LHCC/2000−041, Oct. 2000.<br />
[4] G. Mazza et al., "Test Results of the Front−end<br />
System for the Silicon Drift Detectors of ALICE",<br />
Proceedings of the Seventh Workshop on Electronics for<br />
LHC Experiments, Stockholm, Sept. 2001.<br />
[5] A. Rivetti, G. Anelli, F. Anghinolfi, G. Mazza and F.<br />
Rotondo, "A Low−Power 10 bit ADC in a 0.25µm CMOS:<br />
Design Considerations and Test Results", IEEE Transactions<br />
on Nuclear Science, Aug. 2001.<br />
[6] J. L. McCreary and P. Gray, "All−MOS Charge<br />
Redistribution Analog−to−Digital Conversion Techniques −<br />
Part I", IEEE Journal of Solid State Circuits, vol. SC 10, no.<br />
6, pp. 371−379, Dec. 1975.<br />
[7] Texas Instruments, "Technical Review of low<br />
Dropout Regulator Operation and Performance" application<br />
note SLVA072, Aug. 1999.