Sessions - DPG-Tagungen
Nuclear Physics Tuesday<br />
HK 18 Instrumentation and Applications II<br />
Time: Tuesday 15:30–18:30 Room: C<br />
HK 18.1 Tue 15:30 C<br />
Particle identification in the HADES experiment∗ — •T.<br />
Christ¹, M. Golubeva², R. Holzmann³, M. Jaskula⁴, J.<br />
Mousa⁵, P. Salabura⁴, P. Tlusty⁶, T. Wojcik⁴, and D.<br />
Zovinec³ for the HADES collaboration — ¹TU München, Garching —<br />
²INR, Moscow, Russia — ³GSI, Darmstadt — ⁴Jagiell. Univ. Cracow,<br />
Poland — ⁵Univ. of Cyprus, Nicosia, Cyprus — ⁶CAS, Rez, Czech<br />
Republic<br />
The HADES experiment at GSI is designed to reconstruct lepton pairs<br />
from decays of vector mesons produced in hadron and heavy ion induced<br />
nuclear reactions at energies of 1–2 AGeV. For a quantitative<br />
understanding of the measured data it is necessary to extract the absolute<br />
abundances of particle species emitted from the collision zone and<br />
to perform event-by-event track identification. In the HADES analysis<br />
package HYDRA the particle identification (PID) software assigns particle<br />
species to tracks reconstructed from various detector signals. The<br />
HADES PID ansatz is based on Probability Density Functions (PDFs)<br />
and Bayes’ theorem. This ansatz requires measurements and simulations<br />
of particle properties and detector response and a statistical analysis of<br />
correlations among particle signatures. The method and first results from<br />
the PID analysis of data taken in C+C collisions are presented.<br />
∗ supported by BMBF (06MT190) and GSI (TM-FR1, TM-KR1).<br />
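The PDF-and-Bayes ansatz described above can be illustrated with a minimal sketch. The Gaussian velocity PDFs, the species list, and the prior abundances below are invented for demonstration only; they are not the actual HYDRA parameterizations.

```python
import math

# Sketch of the Bayesian PID ansatz:
#   P(species | signature)  ∝  pdf(signature | species) * prior(species)
# All numerical values below are illustrative assumptions.

def gaussian_pdf(x, mean, sigma):
    """One-dimensional Gaussian probability density."""
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Assumed velocity (beta) response per species at some fixed momentum.
PDF_PARAMS = {"e-": (0.999, 0.010), "pi-": (0.950, 0.020), "K-": (0.800, 0.030)}
PRIORS = {"e-": 0.01, "pi-": 0.90, "K-": 0.09}  # assumed relative abundances

def pid_probabilities(beta_measured):
    """Posterior probability for each species via Bayes' theorem."""
    weighted = {s: gaussian_pdf(beta_measured, *PDF_PARAMS[s]) * PRIORS[s]
                for s in PDF_PARAMS}
    norm = sum(weighted.values())
    return {s: w / norm for s, w in weighted.items()}

probs = pid_probabilities(0.95)   # posterior for one measured track velocity
best = max(probs, key=probs.get)  # species assigned to the track
```

In the real analysis the PDFs come from measurements and simulations of the detector response, and several correlated signatures per track enter the likelihood rather than a single velocity measurement.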
HK 18.2 Tue 15:45 C<br />
Prototype of a Dedicated Multi-Node Data Processing<br />
System for Realtime Trigger and Analysis Applications —<br />
•Daniel Kirschner, Marco Destefanis, Roukaia Djeridi,<br />
Ingo Fröhlich, Wolfgang Kühn, Jörg Lehnert, Tiago Perez,<br />
and Alberica Toia for the HADES collaboration — II. Phys. Inst.<br />
Giessen, Heinrich-Buff-Ring 16, 35392 Giessen<br />
Modern experiments in hadron physics like the HADES detector at<br />
GSI-Darmstadt produce a large amount of data that has to be distributed,<br />
stored, and analyzed. Analysis of this data is very time consuming<br />
due to the data volume and the complex algorithms needed.<br />
This problem can be addressed by a dedicated multi-node and multi-CPU<br />
computing architecture interconnected by Gigabit-Ethernet. Dedicated<br />
hardware has advantages over “Grid computers” in scalability, price<br />
per computational unit, predictability of timing behavior (enabling real-time<br />
applications), and ease of administration. Gigabit Ethernet provides<br />
an efficient and standardized infrastructure for data distribution. This<br />
infrastructure can be used to distribute data in an experiment as well<br />
as to distribute data in a multi-node computing environment. The prototype<br />
VME bus card consists of two major units: a network unit featuring<br />
two Gigabit Ethernet over copper connections and a computational part<br />
featuring a TigerSHARC digital signal processor. The discussion will<br />
concentrate on the real-life performance of Gigabit Ethernet as the main<br />
topic. Supported by BMBF and GSI.<br />
HK 18.3 Tue 16:00 C<br />
Performance of the Trigger System of the HADES Detector<br />
in C+C reactions at 1–2 AGeV — •Jörg Lehnert¹, Alberica<br />
Toia¹, Roukaia Djeridi¹, Ingo Fröhlich¹, Wolfgang Koenig²,<br />
Wolfgang Kühn¹, Tiago Perez¹, James Ritman¹, Daniel<br />
Schäfer¹, and Michael Traxler² for the HADES collaboration<br />
— ¹II. Physikalisches Institut, Univ. Gießen — ²Gesellschaft für<br />
Schwerionenforschung, Darmstadt<br />
The HADES detector at GSI Darmstadt investigates the dilepton production<br />
in hadron and heavy ion induced reactions up to 2 AGeV. The<br />
second level trigger system (LVL2) is designed to reduce the event rate<br />
by selecting lepton pairs within a given invariant mass window. The<br />
hardware-based LVL2 consists of Image Processing Units (IPU) which<br />
perform pattern recognition to detect lepton signatures in different subdetectors<br />
and a Matching Unit (MU) which combines the position and<br />
momentum information of these signatures into tracks to select events<br />
with lepton pairs of given invariant mass.<br />
Since the recognition of Cherenkov rings is the most selective algorithm<br />
of the trigger, its behavior has been studied with the help of simulations<br />
and by comparing it to the offline analysis algorithm. This made it possible to<br />
study and optimize the performance of the second-level trigger and to<br />
improve the analysis of the dilepton content of the C+C reactions<br />
at 1–2 AGeV in terms of efficiency and purity of the signal. Results<br />
collected during the experiment runs in November 2001 and November<br />
2002 are presented.<br />
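The Cherenkov-ring recognition mentioned above can be sketched as a simple pattern search on the photon-detector pad plane. The pad-plane size, ring radius, and thresholds below are assumed values for demonstration, not the HADES RICH parameters.

```python
# Toy ring-pattern search: for each candidate centre, count fired pads on an
# annulus of the expected ring radius and veto centres with too many hits
# inside the ring. All parameters are illustrative assumptions.

RING_R = 4          # assumed ring radius in pad units
RING_TOL = 1        # assumed radial tolerance
MIN_RING_PADS = 8   # assumed minimum number of fired pads on the ring
MAX_INNER_PADS = 2  # assumed veto on fired pads inside the ring

def find_ring_centres(fired_pads, nx=32, ny=32):
    """Scan an nx-by-ny pad plane for centres with a ring-like hit pattern.

    fired_pads: iterable of (x, y) pad coordinates above threshold.
    """
    lo2 = (RING_R - RING_TOL) ** 2
    hi2 = (RING_R + RING_TOL) ** 2
    centres = []
    for cx in range(nx):
        for cy in range(ny):
            on_ring = inner = 0
            for (x, y) in fired_pads:
                r2 = (x - cx) ** 2 + (y - cy) ** 2
                if lo2 <= r2 <= hi2:
                    on_ring += 1
                elif r2 < lo2:
                    inner += 1
            if on_ring >= MIN_RING_PADS and inner <= MAX_INNER_PADS:
                centres.append((cx, cy))
    return centres
```

The hardware IPUs implement this kind of mask-based search massively in parallel; the brute-force scan here only shows the selection logic.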
HK 18.4 Tue 16:15 C<br />
New VHDL design for the HADES TOF Trigger Unit — •T.<br />
Pérez for the HADES collaboration — II. Physikalisches Institut, Universität<br />
Gießen<br />
The HADES spectrometer is intended to investigate dilepton production,<br />
especially from vector meson decays. The branching ratios of vector<br />
mesons into dileptons are low, on the order of 10⁻⁵. To collect the necessary<br />
statistics, high reaction rates of up to<br />
10⁵ central heavy ion reactions per second must be used. To reduce such an<br />
amount of data, a sophisticated online trigger system is employed. It<br />
combines several trigger modules for distribution (CTU and DTU)<br />
and detector-specific processing (IPUs), with maximum speed as the goal. This<br />
led us to upgrade our detector-specific trigger units to a newer, more<br />
flexible, and faster design, and finally to adopt VHDL, the standard<br />
hardware description language in electronics, as a tool to reach that goal.<br />
HK 18.5 Tue 16:30 C<br />
High Level Trigger Identification of High Energy Jets in ALICE<br />
— •Constantin Loizides for the ALICE collaboration — August-<br />
Euler-Str. 6, 60487 Frankfurt<br />
One interesting observable at ALICE will be the measurement of the<br />
inclusive jet cross section at 100 to 200 GeV transverse jet energy and<br />
its fragmentation function, both to be compared to pp. The window<br />
of about 100 to 200 GeV transverse jet energy is compatible with the<br />
expected pt resolution of the inner tracking complex (ITS+TPC+TRD)<br />
and the energy of the leading parton, but requires HLT online processing<br />
of TPC data at a rate of 200 Hz of central PbPb events in order to collect<br />
sufficient statistics. The online trigger algorithm running on the HLT<br />
system is based on charged-particle tracking and jet recognition using a cone jet<br />
finder algorithm, which might be improved by the additional online evaluation<br />
of the EM calorimeter towers. The dependence of the efficiency<br />
versus the selectivity of the trigger has been studied by simulations of pp<br />
and PbPb interactions using PYTHIA and HIJING events. For a chosen<br />
parameter set of the trigger algorithm, the triggered events form the relevant<br />
sample of events to be further analyzed offline. Their resulting<br />
jet Et distribution and the corresponding fragmentation functions might<br />
be sensitive to different jet attenuation scenarios.<br />
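A seeded cone jet finder of the kind named above can be sketched as follows. The cone radius, seed threshold, and toy track list are assumptions for illustration; the actual HLT algorithm and its parameter set differ.

```python
import math

# Toy seeded cone jet finder on charged tracks (pt, eta, phi):
# take high-pt seeds in descending pt order, sum the pt of all unused
# tracks within a cone in eta-phi space, and trigger on the jet Et.
# All thresholds are illustrative assumptions.

CONE_R = 0.7        # assumed cone radius in eta-phi space
SEED_PT = 5.0       # assumed seed threshold (GeV)
JET_ET_CUT = 100.0  # lower edge of the 100-200 GeV window from the text

def delta_r(eta1, phi1, eta2, phi2):
    """Distance in eta-phi space, with phi wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def cone_jets(tracks):
    """Group tracks into cones around high-pt seeds; return the jet Et list."""
    jets = []
    used = set()
    for i, (pt, eta, phi) in sorted(enumerate(tracks), key=lambda t: -t[1][0]):
        if pt < SEED_PT or i in used:
            continue
        members = [j for j, (p, e, f) in enumerate(tracks)
                   if j not in used and delta_r(eta, phi, e, f) < CONE_R]
        used.update(members)
        jets.append(sum(tracks[j][0] for j in members))
    return jets

def trigger(tracks):
    """Fire when any cone jet reaches the Et window lower edge."""
    return any(et >= JET_ET_CUT for et in cone_jets(tracks))
```

Charged tracking only sees part of the jet energy, which is one reason the abstract notes that adding EM calorimeter towers to the online evaluation might improve the trigger.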
HK 18.6 Tue 16:45 C<br />
A Fault Tolerant Data Flow Framework for Clusters<br />
— •Timm M. Steinbeck for the Alice High Level Trigger<br />
collaboration — Kirchhoff Institute of Physics, Ruprecht-Karls-<br />
University Heidelberg, Im Neuenheimer Feld 227, D-69120 Heidelberg,<br />
http://www.ti.uni-hd.de/HLT/<br />
The ALICE experiment’s High Level Trigger (HLT) has to reduce the<br />
data rate of up to 25 GB/s to at most 1.25 GB/s before permanent<br />
storage. To cope with these rates a PC cluster system of several 100<br />
nodes connected by a fast network is being designed. For the system’s<br />
software an efficient, flexible, and fault tolerant data transport software<br />
framework is being developed. It consists of components, connected via<br />
a common interface, allowing different configurations to be constructed<br />
that can even be changed at runtime. To ensure fault-tolerant operation, the<br />
framework includes fail-over mechanisms to replace whole nodes as well<br />
as to restart and reconnect components during runtime of the system.<br />
The latter functionality utilizes the runtime reconnection feature of the<br />
component interface. To connect components on different cluster nodes,<br />
a communication class library is used that abstracts from the underlying network<br />
to retain flexibility in the hardware choice. It contains two working prototype<br />
versions, for TCP and the SCI SAN. Extensions can be added<br />
to this library without modifications to other parts of the framework.<br />
Performance tests show very promising results, indicating that ALICE’s<br />
requirements concerning the data transport can be fulfilled. In a test<br />
with simulated proton-proton data for a part of the TPC an event rate<br />
of more than 430 Hz was achieved with full tracking being performed.
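The component/fail-over idea can be sketched as follows. The class names and the in-memory chain are stand-ins for the real framework's networked components; the actual system restarts and reconnects processes across cluster nodes over TCP or SCI.

```python
# Minimal sketch of a data-flow chain of components behind a common
# interface, with restart-and-reconnect on failure. Names and the
# in-memory "chain" are illustrative assumptions.

class Component:
    """Common interface: consume one event block, hand it downstream."""
    def __init__(self, name, work):
        self.name = name
        self.work = work      # the processing step this component performs
        self.alive = True

    def process(self, event):
        if not self.alive:
            raise RuntimeError(f"{self.name} is down")
        return self.work(event)

class Chain:
    """A configurable chain of components with simple restart-on-failure."""
    def __init__(self, components):
        self.components = components
        self.restarts = 0

    def run(self, event):
        for c in self.components:
            try:
                event = c.process(event)
            except RuntimeError:
                c.alive = True        # restart the failed component...
                self.restarts += 1    # ...and reconnect it into the chain
                event = c.process(event)
        return event
```

The common interface is what makes the reconnection transparent: the chain only knows that every stage accepts an event block and returns one, so a restarted component can be slotted back in at runtime.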