
CBM Progress Report 2006: FEE and DAQ

Developments for a future DAQ framework DABC

J. Adamczewski¹, H.G. Essel¹, N. Kurz¹, and S. Linev¹

¹ GSI, Darmstadt, Germany

Requirements and concept

The Data Acquisition Backbone Core DABC will provide a general software framework for DAQ tasks over the next years. It serves as a test bed for FAIR detector tests, readout component tests, data flow investigations (switched event building), and DAQ controls. In particular, the system must be able to handle the large data bandwidth of experiments with self-triggered front-end electronics such as CBM [1]. Additionally, it is necessary to integrate the current GSI standard data acquisition system MBS [2]. The large installed base of MBS equipment cannot be replaced; instead, MBS-driven front-end components (readout) should be attachable as data sources to the new framework. DABC replaces the MBS event-building functionality.

The XDAQ C++ software framework [3], developed for the CMS experiment at CERN, was chosen as the basis for the first implementation of DABC. It features:

- Task management: one node may contain several XDAQ Executives (processes); each Executive may contain XDAQ Applications as threads, and each Application may create additional threads (workloops).
- Data transfer management: Peer Transport and Messenger interfaces.
- Hardware integration: Hardware Access Library.
- Control support: state machines; process-variable Infospaces; message and error loggers; a web server for each Executive.
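As an illustration of how these framework hooks appear to application code, the following is a minimal sketch of an XDAQ Application that publishes one monitored value in its Infospace. It assumes the xdaq::Application base class, the XDAQ_INSTANTIATOR macros, and the xdata variable types of the XDAQ release used here; exact headers and signatures may differ between versions, and the class and variable names are illustrative only.

    // Minimal sketch of an XDAQ Application exporting one monitored value
    // through its application Infospace (assumed API; headers, macros and
    // signatures may differ between XDAQ versions).
    #include "xdaq/Application.h"
    #include "xdaq/ApplicationStub.h"
    #include "xdata/UnsignedInteger32.h"

    class DabcTestApplication : public xdaq::Application
    {
    public:
        XDAQ_INSTANTIATOR();

        DabcTestApplication(xdaq::ApplicationStub* stub)
            : xdaq::Application(stub), eventRate_(0)
        {
            // Publish the variable in the application Infospace so that the
            // control layer (SOAP/web server, or the DIM adapter described
            // later in this report) can monitor it.
            getApplicationInfoSpace()->fireItemAvailable("eventRate", &eventRate_);
        }

    private:
        xdata::UnsignedInteger32 eventRate_;  // example monitored value
    };

    XDAQ_INSTANTIATOR_IMPL(DabcTestApplication)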

Evaluation and testing

The development work so far has concentrated on performance and functionality evaluations.

Data transport

Data transport on a fast switched network has been investigated on a small InfiniBand (IB) Linux cluster installed at GSI in 2005. An XDAQ Peer Transport over IB, based on the uDAPL library, was implemented to check the performance of IB data transfer with the XDAQ I2O messaging mechanism. For packet sizes P ≥ 15 kByte the bandwidth B saturated at 905 MByte/s, to be compared with 955 MByte/s for measurements with direct uDAPL. However, the rise of the B(P) curve for small packets was less steep for the XDAQ transport, since it is governed by the minimum transfer time τmin (the "latency" overhead of the framework), following (dB/dP)_(P→0) = 1/τmin. Depending on the benchmark setup, XDAQ showed values τmin ≈ 10–30 µs, well exceeding the plain uDAPL latency of τmin ≈ 4 µs.
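The effect of this latency overhead can be made concrete with a simple model consistent with the quoted numbers (an illustration only, not the benchmark code; the mid-range value τmin = 20 µs for XDAQ is an assumption): a transfer of P bytes takes t(P) = τmin + P/Bsat, so B(P) = P/t(P), which has slope 1/τmin for P → 0 and saturates at Bsat for large P.

    // Illustrative latency model for the measured B(P) curves (not the
    // actual benchmark code): t(P) = tau_min + P/B_sat, B(P) = P/t(P).
    #include <cstdio>

    int main()
    {
        const double B_sat_xdaq  = 905e6;   // measured saturation bandwidth, byte/s
        const double B_sat_udapl = 955e6;
        const double tau_xdaq    = 20e-6;   // assumed mid-range XDAQ latency, s
        const double tau_udapl   = 4e-6;    // plain uDAPL latency, s

        for (double P = 1e3; P <= 64e3; P *= 2) {   // packet sizes in byte
            double b_xdaq  = P / (tau_xdaq  + P / B_sat_xdaq);
            double b_udapl = P / (tau_udapl + P / B_sat_udapl);
            std::printf("P = %6.0f byte : XDAQ %6.1f MByte/s, uDAPL %6.1f MByte/s\n",
                        P, b_xdaq / 1e6, b_udapl / 1e6);
        }
        return 0;
    }

For P = 1 kByte this model gives about 47 MByte/s with the assumed 20 µs framework latency versus roughly 198 MByte/s with the 4 µs uDAPL latency, which illustrates why the XDAQ curve rises much more slowly for small packets.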

Hardware access

As a general software interface to attach DAQ hardware such as readout boards, XDAQ provides a Hardware Access Library (HAL) package [3]. It defines base classes for user-space communication with boards on a bus (e.g. PCI or VME). We implemented HAL BusAdapter and DeviceIdentifier classes for a generic PCI/PCIe driver of the Mannheim FPGA group*.

The new HAL classes were tested with the available GSI PCIGTB2 board. It was possible to access the board from an XDAQ Application, setting up registers and reading from and writing to the PCIGTB2 internal memory. Although the tests showed that the general HAL interface is not sufficient for all cases (e.g. DMA, exact I/O timing), it turned out that the missing features can be implemented as methods of the specific HardwareDevice class.
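Conceptually, what such a BusAdapter implementation has to provide is plain user-space register access through the kernel driver. The sketch below shows the underlying mechanism with a memory-mapped PCI BAR; the device node name, BAR size, and register offsets are hypothetical placeholders and do not describe the actual Mannheim driver or the HAL class interfaces.

    // Conceptual user-space register access as a custom BusAdapter would
    // provide it: map the PCI BAR exported by the kernel driver and
    // read/write 32-bit registers through the mapping. All names and
    // offsets are hypothetical placeholders.
    #include <cstdint>
    #include <cstdio>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main()
    {
        const char*  device   = "/dev/fpga0";   // placeholder device node
        const size_t bar_size = 0x1000;         // placeholder BAR size

        int fd = open(device, O_RDWR);
        if (fd < 0) { std::perror("open"); return 1; }

        void* bar = mmap(nullptr, bar_size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (bar == MAP_FAILED) { std::perror("mmap"); close(fd); return 1; }

        volatile uint32_t* regs = static_cast<volatile uint32_t*>(bar);
        regs[0x10 / 4] = 0x1;                   // write a (hypothetical) control register
        uint32_t status = regs[0x14 / 4];       // read back a (hypothetical) status register
        std::printf("status register: 0x%08x\n", status);

        munmap(bar, bar_size);
        close(fd);
        return 0;
    }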

Control System

XDAQ offers an HTTP server on each node to exchange control messages and monitoring data via the SOAP protocol. We developed a simple prototype of a control GUI as a Java application. However, every update of a monitored value requires an active HTTP/SOAP request from the GUI.

An improved approach for monitoring is a publisher-subscriber model, where each GUI registers to be updated automatically whenever a variable changes in the monitored application. DIM [4] is a well-established protocol library for this purpose. We developed adapter classes to run a DIM server inside the XDAQ Executive: XDAQ Infospace variables are exported as DIM services, and the XDAQ Application state machine can be switched by DIM commands. The DIM server provides control access from any other DIM-interfaced package, such as the LabVIEW-DIM interface of the GSI CS framework [5] for a test control GUI, or the EPICS-DIM gateway currently under development at GSI (http://wiki.gsi.de/Epics).
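The server side of such an adapter can be reduced to a few lines of DIM code. The following is a minimal sketch assuming the standard DIM C++ classes (DimService, DimCommand, DimServer from dis.hxx); the service and command names are illustrative, and the real adapter classes wire these calls into the XDAQ Infospace and state machine instead of the dummy update loop shown here.

    // Minimal DIM server sketch: publish one value as a DIM service and
    // accept one DIM command, which in the real adapter would trigger an
    // XDAQ state-machine transition. Service/command names are illustrative.
    #include <dis.hxx>      // DIM server classes: DimService, DimCommand, DimServer
    #include <cstdio>
    #include <unistd.h>

    class StateCommand : public DimCommand
    {
    public:
        StateCommand() : DimCommand("DABC/TEST/STATE_CMD", "C") {}

        void commandHandler() override
        {
            // In the adapter this string would be mapped onto an XDAQ
            // state-machine transition (e.g. "Configure", "Enable", "Halt").
            std::printf("received state command: %s\n", getString());
        }
    };

    int main()
    {
        int eventRate = 0;                                  // monitored value
        DimService rateService("DABC/TEST/EVENT_RATE", eventRate);
        StateCommand stateCommand;

        DimServer::start("DABC_TEST");                      // register with the DIM name server

        for (;;) {                                          // dummy data source
            eventRate += 100;
            rateService.updateService();                    // push update to all subscribers
            sleep(1);
        }
        return 0;
    }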

References

[1] CBM Technical Status Report, GSI, Darmstadt, January 2005, p. 235
[2] H.G. Essel and N. Kurz, "Multi Branch System homepage", http://daq.gsi.de
[3] J. Gutleber and L. Orsini, "XDAQ framework", http://xdaqwiki.cern.ch/index.php/Main_Page
[4] C. Gaspar, "Distributed Information Management System DIM", http://dim.web.cern.ch/dim/
[5] D. Beck and H. Brand, "The CS framework", https://sourceforge.net/projects/cs-framework/

* Thanks to G. Marcus, H. Singpiel, and A. Kugel, Technische Informatik V, Universität Mannheim
