Athena Developer Guide
ATLAS
Athena
The ATLAS Common Framework
Developer Guide

Version: 2
Issue: 0
Edition: 2
Status: Draft
ID: 1
Date: 16 August 2001

DRAFT

European Laboratory for Particle Physics
Laboratoire Européen pour la Physique des Particules
CH-1211 Genève 23 - Suisse
Table of Contents

Chapter 1   Introduction
1.1 Purpose of the document
1.2 Athena and GAUDI
1.2.1 Document organization
1.3 Conventions
1.3.1 Units
1.3.2 Coding Conventions
1.3.3 Naming Conventions
1.3.4 Conventions of this document
1.4 Release Notes
1.5 Reporting Problems
1.6 User Feedback

Chapter 2   The framework architecture
2.1 Overview
2.2 Why architecture?
2.3 Data versus code
2.4 Main components

Chapter 3   Release notes
3.1 Overview
3.2 New Functionality
3.3 Changes that are not backwards compatible
3.4 Changed dependencies on external software
3.5 Bugs Fixed
3.6 Known Bugs

Chapter 4   Establishing a development environment
4.1 Overview
4.2 Establishing a login environment
4.2.1 Commands to establish a Bourne-shell or variant login environment
4.2.2 Commands to establish a C-shell or variant login environment
4.3 Using SRT to check out ATLAS software packages

Chapter 5   Writing algorithms
5.1 Overview
5.2 Algorithm base class
5.3 Derived algorithm classes
5.3.1 Creation (and algorithm factories)
5.3.2 Declaring properties
5.3.3 Implementing IAlgorithm
5.4 Nesting algorithms
5.5 Algorithm sequences, branches and filters
5.5.1 Filtering example

Chapter 6   Scripting
6.1 Overview
6.2 Python scripting service
6.3 Python overview
6.4 How to enable Python scripting
6.4.1 Using a Python script for configuration and control
6.4.2 Using a job options text file for configuration with a Python interactive shell
6.5 Prototype functionality
6.6 Property manipulation
6.7 Synchronization between Python and Athena
6.8 Controlling job execution

Chapter 7   Accessing data
7.1 Overview
7.2 Using the data stores
7.3 Using data objects
7.4 Object containers
7.5 Using object containers
7.6 Data access checklist
7.7 Defining new data types
7.8 The SmartDataPtr/SmartDataLocator utilities
7.8.1 Using SmartDataPtr/SmartDataLocator objects
7.9 Smart references and Smart reference vectors
7.10 Saving data to a persistent store

Chapter 8   StoreGate - the event data access model
8.1 Overview
8.2 The StoreGate design
8.2.1 System Features and Entities
8.2.2 Characteristics of the Transient Data Store
8.3 The StoreGate Prototype
8.4 Creating a DataObject
8.4.1 Creating a single DataObject
8.4.2 Creating a Collection of ContainedObjects
8.4.3 Creating a ContainedObject
8.5 Recording a DataObject
8.5.1 Recording DataObjects without keys
8.5.2 Recording DataObjects with keys
8.6 Retrieving a DataObject
8.6.1 Retrieving DataObjects without a key
8.6.2 Retrieving a keyed DataObject
8.6.3 Retrieving all DataObjects of a given type
8.7 Store Access Policy

Chapter 9   Data dictionary
9.1 Overview
9.1.1 Definition of Terms
9.1.2 Roles of a Data Dictionary (DD)
9.1.3 Implementation Strategy
9.1.4 Data Dictionary Language
9.1.5 Code Generation
9.1.6 Data Dictionary Design
9.1.7 Time Line & Milestones
9.1.8 Bibliography

Chapter 10   Detector Description
10.1 Overview

Chapter 11   Histogram facilities
11.1 Overview
11.2 The Histogram service
11.3 Using histograms and the histogram service
11.4 Persistent storage of histograms
11.4.1 HBOOK persistency
11.4.2 ROOT persistency

Chapter 12   N-tuple and Event Collection facilities
12.1 Overview
12.2 N-tuples and the N-tuple Service
12.2.1 Access to the N-tuple Service from an Algorithm
12.2.2 Using the N-tuple Service
12.3 Event Collections
12.3.1 Writing Event Collections
12.3.2 Reading Events using Event Collections
12.4 Interactive Analysis using N-tuples
12.4.1 HBOOK
12.4.2 ROOT

Chapter 13   Framework services
13.1 Overview
13.2 Requesting and accessing services
13.3 The Job Options Service
13.3.1 Algorithm, Tool and Service Properties
13.3.2 Job options file format
13.3.3 Example
13.4 The Standard Message Service
13.4.1 The MsgStream utility
13.5 The Particle Properties Service
13.5.1 Initialising and Accessing the Service
13.5.2 Service Properties
13.5.3 Service Interface
13.5.4 Examples
13.6 The Chrono & Stat service
13.6.1 Code profiling
13.6.2 Statistical monitoring
13.6.3 Chrono and Stat helper classes
13.6.4 Performance considerations
13.7 The Auditor Service
13.7.1 Enabling the Auditor Service and specifying the enabled Auditors
13.7.2 Overriding the default Algorithm monitoring
13.7.3 Implementing new Auditors
13.8 The Random Numbers Service
13.9 The Incident Service
13.9.1 Known Incidents
13.10 Developing new services
13.10.1 The Service base class
13.10.2 Implementation details

Chapter 14   Tools and ToolSvc
14.1 Overview
14.2 Tools and Services
14.2.1 “Private” and “Shared” Tools
14.2.2 The Tool classes
14.3 The ToolSvc
14.3.1 Retrieval of tools via the IToolSvc interface
14.4 GaudiTools
14.4.1 Associators

Chapter 15   Converters
15.1 Overview
15.2 Persistency converters
15.3 Collaborators in the conversion process
15.4 The conversion process
15.5 Converter implementation - general considerations
15.6 Storing Data using the ROOT I/O Engine
15.7 The Conversion from Transient Objects to ROOT Objects
15.7.1 Non Identifiable Objects
15.8 Storing Data using other I/O Engines

Chapter 16   Visualization
16.1 Overview

Chapter 17   Physical design issues
17.1 Overview
17.2 Accessing the kernel GAUDI framework
17.2.1 The GaudiInterface Package
17.3 Framework libraries
17.3.1 Component libraries
17.3.2 Linker Libraries
17.3.3 Dual purpose libraries and library strategy
17.3.4 Building the different types of libraries
17.3.5 Linking FORTRAN code

Chapter 18   Framework packages, interfaces and libraries
18.1 Overview
18.2 Gaudi Package Structure
18.2.1 Gaudi Package Layout
18.2.2 Packaging Guidelines
18.3 Interfaces in Gaudi
18.3.1 Interface ID
18.3.2 Query Interface
18.4 Libraries in Gaudi
18.4.1 Component libraries
18.4.2 Linker libraries
18.4.3 Library strategy and dual purpose libraries
18.4.4 Building and linking with the libraries
18.4.5 Linking FORTRAN code

Chapter 19   Analysis utilities
19.1 Overview
19.2 CLHEP
19.3 HTL
19.4 NAG C
19.5 ROOT

Appendix A   References

Appendix D   Installation guide
19.6 Overview
19.6.1 Acknowledgements
19.6.2 External Packages
19.6.3 Installing CLHEP
19.6.4 Installing HTL
19.6.5 Installing CERNLIB
19.6.6 Installing ROOT
19.6.7 Installing Qt
19.6.8 Installing CMT
19.6.9 Installing GAUDI
19.6.10 Installing the ATLAS release
Chapter 1
Introduction

1.1 Purpose of the document
This document is intended for developers of the Athena control framework. Athena is based upon the GAUDI architecture that was originally developed by LHCb, but which is now a joint development project. This document, together with other information about Athena, is available online at:

http://web1.cern.ch/Atlas/GROUPS/SOFTWARE/OO/architecture

This version of the Athena Developer Guide corresponds to Athena release 2.0.0. This is based upon ATLAS GAUDI version 0.7.2, which itself is based upon GAUDI version 7 with some patches.
1.2 Athena and GAUDI

As mentioned above, Athena is a control framework that represents a concrete implementation of an underlying architecture. The architecture describes the abstractions, or components, and how they interact with each other. The architecture underlying Athena is the GAUDI architecture originally developed by LHCb. This architecture has been extended through collaboration with ATLAS, and an experiment-neutral, or kernel, implementation, also called GAUDI, has been created. Athena is then the sum of this kernel framework together with ATLAS-specific enhancements. The latter include the event data model and the event generator framework.

The collaboration between LHCb and ATLAS is in the process of being extended to allow other experiments also to contribute new architectural concepts and concrete implementations to the kernel GAUDI framework. It is expected that implementations developed originally for a particular experiment will be adopted as generic and migrated into the kernel. This has already happened with, for example, the concepts of auditors, the sequencer, and the ROOT histogram and ntuple persistency service.

For the remainder of this document the name Athena is used to refer to the framework, and the name GAUDI is used to refer to the architecture upon which this framework is based.
1.2.1 Document organization

The document is organized as follows:

1.3 Conventions

1.3.1 Units

This section is blank for now.

1.3.2 Coding Conventions

This section is blank for now.

1.3.3 Naming Conventions

This section is blank for now.
1.3.4 Conventions of this document

Angle brackets are used in two contexts. To avoid confusion we outline the difference with an example.

The definition of a templated class uses angle brackets. These are required by the C++ syntax, so in the instantiation of a templated class the angle brackets are retained:

AlgFactory<EmptyAlgorithm> s_factory;

This is to be contrasted with the use of angle brackets to denote “replacement”, such as in the specification of the string:

“<concrete algorithm type>/<algorithm name>”

which implies that the string should look like:

“EmptyAlgorithm/Empty”

Hopefully what is intended will be clear from the context.
1.4 Release Notes

Although this document is kept as up to date as possible, Athena users should refer to the release notes that accompany each ATLAS software release for any information that is specific to that release. The release notes are kept in the offline/Control/ReleaseNotes.txt file.
1.5 Reporting Problems

Eventually ATLAS will use the Remedy bug reporting system for reporting and tracking of problems. Until this is available, users should report problems to the ATLAS Architecture mailing list at atlas-sw-architecture@atlas-lb.cern.ch.
1.6 User Feedback

Feedback on this Developer Guide, or any other aspects of the documentation for Athena, should also be sent to the ATLAS Architecture mailing list.
Chapter 2
The framework architecture
2.1 Overview

In this chapter we outline some of the main features of the Gaudi architecture. A more complete view of the architecture, along with a discussion of the main design choices and the reasons for them, may be found in reference [1].
2.2 Why architecture?<br />
The basic “requirement” of the physicists is a set of programs for doing event simulation,<br />
reconstruction, visualisation, etc. and a set of tools which facilitate the writing of analysis programs.<br />
Additionally a physicist wants something that is easy to use and (though he or she may claim otherwise)<br />
is extremely flexible. The purpose of the Gaudi application framework is to provide software which<br />
fulfils these requirements, but which additionally addresses a larger set of requirements, including the<br />
use of some of the software online.<br />
If the software is to be easy to use it must require a limited amount of learning on the part of the user. In<br />
particular, once learned there should be no need to re-learn just because technology has moved on (you<br />
do not need to re-take your licence every time you buy a new car). Thus one of the principal design<br />
goals was to insulate users (physicist developers and physicist analysts) from irrelevant details such as<br />
what software libraries we use for data I/O, or for graphics. We have done this by developing an<br />
architecture. An architecture consists of the specification of a number of components and their<br />
interactions with each other. A component is a “block” of software which has a well specified interface<br />
and functionality. An interface is a collection of methods along with a statement of what each method<br />
actually does, i.e. its functionality.<br />
We may summarise the main benefits we gain from this approach:<br />
Flexibility This approach gives flexibility because components may be plugged together in different<br />
ways to perform different tasks.<br />
Simplicity Software for using, for example, an object database is in general fairly complex and<br />
time-consuming to learn. Most of the detail is of little interest to someone who just wants to read data or store<br />
results. A “data access” component would have an interface which provided to the user only the<br />
required functionality. Additionally the interface would be the same independently of the underlying<br />
storage technology.<br />
Robustness As stated above a component can hide the underlying technology. As well as offering<br />
simplicity, this has the additional advantage that the underlying technology may be changed without the<br />
user even needing to know.<br />
It is intended that almost all software written by physicists, whether for event generation, reconstruction<br />
or analysis, will be in the form of specialisations of a few specific components. Here, specialisation<br />
means taking a standard component and adding to its functionality while keeping the interface the<br />
same. Within the application framework this is done by deriving new classes from one of the base<br />
classes:<br />
• DataObject<br />
• Algorithm<br />
• Converter<br />
In this chapter we will briefly consider the first two of these components and in particular the subject of<br />
the “separation” of data and algorithms. They will be covered in more depth in chapters 5 and 7. The<br />
third base class, Converter, exists more for technical necessity than anything else and will be discussed<br />
in Chapter 15. Following this we give a brief outline of the main components that a physicist developer<br />
will come into contact with.<br />
2.3 Data versus code<br />
Broadly speaking, tasks such as physics analysis and event reconstruction consist of the manipulation<br />
of mathematical or physical quantities: points, vectors, matrices, hits, momenta, etc., by algorithms<br />
which are generally specified in terms of equations and natural language. The mapping of this type of<br />
task into a programming language such as FORTRAN is very natural, since there is a very clear<br />
distinction between “data” and “code”. Data consists of variables such as:<br />
integer n<br />
real p(3)<br />
and code which may consist of a simple statement or a set of statements collected together into a<br />
function or procedure:<br />
real function innerProduct(p1, p2)<br />
real p1(3),p2(3)
innerProduct = p1(1)*p2(1) + p1(2)*p2(2) + p1(3)*p2(3)<br />
end<br />
Thus the physical and mathematical quantities map to data and the algorithms map to a collection of<br />
functions.<br />
A priori, we see no reason why moving to a language which supports the idea of objects, such as C++,<br />
should change the way we think of doing physics analysis. Thus the idea of having essentially<br />
mathematical objects such as vectors, points etc. and these being distinct from the more complex beasts<br />
which manipulate them, e.g. fitting algorithms etc. is still valid. This is the reason why the Gaudi<br />
application framework makes a clear distinction between “data” objects and “algorithm” objects.<br />
Anything which has as its origin a concept such as hit, point, vector, trajectory, i.e. a clear<br />
“quantity-like” entity should be implemented by deriving a class from the DataObject base class.<br />
On the other hand anything which is essentially a “procedure”, i.e. a set of rules for performing<br />
transformations on more data-like objects, or for creating new data-like objects should be designed as a<br />
class derived from the Algorithm base class.<br />
Furthermore, you should not have objects derived from DataObject performing long, complex<br />
algorithmic procedures. The intention is that these objects are “small”.<br />
Tracks which fit themselves are of course possible: you could have a constructor which took a list of<br />
hits as a parameter; but they are silly. Every track object would now have to contain all of the<br />
parameters used to perform the track fit, making it far from a simple object. Track-fitting is an<br />
algorithmic procedure; a track is probably best represented by a point and a vector, or perhaps a set of<br />
points and vectors. They are different.<br />
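This separation can be sketched in C++. Here Hit, Track and fitTrack are invented names used purely for illustration, not framework classes: the track is a small, quantity-like object, while the fitting procedure lives outside it, as an algorithm would in the framework.<br />

```cpp
#include <cassert>
#include <vector>

// A "data-like" object: small, quantity-like, no heavy algorithmic code.
struct Hit {
    double x, y;          // measured position of the hit
};

// Another data-like object: a track represented by a straight line.
struct Track {
    double intercept;     // y value at x = 0
    double slope;         // dy/dx
};

// The "algorithm-like" procedure lives outside the data object: a
// least-squares straight-line fit that consumes Hits and produces a Track.
Track fitTrack(const std::vector<Hit>& hits) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const double n = static_cast<double>(hits.size());
    for (const Hit& h : hits) {
        sx += h.x; sy += h.y; sxx += h.x * h.x; sxy += h.x * h.y;
    }
    const double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    const double intercept = (sy - slope * sx) / n;
    return Track{intercept, slope};
}
```

A track built this way stays a simple object; nothing about the fit (sums, iteration counts, cut values) needs to be stored inside it.<br />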
2.4 Main components<br />
The principal functionality of an algorithm is to take input data, manipulate it and produce new output<br />
data. Figure 2.1 shows how a concrete algorithm object interacts with the rest of the application<br />
framework to achieve this.<br />
The figure shows the four main services that algorithm objects use:<br />
• The event data store<br />
• The detector data store<br />
• The histogram service<br />
• The message service<br />
The particle property service is an example of additional services that are available to an algorithm. The<br />
job options service (see Chapter 13) is used by the Algorithm base class, but is not usually explicitly<br />
seen by a concrete algorithm.<br />
Figure 2.1 The main components of the framework as seen by an algorithm object.<br />
Each of these services is provided by a component and the use of these components is via an interface.<br />
The interface used by algorithm objects is shown in the figure, e.g. for both the event data and detector<br />
data stores it is the IDataProviderSvc interface. In general a component implements more than<br />
one interface. For example the event data store implements another interface: IDataManager which<br />
is used by the application manager to clear the store before a new event is read in.<br />
An algorithm’s access to data, whether the data is coming from or going to a persistent store or whether<br />
it is coming from or going to another algorithm is always via one of the data store components. The<br />
IDataProviderSvc interface allows algorithms to access data in the store and to add new data to<br />
the store. It is discussed further in Chapter 7 where we consider the data store components in more<br />
detail.<br />
The histogram service is another type of data store intended for the storage of histograms and other<br />
“statistical” objects, i.e. data objects with a lifetime of longer than a single event. Access is via the<br />
IHistogramSvc which is an extension to the IDataProviderSvc interface, and is discussed in<br />
Chapter 11. The n-tuple service is similar, with access via the INtupleSvc extension to the<br />
IDataProviderSvc interface, as discussed in Chapter 12.
In general an algorithm will be configurable: it will require certain parameters, such as cut-offs, upper<br />
limits on the number of iterations, convergence criteria, etc., to be initialised before the algorithm may<br />
be executed. These parameters may be specified at run time via the job options mechanism. This is<br />
done by the job options service. Though it is not explicitly shown in the figure this component makes<br />
use of the IProperty interface which is implemented by the Algorithm base class.<br />
During its execution an algorithm may wish to make reports on its progress or on errors that occur. All<br />
communication with the outside world should go through the message service component via the<br />
IMessageSvc interface. Use of this interface is discussed in Chapter 13.<br />
As mentioned above, by virtue of its derivation from the Algorithm base class, any concrete<br />
algorithm class implements the IAlgorithm and IProperty interfaces, except for the three<br />
methods initialize(), execute(), and finalize() which must be explicitly implemented<br />
by the concrete algorithm. IAlgorithm is used by the application manager to control top-level<br />
algorithms. IProperty is usually used only by the job options service.<br />
The figure also shows that a concrete algorithm may make use of additional objects internally to aid it<br />
in its function. These private objects do not need to inherit from any particular base class so long as they<br />
are only used internally. These objects are under the complete control of the algorithm object itself and<br />
so care is required to avoid memory leaks etc.<br />
We have used the terms “interface” and “implements” quite freely above. Let us be more explicit about<br />
what we mean. We use the term interface to describe a pure virtual C++ class, i.e. a class with no data<br />
members, and no implementation of the methods that it declares. For example:<br />
class PureAbstractClass {
  virtual void method1() = 0;
  virtual void method2() = 0;
};
is a pure abstract class or interface. We say that a class implements such an interface if it is derived<br />
from it, for example:<br />
class ConcreteComponent : public PureAbstractClass {
  void method1() { }
  void method2() { }
};
A component which implements more than one interface does so via multiple inheritance; however,<br />
since the interfaces are pure abstract classes the usual problems associated with multiple inheritance do<br />
not occur. These interfaces are identified by a unique number which is available via a global constant of<br />
the form: IID_InterfaceType, such as for example IID_IDataProviderSvc. Using these it<br />
is possible to enquire what interfaces a particular component implements (as shown for example<br />
through the use of queryInterface() in the finalize() method of the SimpleAnalysis<br />
example).<br />
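A minimal self-contained sketch of this query mechanism follows. Plain integers stand in for the IID_ constants, and a bool return value stands in for the StatusCode that the framework's queryInterface() actually returns; the class and interface names are invented for illustration.<br />

```cpp
#include <cassert>

// Stand-in interface identifiers (the framework uses IID_ constants).
const int IID_IFirst  = 1;
const int IID_ISecond = 2;

// Two pure abstract classes, i.e. interfaces.
struct IFirst  { virtual void doFirst()  = 0; virtual ~IFirst()  {} };
struct ISecond { virtual void doSecond() = 0; virtual ~ISecond() {} };

// A component implementing both interfaces via multiple inheritance.
class Component : public IFirst, public ISecond {
public:
    void doFirst()  override {}
    void doSecond() override {}

    // Returns true and sets *ppv if the component implements the
    // requested interface, mimicking the queryInterface() idea.
    bool queryInterface(int iid, void** ppv) {
        if (iid == IID_IFirst)  { *ppv = static_cast<IFirst*>(this);  return true; }
        if (iid == IID_ISecond) { *ppv = static_cast<ISecond*>(this); return true; }
        *ppv = nullptr;
        return false;
    }
};
```

Because the interfaces carry no data, the multiple inheritance here is harmless, which is the point made in the text above.<br />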
Within the framework every component, e.g. services and algorithms, has two qualities:<br />
• A concrete component class, e.g. TrackFinderAlgorithm or MessageSvc.<br />
• Its name, e.g. “KalmanFitAlgorithm” or “stdMessageService”.<br />
Chapter 3<br />
Release notes<br />
3.1 Overview<br />
These release notes identify changes since the previous release, focussing on new functionality,<br />
changes that are not backwards compatible, changes in external dependencies, and a brief summary of<br />
bugs that have been fixed, or are known to be outstanding.<br />
3.2 New Functionality<br />
3.3 Changes that are not backwards compatible<br />
3.4 Changed dependencies on external software<br />
1. ATLAS release 2.0.0 depends upon ATLAS GAUDI release 0.7.2.<br />
3.5 Bugs Fixed<br />
In general these should be referenced by the appropriate Remedy number, but this is not currently<br />
available.<br />
3.6 Known Bugs<br />
None.<br />
Chapter 4<br />
Establishing a development environment<br />
4.1 Overview<br />
This Chapter describes how to establish an environment to allow a developer to modify or create<br />
<strong>Athena</strong>-based applications. The details of this will depend upon the particular site, on whether the user<br />
is using a CERN computer, and whether AFS is available. Consult your local system administrator for<br />
details of how to log in and to create a minimal environment. What is described here are the appropriate<br />
setup procedures for a CERN user on a CERN machine.<br />
4.2 Establishing a login environment<br />
4.2.1 Commands to establish a Bourne shell or variant login environment<br />
The commands in Listing 4.1 establish a minimal login environment using the Bourne shell or variants<br />
(sh, bash, zsh, etc.) and should be entered into the .profile or .bash_profile (?) file.<br />
Listing 4.1 Bourne shell and variant commands to establish an ATLAS login environment<br />
export ATLAS_ROOT=/afs/cern.ch/atlas
export CVSROOT=:kserver:atlas-sw.cern.ch:/atlascvs
if [ "$PATH" != "" ]; then
    export PATH=${PATH}:$ATLAS_ROOT/software/bin
else
    export PATH=$ATLAS_ROOT/software/bin
fi
source `srt setup -s sh`
4.2.2 Commands to establish a C shell or variant login environment<br />
The commands in Listing 4.2 establish a minimal login environment using the C shell or variants (csh,<br />
tcsh, etc.) and should be entered into the .login file.<br />
Listing 4.2 C shell and variant commands to establish an ATLAS login environment<br />
setenv ATLAS_ROOT /afs/cern.ch/atlas
setenv CVSROOT :kserver:atlas-sw.cern.ch:/atlascvs
if ( $?PATH ) then
    setenv PATH ${PATH}:$ATLAS_ROOT/software/bin
else
    setenv PATH $ATLAS_ROOT/software/bin
endif
source `srt setup -s csh`
4.3 Using SRT to checkout ATLAS software packages<br />
ATLAS software is organized as a set of hierarchical packages, each package corresponding to a logical<br />
grouping of (typically) C++ classes. These packages are kept in a centralized code repository, managed<br />
by CVS [Ref]. Self-contained snapshots of the package hierarchy are created at frequent intervals, and<br />
executables and libraries are created from them. These snapshots are termed releases, and in many<br />
cases users can execute applications directly from a release of their choice. Each release is identified by<br />
a three-component identifier of the form ii.jj.kk (e.g. 1.3.2).<br />
Chapter 5<br />
Writing algorithms<br />
5.1 Overview<br />
As mentioned previously the framework makes use of the inheritance mechanism for specialising the<br />
Algorithm component. In other words, a concrete algorithm class must inherit from (“be derived from”<br />
in C++ parlance, “extend” in Java) the Algorithm base class.<br />
In this chapter we first look at the base class itself. We then discuss what is involved in creating<br />
concrete algorithms: specifically how to declare properties, what to put into the methods of the<br />
IAlgorithm interface, the use of private objects and how to nest algorithms. Finally we look at how to<br />
set up sequences of algorithms and how to control processing through the use of branches and filters.<br />
5.2 Algorithm base class<br />
Since a concrete algorithm object is-an Algorithm object it may use all of the public methods of the<br />
Algorithm base class. The base class has no protected methods or data members, and no public data<br />
members, so in fact, these are the only methods that are available. Most of these methods are in fact<br />
provided solely to make the implementation of derived algorithms easier. The base class has two main<br />
responsibilities: the initialization of certain internal pointers and the management of the properties of<br />
derived algorithm classes.<br />
A part of the Algorithm base class definition is shown in Listing 5.1. Include directives, forward<br />
declarations and private member variables have all been suppressed. It declares a constructor and<br />
destructor; the three key methods of the IAlgorithm interface; several accessors to services that a<br />
concrete algorithm will almost certainly require; a method to create a sub-algorithm; the two methods of<br />
the IProperty interface; and a whole series of methods for declaring properties.<br />
Listing 5.1 The definition of the Algorithm base class.<br />
1: class Algorithm : virtual public IAlgorithm,
2:                   virtual public IProperty {
3: public:
4:   // Constructor and destructor
5:   Algorithm( const std::string& name, ISvcLocator *svcloc );
6:   virtual ~Algorithm();
7:
8:   StatusCode sysInitialize();
9:   StatusCode initialize();
10:  StatusCode sysExecute();
11:  StatusCode execute();
12:  StatusCode sysFinalize();
13:  StatusCode finalize();
14:  const std::string& name() const;
15:
16:  virtual bool isExecuted() const;
17:  virtual StatusCode setExecuted( bool state );
18:  virtual StatusCode resetExecuted();
19:  virtual bool isEnabled() const;
20:  virtual bool filterPassed() const;
21:  virtual StatusCode setFilterPassed( bool state );
22:
23:  template <class Svc>
24:  StatusCode service( const std::string& svcName, Svc*& theSvc );
25:  IMessageSvc* msgSvc();
26:  void setOutputLevel( int level );
27:  IDataProviderSvc* eventSvc();
28:  IConversionSvc* eventCnvSvc();
29:  IDataProviderSvc* detSvc();
30:  IConversionSvc* detCnvSvc();
31:  IHistogramSvc* histoSvc();
32:  INtupleSvc* ntupleSvc();
33:  IChronoStatSvc* chronoSvc();
34:  IRndmGenSvc* randSvc();
35:  ISvcLocator* serviceLocator();
36:
37:  StatusCode createSubAlgorithm( const std::string& type,
38:                                 const std::string& name, Algorithm*& pSubAlg );
39:  std::vector<Algorithm*>* subAlgorithms() const;
40:
41:  virtual StatusCode setProperty(const Property& p);
42:  virtual StatusCode getProperty(Property* p) const;
43:  const Property& getProperty( const std::string& name) const;
44:  const std::vector<Property*>& getProperties() const;
45:  StatusCode setProperties();
46:
47:  StatusCode declareProperty(const std::string& name, int& reference);
48:  StatusCode declareProperty(const std::string& name, float& reference);
49:  StatusCode declareProperty(const std::string& name, double& reference);
50:  StatusCode declareProperty(const std::string& name, bool& reference);
51:  StatusCode declareProperty(const std::string& name,
52:                             std::string& reference);
53:  // Vectors of properties not shown
54: private:
55:  // Data members not shown
56:  Algorithm(const Algorithm& a);              // NO COPY ALLOWED
57:  Algorithm& operator=(const Algorithm& rhs); // NO ASSIGNMENT ALLOWED
58: };
Constructor and Destructor The base class has a single constructor which takes two arguments: the<br />
first is the name that will identify the algorithm object being instantiated and the second is a pointer to<br />
one of the interfaces implemented by the application manager: ISvcLocator. This interface may be<br />
used to request special services that an algorithm may wish to use, but which are not available via the<br />
standard accessor methods (below).<br />
The IAlgorithm interface Principally this consists of the three pure virtual methods that must be<br />
implemented by a derived algorithm: initialize(), execute() and finalize(). These are<br />
where the algorithm does its useful work, and are discussed in more detail in section 5.3. Other methods of<br />
the interface are the accessor name() which returns the algorithm’s identifying name, and<br />
sysInitialize(), sysFinalize(), sysExecute() which are used internally by the<br />
framework. The latter three methods are not virtual and may not be overridden.<br />
Service accessor methods Lines 23 to 35 declare accessor methods which return pointers to key<br />
service interfaces. These methods are available for use only after the Algorithm base class has been<br />
initialized, i.e. they may not be used from within a concrete algorithm constructor, but may be used<br />
from within the initialize() method (see 5.3.3). The services and interface types to which they<br />
point are self-explanatory (see also Chapter 2). Services may be located by name using the templated<br />
service() function in lines 23 and 24 or by using the serviceLocator() accessor method on<br />
line 35, as described in section 13.2 of Chapter 13. Line 26 declares a facility to modify the message<br />
output level from within the code (the message service is described in detail in section 13.4 of Chapter<br />
13).<br />
Creation of sub algorithms The methods on lines 37 to 39 are intended to be used by a derived class<br />
to manage sub-algorithms, as discussed in section 5.4.<br />
Declaration and setting of properties As mentioned above, one of the responsibilities of the base<br />
class is the management of properties. The methods in lines 41 to 45 are used by the framework to set<br />
properties as defined in the job options file. The declareProperty methods (lines 47 to 53) are<br />
intended to be used by a derived class to declare its properties. This is discussed in more detail in<br />
section 5.3.2 and in Chapter 13.<br />
Filtering The methods in lines 16 to 21 are used by sequencers and filters to access the state of the<br />
algorithm, as discussed in section 5.5.<br />
5.3 Derived algorithm classes<br />
In order for an algorithm object to do anything useful it must be specialised, i.e. it must extend (inherit<br />
from, be derived from) the Algorithm base class. In general it will be necessary to implement the<br />
methods of the IAlgorithm interface, and declare the algorithm’s properties to the property<br />
management machinery of the Algorithm base class. Additionally there is one non-obvious technical<br />
matter to cover, namely algorithm factories.<br />
5.3.1 Creation (and algorithm factories)<br />
As mentioned before, a concrete algorithm class must specify a single constructor with the same<br />
parameter signature as the constructor of the base class.<br />
In addition to this a concrete algorithm factory must be provided. This is a technical matter which<br />
permits the application manager to create new algorithm objects without having to include all of the<br />
concrete algorithm header files. From the point of view of an algorithm developer it implies adding two<br />
lines into the implementation file, of the form:<br />
static const AlgFactory<ConcreteAlgorithm> s_factory;
const IAlgFactory& ConcreteAlgorithmFactory = s_factory;
where “ConcreteAlgorithm” should be replaced by the name of the derived algorithm class (see<br />
for example lines 10 and 11 in Listing 5.2 below).<br />
5.3.2 Declaring properties<br />
In general a concrete algorithm class will have several data members which are used in the execution of<br />
the algorithm proper. These data members should of course be initialized in the constructor, but if this<br />
was the only mechanism available to set their value it would be necessary to recompile the code every<br />
time you wanted to run with different settings. In order to avoid this, the framework provides a<br />
mechanism for setting the values of member variables at run time.<br />
The mechanism comes in two parts: the declaration of properties and the setting of their values. As an<br />
example consider the class TriggerDecision in Listing 5.2 which has a number of variables whose<br />
value we would like to set at run time.<br />
The default values for the variables are set within the constructor (within an initialiser list) as per<br />
normal. To declare them as properties it suffices to call the declareProperty() method. This<br />
method is overloaded to take an std::string as the first parameter and a variety of different types<br />
for the second parameter. The first parameter is the name by which this member variable shall be<br />
referred to, and the second parameter is a reference to the member variable itself.<br />
In the example we associate the name “PassAllMode” to the member variable m_passAllMode,<br />
and the name “MuonCandidateCut” to m_muonCandidateCut. The first is of type boolean and<br />
the second an integer. If the job options service (described in Chapter 13) finds an option in the job<br />
options file belonging to this algorithm and whose name matches one of the names associated with a<br />
member variable, then that member variable will be set to the value specified in the job options file.
Listing 5.2 Declaring member variables as properties.<br />
1: //------- In the header file --------------------------------------//
2: class TriggerDecision : public Algorithm {
3:
4: private:
5:   bool m_passAllMode;
6:   int m_muonCandidateCut;
7:   std::vector<double> m_ECALEnergyCuts;
8: };
9: //------- In the implementation file -------------------------------//
10: static const AlgFactory<TriggerDecision> s_factory;
11: const IAlgFactory& TriggerDecisionFactory = s_factory;
12:
13: TriggerDecision::TriggerDecision(std::string name, ISvcLocator *pSL) :
14:   Algorithm(name, pSL), m_passAllMode(false), m_muonCandidateCut(0) {
15:   m_ECALEnergyCuts.push_back(0.0);
16:   m_ECALEnergyCuts.push_back(0.6);
17:
18:   declareProperty("PassAllMode", m_passAllMode);
19:   declareProperty("MuonCandidateCut", m_muonCandidateCut);
20:   declareProperty("ECALEnergyCuts", m_ECALEnergyCuts);
21: }
22:
23: StatusCode TriggerDecision::initialize() {
24:   return StatusCode::SUCCESS;
25: }
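The machinery behind declareProperty() can be pictured with a simplified stand-alone sketch. PropertyHolder, the int-only map and the bool return value are inventions of this example; the real base class manages typed Property objects and is driven by the job options service rather than by direct calls.<br />

```cpp
#include <cassert>
#include <map>
#include <string>

// Minimal sketch of the property mechanism: the base class keeps
// name -> address bindings so that values can be assigned at run time
// without recompilation.
class PropertyHolder {
    std::map<std::string, int*> m_intProps;
public:
    // Bind a property name to a member variable of the derived class.
    void declareProperty(const std::string& name, int& ref) {
        m_intProps[name] = &ref;
    }
    // Assign a value through the binding; false if the name is unknown.
    bool setProperty(const std::string& name, int value) {
        auto it = m_intProps.find(name);
        if (it == m_intProps.end()) return false;
        *(it->second) = value;
        return true;
    }
};

// A derived "algorithm" declaring one property in its constructor,
// echoing Listing 5.2.
class MyAlg : public PropertyHolder {
public:
    int m_cut = 0;
    MyAlg() { declareProperty("MuonCandidateCut", m_cut); }
};
```

Because the binding stores a reference to the member variable, a later setProperty("MuonCandidateCut", ...) call changes the algorithm's own data member directly, which is exactly the behaviour described for the job options file.<br />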
5.3.3 Implementing IAlgorithm<br />
In order to implement IAlgorithm you must implement its three pure virtual methods<br />
initialize(), execute() and finalize(). For a top level algorithm, i.e. one controlled<br />
directly by the application manager, the methods are invoked as is described in section 4.6. This<br />
dictates what it is useful to put into each of the methods.<br />
Initialization In a standard job the application manager will initialize all top level algorithms exactly<br />
once before reading any event data. It does this by invoking the sysInitialize() method of each<br />
top-level algorithm in turn, in which the framework takes care of setting up internal references to<br />
standard services and of setting the algorithm properties by calling the method setProperties(). This<br />
causes the job options service to make repeated calls to the setProperty() method of the<br />
algorithm, which actually assigns values to the member variables. Finally, sysInitialize() calls<br />
the initialize() method, which can be used to do such things as creating histograms, or creating<br />
sub-algorithms if required (see section 5.4). If an algorithm fails to initialize it should return<br />
StatusCode::FAILURE. This will cause the job to terminate.<br />
Figure 5.1 shows an example trace of the initialization phase. Sub-algorithms are discussed in section<br />
5.4.<br />
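The calling sequence can be illustrated with minimal stand-in classes. AlgBase, MyAlg and the string log are inventions of this sketch; the real sysInitialize() also sets up service references and returns a StatusCode rather than a bool.<br />

```cpp
#include <cassert>
#include <string>

// Sketch of the initialization sequence: the framework calls
// sysInitialize(), which sets the properties and only then calls the
// user-supplied initialize().
class AlgBase {
public:
    virtual ~AlgBase() {}
    bool sysInitialize() {
        setProperties();      // job options assign the member variables
        return initialize();  // user initialization (histograms, ...)
    }
    virtual bool initialize() = 0;
    std::string log;          // records the order of the calls
protected:
    void setProperties() { log += "setProperties;"; }
};

class MyAlg : public AlgBase {
public:
    bool initialize() override {
        log += "initialize;"; // properties are already valid here
        return true;
    }
};
```

The ordering matters: by the time the concrete initialize() runs, the property values from the job options file are already in place, which is why properties may be used there but not in the constructor.<br />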
Execution The guts of the algorithm class are in the execute() method. For top level algorithms this<br />
will be called once per event for each algorithm object in the order in which they were declared to the<br />
application manager. For sub-algorithms (see section 5.4) the control flow may be as you like: you may<br />
call the execute() method once, many times or not at all.<br />
Figure 5.1 Algorithm initialization.<br />
Just because an algorithm derives from the Algorithm base class does not mean that it is limited to<br />
using or overriding only the methods defined by the base class. In general, your code will be much<br />
better structured (i.e. understandable, maintainable, etc.) if you do not, for example, implement the<br />
execute() method as a single block of 100 lines, but instead define your own utility methods and<br />
classes to better structure the code.<br />
If an algorithm fails in some manner, e.g. a fit fails to converge, or its data is nonsense it should return<br />
from the execute() method with StatusCode::FAILURE. This will cause the application<br />
manager to stop processing events and end the job. This default behaviour can be modified by setting<br />
the algorithm's ErrorMax job option to something greater than 1. In this case a message will<br />
be printed, but the job will continue as if there had been no error, and just increment an error count. The<br />
job will only stop if the error count reaches the ErrorMax limit set in the job option.<br />
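The error-counting policy can be illustrated with a small self-contained sketch; the Status enum and the runJob() function are inventions of this example, not framework code.<br />

```cpp
#include <cassert>
#include <vector>

// Stand-in for an algorithm's per-event outcome.
enum class Status { Success, Failure };

// Returns the number of events actually processed before the job stops,
// given the per-event outcomes and the ErrorMax limit: each failure
// increments an error count, and the loop ends when the count reaches
// the limit.
int runJob(const std::vector<Status>& outcomes, int errorMax) {
    int errors = 0;
    int processed = 0;
    for (Status s : outcomes) {
        ++processed;
        if (s == Status::Failure && ++errors >= errorMax) break;
    }
    return processed;
}
```

With the default limit of 1 the first failure ends the job; raising the limit lets processing continue past isolated bad events.<br />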
The framework (the Algorithm base class) calls the execute() method within a try/catch clause.<br />
This means that any exception not handled in the execution of an Algorithm will be caught at the level<br />
of sysExecute() implemented in the base class. The behaviour on these exceptions is identical to<br />
that described above for errors.<br />
Finalization The finalize() method is called at the end of the job. It can be used to analyse<br />
statistics, fit histograms, or whatever you like. Similarly to initialization, the framework invokes a<br />
sysFinalize() method which in turn invokes the finalize() method of the algorithm and of<br />
any sub-algorithms.<br />
Monitoring of the execution (e.g. cpu usage) of each Algorithm instance is performed by auditors under<br />
control of the Auditor service (described in Chapter 13). This monitoring can be turned on or off with<br />
the boolean properties AuditInitialize, AuditExecute, AuditFinalize.<br />
The following is a list of things to do when implementing an algorithm.<br />
• Derive your algorithm from the Algorithm base class.<br />
• Provide the appropriate constructor and the three methods initialize(), execute()<br />
and finalize().<br />
• Make sure you have implemented a factory by adding the magic two lines of code (see 5.3.1).<br />
5.4 Nesting algorithms<br />
The application manager is responsible for initializing, executing once per event, and finalizing the set<br />
of top level algorithms, i.e. the set of algorithms specified in the job options file. However such a<br />
simple linear structure is very limiting. You may wish to execute some algorithms only for specific<br />
types of event, or you may wish to “loop” over an algorithm’s execute method. Within the <strong>Athena</strong><br />
application framework the way to have such control is via the nesting of algorithms or through<br />
algorithm sequences (described in section 5.5). A nested (or sub-) algorithm is one which is created by,<br />
and thus belongs to and is controlled by, another algorithm (its parent) as opposed to the application<br />
manager. In this section we discuss a number of points which are specific to sub-algorithms.<br />
<strong>Athena</strong> Chapter 5 Writing algorithms Version/Issue: 2.0.0<br />
In the first place, the parent algorithm will need a member variable of type Algorithm* (see the code<br />
fragment below) in which to store a pointer to the sub-algorithm.<br />
Algorithm* m_pSubAlgorithm; // Pointer to the sub-algorithm;
                            // must be a member variable of the parent class

std::string type; // Type (class name) of the sub-algorithm
std::string name; // Name to be given to the sub-algorithm
StatusCode sc;    // Status code returned by the call

sc = createSubAlgorithm(type, name, m_pSubAlgorithm);
The sub-algorithm itself is created by invoking the createSubAlgorithm() method of the<br />
Algorithm base class. The parameters passed are the type of the algorithm, its name and a reference<br />
to the pointer which will be set to point to the newly created sub-algorithm. Note that the name passed<br />
into the createSubAlgorithm() method is the same name that should be used within the job<br />
options file for specifying algorithm properties.<br />
The algorithm type (i.e. class name) string is used by the application manager to decide which factory<br />
should create the algorithm object.<br />
The execution of the sub-algorithm is entirely the responsibility of the parent algorithm whereas the<br />
initialize() and finalize() methods are invoked automatically by the framework as shown<br />
in Figure 5.1. Similarly the properties of a sub-algorithm are also automatically set by the framework.<br />
Note that the createSubAlgorithm() method returns a pointer to an Algorithm object, not an<br />
IAlgorithm interface. This means that you have access to the methods of both the IAlgorithm<br />
and IProperty interfaces, and consequently as well as being able to call execute() etc. you can<br />
also explicitly call the setProperty(Property&) method of the sub-algorithm, as is done in the<br />
following code fragment. For this reason with nested algorithms you are not restricted to calling<br />
setProperty() only at initialization. You may also change the properties of a sub-algorithm during<br />
the main event loop.<br />
Algorithm* m_pSubAlgorithm;
sc = createSubAlgorithm(type, name, m_pSubAlgorithm);
IntegerProperty p("Counter", 1024);
m_pSubAlgorithm->setProperty(p);
Note also that the vector of pointers to the sub-algorithms is available via the subAlgorithms()<br />
method.<br />
5.5 Algorithm sequences, branches and filters<br />
A physics application may wish to execute different algorithms depending on the physics signature of<br />
each event, which might be determined at run-time as a result of some reconstruction. This capability is<br />
supported in <strong>Athena</strong> through sequences, branches and filters. A sequence is a list of Algorithms. Each
Algorithm may make a filter decision, based on some characteristics of the event, which can either<br />
allow or bypass processing of the downstream algorithms in the sequence. The filter decision may also<br />
cause a branch whereby a different downstream sequence of Algorithms will be executed for events<br />
that pass the filter decision relative to those that fail it. Eventually the particular set of sequences, filters<br />
and branches might be used to determine which of multiple output destinations each event is written to<br />
(if at all). This capability is not yet implemented but is planned for a future release of <strong>Athena</strong>.<br />
A Sequencer class is available in the GaudiAlg package; it manages algorithm sequences using filtering and branching protocols that are implemented in the Algorithm class itself. The list
of Algorithms in a Sequencer is specified through the Members property. Algorithms can call<br />
setFilterPassed( true/false ) during their execute() function. Algorithms in the<br />
membership list downstream of one that sets this flag to false will not be executed, unless the<br />
StopOverride property of the Sequencer has been set, or the filtering algorithm itself is of type<br />
Sequencer and its BranchMembers property specifies a branch with downstream members. Please<br />
note that, if a sub-algorithm is of type Sequencer, the parent algorithm must call the<br />
resetExecuted() method of the sub-algorithm before calling the execute() method, otherwise<br />
the sequence will only be executed once in the lifetime of the job!<br />
An algorithm instance is executed only once per event, even if it appears in multiple sequences. It may<br />
also be enabled or disabled, being enabled by default. This is controlled by the Enable property.<br />
Enabling and disabling of algorithm instances is a capability that is designed for use with the Python<br />
interactive scripting language support within <strong>Athena</strong>.<br />
The filter passed or failed logic for a particular Algorithm instance in a sequence may be inverted by<br />
specifying the :invert optional flag in the Members list for the Sequencer in the job options file.<br />
A Sequencer reports filter success if either its main or its branch member list succeeds. The two cases may be distinguished using the Sequencer's branchFilterPassed() boolean function: if it returns true, the branch filter was passed; otherwise both it and the main sequence indicated failure.
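The filtering and branching rules of this section can be modelled schematically. The code below is a behavioural sketch with guessed simplifications, not the real GaudiAlg Sequencer: each member's filter decision is represented as a callable, the ":invert" flag as a boolean, and the first branch member is shared with the main list as described above.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Schematic model of sequence filtering and branching.
struct Member {
    std::string name;
    std::function<bool()> filter; // stands in for setFilterPassed() in execute()
    bool inverted;                // the ":invert" flag from the Members list
};

struct Sequencer {
    std::vector<Member> members;
    std::vector<Member> branchMembers; // first entry shared with `members`
    bool stopOverride = false;
    bool branchFilterPassed = false;

    // Returns the overall filter result: success if either the main or the
    // branch member list runs to completion.
    bool execute(std::vector<std::string>& executed) {
        branchFilterPassed = false;
        for (size_t i = 0; i < members.size(); ++i) {
            executed.push_back(members[i].name);
            bool passed = members[i].filter();
            if (members[i].inverted) passed = !passed;
            if (!passed && !stopOverride) {
                // Filter failed: take the branch, if one is defined. Its
                // first member is the filter itself, so skip it here.
                if (!branchMembers.empty()) {
                    for (size_t j = 1; j < branchMembers.size(); ++j) {
                        executed.push_back(branchMembers[j].name);
                        bool p = branchMembers[j].filter();
                        if (branchMembers[j].inverted) p = !p;
                        if (!p) return false;
                    }
                    branchFilterPassed = true;
                    return true;
                }
                return false;
            }
        }
        return true;
    }
};
```

With a failing Prescaler and a branch containing Counter2, the model executes HelloWorld and Prescaler from the main list, then falls through to Counter2 in the branch, reporting success with branchFilterPassed set, as described above.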
The following examples illustrate the use of sequences with filtering and branching.<br />
5.5.1 Filtering example<br />
Listing 5.3a and Listing 5.3b show extracts of the job options file and Python script of the<br />
AlgSequencer example: a Sequencer instance is created (line 2) with two members (line 5); each<br />
member is itself a Sequencer, implementing the sequences set up in lines 7 and 8, which consist of<br />
Prescaler, EventCounter and HelloWorld algorithms. The StopOverride property of the<br />
TopSequence is set to true, which causes both sequences to be executed, even if the first one<br />
indicates a filter failure.<br />
The Prescaler and EventCounter classes are example algorithms distributed with the<br />
GaudiAlg package. The Prescaler class acts as a filter, passing the fraction of events specified by<br />
the PercentPass property (as a percentage). The EventCounter class just prints each event as it<br />
is encountered, and summarizes at the end of job how many events were seen. Thus at the end of job,<br />
page 31
<strong>Athena</strong> Chapter 5 Writing algorithms Version/Issue: 2.0.0<br />
the Counter1 instance will report seeing 50% of the events, while the Counter2 instance will<br />
report seeing 10%.<br />
Note that the same instance of the HelloWorld class appears in both sequences. It will be executed in Sequence1 if Prescaler1 passes the event. It will be executed in Sequence2 only if Prescaler2 passes the event and Prescaler1 failed it, since an algorithm instance is executed at most once per event.
Listing 5.3a Example job options using Sequencers demonstrating filtering<br />
1: ApplicationMgr.DLLs += { "GaudiAlg" };<br />
2: ApplicationMgr.TopAlg = { "Sequencer/TopSequence" };<br />
3:<br />
4: // Setup the next level sequencers and their members<br />
5: TopSequence.Members = {"Sequencer/Sequence1", "Sequencer/Sequence2"};<br />
6: TopSequence.StopOverride = true;<br />
7: Sequence1.Members = {"Prescaler/Prescaler1", "HelloWorld",<br />
"EventCounter/Counter1"};<br />
8: Sequence2.Members = {"Prescaler/Prescaler2", "HelloWorld",<br />
"EventCounter/Counter2"};<br />
9:<br />
10: Prescaler1.PercentPass = 50.;<br />
11: Prescaler2.PercentPass = 10.;<br />
Listing 5.3b Example Python script using Sequencers demonstrating filtering<br />
1: theApp.DLLs = [ "GaudiAlg" ]<br />
2: theApp.TopAlg = [ "Sequencer/TopSequence" ]<br />
3: TopSequence = Algorithm( "TopSequence" )<br />
4:<br />
5: # Setup the next level sequencers and their members<br />
6: TopSequence.Members = [ "Sequencer/Sequence1", "Sequencer/Sequence2"]<br />
7: Sequence1 = Algorithm( "Sequence1" )<br />
8: Sequence2 = Algorithm( "Sequence2" )<br />
9: TopSequence.StopOverride = 1
10: Sequence1.Members = [ "Prescaler/Prescaler1", "HelloWorld",<br />
"EventCounter/Counter1" ]<br />
11: Prescaler1 = Algorithm( "Prescaler1" )<br />
12: Counter1 = Algorithm( "Counter1" )<br />
13: Sequence2.Members = [ "Prescaler/Prescaler2", "HelloWorld",<br />
"EventCounter/Counter2" ]<br />
14: Prescaler2 = Algorithm( "Prescaler2" )<br />
15: Counter2 = Algorithm( "Counter2" )<br />
16:<br />
17: Prescaler1.PercentPass = 50.<br />
18: Prescaler2.PercentPass = 10.<br />
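The counter percentages quoted for this example can be checked with a back-of-the-envelope simulation. Because StopOverride is set on TopSequence, both sequences run for every event, so Counter1 sees the events accepted by Prescaler1 (50%) and Counter2 those accepted by Prescaler2 (10%). The deterministic prescales below are a stand-in for the real Prescaler algorithm's selection.

```cpp
#include <cassert>

// Counts seen by the two EventCounter instances of the filtering example.
struct Counts { int counter1 = 0; int counter2 = 0; };

Counts simulate(int nEvents) {
    Counts c;
    for (int evt = 1; evt <= nEvents; ++evt) {
        // Sequence1: Prescaler1 (50%) -> HelloWorld -> Counter1
        if (evt % 2 == 0) ++c.counter1;
        // Sequence2 runs regardless, because StopOverride is set:
        // Prescaler2 (10%) -> HelloWorld -> Counter2
        if (evt % 10 == 0) ++c.counter2;
    }
    return c;
}
```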
5.5.2 Sequence branching
Listing 5.4a and Listing 5.4b illustrate the use of explicit branching. The BranchMembers property<br />
of the Sequencer specifies some algorithms to be executed if the algorithm that is the first member of<br />
the branch (which is common to both the main and branch membership lists) indicates a filter failure. In
this example the EventCounter instance Counter1 will report seeing 80% of the events, whereas<br />
Counter2 will report seeing 20%.<br />
Listing 5.4a Example job options using Sequencers demonstrating branching<br />
1: ApplicationMgr.DLLs += { "GaudiAlg" };<br />
2: ApplicationMgr.TopAlg = { "Sequencer" };<br />
3:<br />
4: // Setup the next level sequencers and their members<br />
5: Sequencer.Members = {"HelloWorld", "Prescaler",<br />
"EventCounter/Counter1"};<br />
6: Sequencer.BranchMembers = {"Prescaler", "EventCounter/Counter2"};<br />
7:<br />
8: Prescaler.PercentPass = 80.;<br />
Listing 5.4b Example Python script using Sequencers demonstrating branching
1: theApp.DLLs = [ "GaudiAlg" ]<br />
2: theApp.TopAlg = [ "Sequencer" ]<br />
3: Sequencer = Algorithm( "Sequencer" )<br />
4:<br />
5: # Setup the next level sequencers and their members<br />
6: Sequencer.Members = [ "HelloWorld", "Prescaler",<br />
"EventCounter/Counter1" ]<br />
7: HelloWorld = Algorithm( "HelloWorld" )<br />
8: Prescaler = Algorithm( "Prescaler" )<br />
9: Counter1 = Algorithm( "Counter1" )<br />
10: Sequencer.BranchMembers = [ "Prescaler", "EventCounter/Counter2"]<br />
11: Counter2 = Algorithm( "Counter2" )
12:<br />
13: Prescaler.PercentPass = 80.<br />
Listing 5.5a and Listing 5.5b illustrate the use of inverted logic. They achieve the same goal as the<br />
example in Listing 5.4a and Listing 5.4b through use of two sequences with the same instance of a<br />
Prescaler filter, but where the second sequence contains inverted logic for the single instance.<br />
Listing 5.5a Example job options using Sequencers demonstrating inverted logic<br />
1: ApplicationMgr.DLLs += { "GaudiAlg" };<br />
2: ApplicationMgr.TopAlg = { "Sequencer/Seq1", "Sequencer/Seq2" };<br />
3:<br />
4: // Setup the next level sequencers and their members<br />
5: Seq1.Members = {"HelloWorld", "Prescaler", "EventCounter/Counter1"};<br />
6: Seq2.Members = {"HelloWorld", "Prescaler:invert",<br />
"EventCounter/Counter2"};<br />
7:<br />
8: Prescaler.PercentPass = 80.;<br />
Listing 5.5b Example Python script using Sequencers demonstrating inverted logic<br />
1: theApp.DLLs = [ "GaudiAlg" ]<br />
2: theApp.TopAlg = [ "Sequencer/Seq1", "Sequencer/Seq2" ]<br />
3: Seq1 = Algorithm( "Seq1" )<br />
4: Seq2 = Algorithm( "Seq2" )<br />
5:<br />
6: # Setup the next level sequencers and their members<br />
7: Seq1.Members = ["HelloWorld", "Prescaler", "EventCounter/Counter1"]<br />
8: Seq2.Members = ["HelloWorld", "Prescaler:invert",<br />
"EventCounter/Counter2"]<br />
9: HelloWorld = Algorithm( "HelloWorld" )<br />
10: Prescaler = Algorithm( "Prescaler" )<br />
11: Counter1 = Algorithm( "Counter1" )<br />
12: Counter2 = Algorithm( "Counter2" )<br />
13:<br />
14: Prescaler.PercentPass = 80.<br />
Chapter 6<br />
Scripting<br />
6.1 Overview<br />
Athena scripting support is available in prototype form. The functionality is likely to change rapidly, so users should check the latest release notes for changes or new functionality that might not be documented here.
6.2 Python scripting service<br />
In keeping with the design philosophy of <strong>Athena</strong> and the underlying GAUDI architecture, scripting is<br />
defined by an abstract scripting service interface, with the possibility of there being several different<br />
implementations. A prototype implementation based upon the Python [4] scripting language is available. Python itself will not be described in detail here; only a brief overview is presented.
6.3 Python overview<br />
This section is in preparation.<br />
6.4 How to enable Python scripting<br />
Two different mechanisms are available for enabling Python scripting.<br />
1. Replace the job options text file by a Python script that is specified on the command line.<br />
2. Use a job options text file which hands control over to the Python shell once the initial<br />
configuration has been established.<br />
6.4.1 Using a Python script for configuration and control<br />
The necessity for using a job options text file for configuration can be avoided by specifying a Python<br />
script as a command line argument as shown in Listing 6.1.<br />
Listing 6.1 Using a Python script for job configuration<br />
athena MyPythonScript.py [1]<br />
Notes:<br />
1. The file extension .py identifies the job options file as a Python script. All other extensions are assumed to be job options text files.
This approach may be used in two modes. The first uses such a script to establish the configuration, but<br />
results in the job being left at the Python shell prompt. This supports interactive sessions. The second<br />
specifies a complete configuration and control sequence and thus supports a batch style of processing.<br />
The particular mode is controlled by the presence or absence of <strong>Athena</strong>-specific Python commands<br />
described in Section 6.8.<br />
6.4.2 Using a job options text file for configuration with a Python interactive shell<br />
Python scripting is enabled when using a job options text file for job configuration by adding the lines<br />
shown in Listing 6.2 to the job options file.<br />
Listing 6.2 Job Options text file entries to enable Python scripting<br />
ApplicationMgr.DLLs += { "SIPython" }; [1]<br />
ApplicationMgr.ExtSvc += { "PythonScriptingSvc/ScriptingSvc" }; [2]<br />
Notes:
1. This entry specifies the component library that implements Python scripting. Care should be<br />
taken to use the “+=” syntax in order not to overwrite other component libraries that might be<br />
specified elsewhere.<br />
2. This entry specifies the Python scripting implementation of the abstract Scripting service. As<br />
with the previous line, care should be taken to use the “+=” syntax in order not to override<br />
other services that might be specified elsewhere.<br />
Once the initial configuration has been established by the job options text file, control will be handed<br />
over to the Python shell.<br />
It is possible to specify a specific job options configuration file at the command line as shown in Listing<br />
6.3.<br />
Listing 6.3 Specifying a job options file for application execution<br />
athena [job options file] [1]<br />
Notes:<br />
1. The job options text file command line argument is optional. The file jobOptions.txt is<br />
assumed by default.<br />
2. The file extension .py is used to identify the job options file as a Python script. All other<br />
extensions are assumed to be job options text files. The use of a Python script for<br />
configuration and control is described in Section 6.4.1.<br />
6.5 Prototype functionality<br />
The functionality of the prototype is limited to the following capabilities; the list will grow as new capabilities are added:
1. The ability to read and store basic Properties for framework components (Algorithms,<br />
Services, Auditors) and the main ApplicationMgr that controls the application. Basic<br />
properties are basic type data members (int, float, etc.) or SimpleProperties of the components<br />
that are declared as Properties via the declareProperty() function.<br />
2. The ability to retrieve and store individual elements of array properties.<br />
3. The ability to specify a new set of top level Algorithms.<br />
4. The ability to add new services and component libraries and access their capabilities.
5. The ability to specify a new set of members or branch members for Sequencer algorithms.<br />
6. The ability to specify a new set of output streams.<br />
7. The ability to specify a new set of "AcceptAlgs", "RequireAlgs", or "VetoAlgs" properties for<br />
output streams.<br />
6.6 Property manipulation<br />
An illustration of the use of the scripting language to display and set component properties is shown in<br />
Listing 6.4:<br />
Listing 6.4 Property manipulation from the Python interactive shell<br />
>>> Algorithm.names [1][2]
('TopSequence', 'Sequence1', 'Sequence2')<br />
>>> Service.names [3]<br />
('MessageSvc', 'JobOptionsSvc', 'EventDataSvc', 'EventPersistencySvc',<br />
'DetectorDataSvc', 'DetectorPersistencySvc', 'HistogramDataSvc',<br />
'NTupleSvc', 'IncidentSvc', 'ToolSvc', 'HistogramPersistencySvc',<br />
'ParticlePropertySvc', 'ChronoStatSvc', 'RndmGenSvc', 'AuditorSvc',<br />
'ScriptingSvc', 'RndmGenSvc.Engine')<br />
>>> TopSequence.properties [4]<br />
{'ErrorCount': 0, 'OutputLevel': 0, 'BranchMembers': [],<br />
'AuditExecute': 1, 'AuditInitialize': 0, 'Members':<br />
['Sequencer/Sequence1', 'Sequencer/Sequence2'], 'StopOverride': 1,<br />
'Enable': 1, 'AuditFinalize': 0, 'ErrorMax': 1}<br />
>>> TopSequence.OutputLevel [5]<br />
'OutputLevel': 0<br />
>>> TopSequence.OutputLevel=1 [6]<br />
>>> TopSequence.Members=['Sequencer/NewSeq1', 'Sequencer/NewSeq1'] [7]<br />
>>> TopSequence.properties<br />
{'ErrorCount': 0, 'OutputLevel': 1, 'BranchMembers': [],<br />
'AuditExecute': 1, 'AuditInitialize': 0, 'Members':<br />
['Sequencer/NewSeq1', 'Sequencer/NewSeq1'], 'StopOverride': 1,<br />
'Enable': 1, 'AuditFinalize': 0, 'ErrorMax': 1}<br />
>>> theApp.properties [8]<br />
{'JobOptionsType': 'FILE', 'EvtMax': 100, 'DetDbLocation': 'empty',<br />
'Dlls': ['HbookCnv', 'SI_Python'], 'DetDbRootName': 'empty',<br />
'JobOptionsPath': 'jobOptions.txt', 'OutStream': [],<br />
'HistogramPersistency': 'HBOOK', 'EvtSel': 'NONE', 'ExtSvc':<br />
['PythonScriptingSvc/ScriptingSvc'], 'DetStorageType': 0, 'TopAlg':<br />
['Sequencer/TopSequence']}<br />
>>><br />
Notes:<br />
1. The ">>>" is the Python shell prompt.<br />
2. The set of existing Algorithms is given by the Algorithm.names command.<br />
3. The set of existing Services is given by the Service.names command.
4. The values of the properties for an Algorithm or Service may be displayed using the <name>.properties command, where <name> is the name of the desired Algorithm or Service.
5. The value of a single Property may be displayed (or used in a Python expression) using the <name>.<property> syntax, where <name> is the name of the desired Algorithm or Service, and <property> is the name of the desired Property.
6. Single valued properties (e.g. IntegerProperty) may be set using an assignment<br />
statement. Boolean properties use integer values of 0 (or FALSE) and 1 (or TRUE). Strings are enclosed in single-quote (') or double-quote (") characters.
7. Multi-valued properties (e.g. StringArrayProperty) are set using "[...]" as the array<br />
delimiters.<br />
8. The theApp object corresponds to the ApplicationMgr and may be used to access its<br />
properties.<br />
6.7 Synchronization between Python and <strong>Athena</strong><br />
It is possible to create new Algorithms or Services as a result of a scripting command. Examples of this<br />
are shown in Listing 6.5:<br />
Listing 6.5 Examples of Python commands that create new Algorithms or Services<br />
>>> theApp.ExtSvc = [ "ANewService" ]<br />
>>> theApp.TopAlg = [ "TopSequencer/Sequencer" ]<br />
If the specified Algorithm or Service already exists then its properties can immediately be accessed.
However, in the prototype the properties of newly created objects cannot be accessed until an<br />
equivalent Python object is also created. This restriction will be removed in a future release.<br />
This synchronization mechanism for creation of Python Algorithms and Services is illustrated in<br />
Listing 6.6:<br />
Listing 6.6 Examples of Python commands that create new Algorithms or Services<br />
>>> theApp.ExtSvc = [ "ANewService" ]<br />
>>> ANewService = Service( "ANewService" ) [1]<br />
>>> theApp.TopAlg = [ "TopSequencer/Sequencer" ]<br />
>>> TopSequencer = Algorithm( "TopSequencer" ) [2]<br />
>>> TopSequencer.properties<br />
Notes:<br />
1. This creates a new Python object of type Service, having the same name as the newly created Athena Service.
2. This creates a new Python object of type Algorithm, having the same name as the newly<br />
created <strong>Athena</strong> Algorithm.<br />
The Python commands that might require a subsequent synchronization are shown in Listing 6.7:<br />
Listing 6.7 Examples of Python commands that might create new Algorithms or Services<br />
theApp.ExtSvc = [...]<br />
theApp.TopAlg = [...]<br />
Sequencer.Members = [...]<br />
Sequencer.BranchMembers = [...]<br />
OutStream.AcceptAlgs = [...]<br />
OutStream.RequireAlgs = [...]<br />
OutStream.VetoAlgs = [...]<br />
6.8 Controlling job execution<br />
Control of job execution is very limited in the prototype; it will be replaced in a future release by the ability to call functions on the Python objects corresponding to the ApplicationMgr (theApp), Algorithms, and Services.
In the prototype, control is returned from the Python shell to the <strong>Athena</strong> environment by the command<br />
in Listing 6.8:<br />
Listing 6.8 Python command to resume <strong>Athena</strong> execution<br />
>>> theApp.Go [1]<br />
Notes:<br />
1. This is a temporary command that will be replaced in a future release by a more flexible<br />
ability to access more functions of the ApplicationMgr.<br />
This will cause the currently configured event loop to be executed, after which control will be returned<br />
to the Python shell.<br />
Typing Ctrl-D (holding down the Ctrl key while striking the D key) at the Python shell prompt will<br />
cause an orderly termination of the job. Alternatively, the command shown in Listing 6.9 will also
cause an orderly application termination.<br />
Listing 6.9 Python command to terminate <strong>Athena</strong> execution<br />
>>> theApp.Exit [1]<br />
This command, used in conjunction with the theApp.Go command, can be used to execute a Python<br />
script in batch rather than interactive mode. This provides equivalent functionality to a job options text<br />
file, but using the Python syntax. An example of such a batch Python script is shown in Listing 6.10:<br />
Listing 6.10 Python batch script<br />
>>> theApp.TopAlg = [ "HelloWorld" ]<br />
[other configuration commands]<br />
>>> theApp.Go<br />
>>> theApp.Exit<br />
Chapter 7<br />
Accessing data<br />
7.1 Overview<br />
The data stores are a key component in the application framework. All data which comes from<br />
persistent storage, or which is transferred between algorithms, or which is to be made persistent must<br />
reside within a data store. In this chapter we use a trivial event data model to look at how to access data<br />
within the stores, and also at the DataObject base class and some container classes related to it.<br />
We also cover how to define your own data types and the steps necessary to save newly created objects<br />
to disk files. The writing of the converters necessary for the latter is covered in Chapter 15.<br />
7.2 Using the data stores<br />
There are four data stores currently implemented within the
framework: the event data store, the detector data store, the histogram store and the n-tuple store. Event<br />
data is the subject of this chapter. The other data stores are described in chapters 10, 11 and 12<br />
respectively. The stores themselves are no more than logical constructs with the actual access to the<br />
data being via the corresponding services. Both the event data service and the detector data service<br />
implement the same IDataProviderSvc interface, which can be used by algorithms to retrieve and<br />
store data. The histogram and n-tuple services implement extended versions of this interface<br />
(IHistogramSvc, INTupleSvc) which offer methods for creating and manipulating histograms<br />
and n-tuples, in addition to the data access methods provided by the other two stores.<br />
Only objects of a type derived from the DataObject base class may be placed directly within a data<br />
store. Within the store the objects are arranged in a tree structure, just like a Unix file system. As an<br />
example consider Figure 7.1, which shows the trivial transient event data model of the RootIO example. An object is identified by its position in the tree, expressed as a string such as "/Event" or "/Event/MyTracks". In principle the structure of the tree, i.e. the set of all valid paths, may be
deduced at run time by making repeated queries to the event data service, but this is unlikely to be<br />
useful in general since the structure will be largely fixed.<br />
Figure 7.1 The structure of the event data model of the RootIO example: the root node "/Event" (an Event object) with a single child "/Event/MyTracks" (an ObjectVector).
All interactions with the data stores should be via the IDataProviderSvc interface. The key<br />
methods for this interface are shown in Listing 7.2.<br />
Listing 7.2 Some of the key methods of the IDataProviderSvc interface.<br />
StatusCode findObject(const std::string& path, DataObject*& pObject);<br />
StatusCode findObject(DataObject* node, const std::string& path,<br />
DataObject*& pObject);<br />
StatusCode retrieveObject(const std::string& path, DataObject*& pObject);<br />
StatusCode retrieveObject(DataObject* node, const std::string& path,<br />
DataObject*& pObject);<br />
StatusCode registerObject(const std::string& fullPath, DataObject* pObject);
StatusCode registerObject(const std::string& parentPath, const std::string& objPath, DataObject* pObject);
The first four methods are for retrieving a pointer to an object that is already in the store. How the<br />
object got into the store, whether it has been read in from a persistent store or added to the store by an<br />
algorithm, is irrelevant.<br />
The find and retrieve methods come in two versions: one version uses a full path name as an<br />
object identifier, the other takes a pointer to a previously retrieved object and the name of the object to<br />
look for below that node in the tree.<br />
Additionally the find and retrieve methods differ in one important respect: the find method will<br />
look in the store to see if the object is present (i.e. in memory) and if it is not will return a null pointer.<br />
The retrieve method, however, will attempt to load the object from a persistent store (database or<br />
file) if it is not found in memory. Only if it is not found in the persistent data store will the method<br />
return a null pointer (and a bad status code of course).<br />
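The find/retrieve distinction can be sketched with a toy in-memory store. This is a hypothetical model, not the real IDataProviderSvc (which returns a StatusCode and uses an out-parameter): find() only looks in memory, while retrieve() falls back to the persistent store and loads the object on demand.

```cpp
#include <cassert>
#include <map>
#include <string>

// Toy stand-in for a stored object.
struct DataObject { std::string payload; };

class Store {
public:
    // find: in-memory lookup only; null pointer if the object is not loaded.
    DataObject* findObject(const std::string& path) {
        auto it = m_memory.find(path);
        return it == m_memory.end() ? nullptr : &it->second;
    }
    // retrieve: try memory first, then attempt to load from "disk".
    DataObject* retrieveObject(const std::string& path) {
        if (DataObject* p = findObject(path)) return p;
        auto it = m_disk.find(path);
        if (it == m_disk.end()) return nullptr; // not persistent either
        m_memory[path] = it->second;            // loaded into the store
        return &m_memory[path];
    }
    // Stand-in for the persistent back-end.
    void putOnDisk(const std::string& path, const DataObject& o) { m_disk[path] = o; }
private:
    std::map<std::string, DataObject> m_memory;
    std::map<std::string, DataObject> m_disk;
};
```

Here a first findObject() on a path that exists only on disk returns null, a retrieveObject() loads it, and a subsequent findObject() then succeeds, matching the behaviour described above.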
7.3 Using data objects<br />
Whatever the concrete type of the object you have retrieved from the store, the pointer you hold is a pointer to a DataObject, so before you can do anything useful with the object you must cast it to the correct type, for example:
1: typedef ObjectVector<MyTrack> MyTrackVector;
2: DataObject* pObject;
3:
4: StatusCode sc = eventSvc()->retrieveObject("/Event/MyTracks", pObject);
5: if( sc.isFailure() )
6:   return sc;
7:
8: MyTrackVector* tv = 0;
9: try {
10:   tv = dynamic_cast<MyTrackVector*>(pObject);
11: } catch(...) {
12:   // Print out an error message and return
13: }
14: // tv may now be manipulated.
14: // tv may now be manipulated.<br />
The typedef on line 1 is just to save typing: in what follows we will use the two syntaxes interchangeably. After the dynamic_cast on line 10 all of the methods of the MyTrackVector class become available. If the object returned from the store does not match the type to which you try to cast it, the cast fails: a pointer cast yields a null pointer, while a reference cast throws an exception. An exception not handled in your own code will be caught by the algorithm base class, and the program will stop, probably with an obscure message. A more elegant way to retrieve the data involves the use of smart pointers; this is discussed in Section 7.8.
As mentioned earlier a certain amount of run-time investigation may be done into what data is available<br />
in the store. For example, suppose that we have various sets of testbeam data and each data set was<br />
taken with a different number of detectors. If the raw data is saved on a per-detector basis the number of<br />
sets will vary. The code fragment in Listing 7.3 illustrates how an algorithm may loop over the data sets<br />
without knowing a priori how many there are.<br />
Listing 7.3 Code fragment for accessing an object from the store<br />
1: std::string objectPath = "/Event/RawData";
2: DataObject* pObject;
3: StatusCode sc;
4:
5: sc = eventSvc()->retrieveObject(objectPath, pObject);
6:
7: IDataDirectory* dir = pObject->directory();
8: IDataDirectory::DirIterator it;
9: for(it = dir->begin(); it != dir->end(); it++) {
10:
11:   DataObject* pDo;
12:   sc = eventSvc()->retrieveObject(pObject, (*it)->localPath(), pDo);
13:
14:   // Do something with pDo
15: }
The last two methods shown in Listing 7.2 are for registering objects into the store. Suppose that an<br />
algorithm creates objects of type UDO from, say, objects of type MyTrack and wishes to place these<br />
into the store for use by other algorithms. Code to do this might look something like:<br />
Listing 7.4 Registering of objects into the event data store
1: UDO* pO; // Pointer to an object of type UDO (derived from DataObject)
2: StatusCode sc;
3:
4: pO = new UDO;
5: sc = eventSvc()->registerObject("/Event/tmp", "OK", pO);
6:
7: // THE NEXT LINE IS AN ERROR: THE OBJECT NOW BELONGS TO THE STORE
8: delete pO;
9:
10: UDO autopO;
11: // ERROR: AUTOMATIC (STACK-ALLOCATED) OBJECTS MAY NOT BE REGISTERED
12: sc = eventSvc()->registerObject("/Event/tmp", "notOK", &autopO);
Once an object is registered into the store, the algorithm which created it relinquishes ownership. In<br />
other words the object should not be deleted. This is also true for objects which are contained within<br />
other objects, such as those derived from or instantiated from the ObjectVector class (see the<br />
following section). Furthermore objects which are to be registered into the store must be created on the<br />
heap, i.e. they must be created with the new operator.<br />
7.4 Object containers<br />
As mentioned before, all objects which can be placed directly within one of the stores must be derived<br />
from the DataObject class. There is, however, another (indirect) way to store objects within a store.<br />
This is by putting a set of objects (themselves not derived from DataObject and thus not directly<br />
storable) into an object which is derived from DataObject and which may thus be registered into a<br />
store.<br />
An object container base class is implemented within the framework, and further templated object container classes may be implemented in the future. For the moment, two "concrete" container classes are implemented: ObjectVector<T> and ObjectList<T>. These classes are based upon the STL vector and list classes and provide largely the same interface. Unlike the STL containers, which are essentially designed to hold objects by value, the container classes within the framework hold only pointers to objects, thus avoiding a lot of memory-to-memory copying.
A further difference with the STL containers is that the type T cannot be anything you like. It must be a<br />
type derived from the ContainedObject base class, see Figure 7.1. In this way all “contained”<br />
objects have a pointer back to their containing object. This is required, in particular, by the converters<br />
for dealing with links between objects. A ramification of this is that container objects may not contain<br />
other container objects (without the use of multiple inheritance).<br />
Figure 7.1 The relationship between the DataObject, ObjectVector and ContainedObject classes.
As mentioned above, objects which are contained within one of these container objects may not be<br />
located, or registered, individually within the store. Only the container object may be located via a call<br />
to findObject() or retrieveObject(). Thus with regard to interaction with the data stores a<br />
container object and the objects that it contains behave as a single object.<br />
The intention is that “small” objects such as clusters, hits, tracks, etc. are derived from the<br />
ContainedObject base class and that in general algorithms will take object containers as their input<br />
data and produce new object containers of a different type as their output.<br />
The reason behind this is essentially one of optimization. If all objects were treated on an equal footing,<br />
then there would be many more accesses to the persistent store to retrieve very small objects. By<br />
grouping objects together like this we are able to have fewer accesses, with each access retrieving<br />
bigger objects.<br />
7.5 Using object containers<br />
The code fragment below shows the creation of an object container. This container can hold pointers to objects of type MyTrack, and only to objects of this type (including derived types). An object of the required type is created on the heap (i.e. via a call to new) and is added to the container with the standard STL call.
ObjectVector<MyTrack> trackContainer;
MyTrack* h1 = new MyTrack;
trackContainer.push_back(h1);
After the call to push_back() the MyTrack object "belongs" to the container. If the container is registered into the store, the tracks that it contains will go with it. Note in particular that if you delete the container you will also delete its contents, i.e. all of the objects pointed to by the pointers in the container.
Removing an object from a container may be done in two semantically different ways: on removal from the container the object may either be deleted as well, or merely removed and left alive. Removal with deletion may be achieved in several ways (following the previous code fragment):
trackContainer.pop_back();
trackContainer.erase( trackContainer.end() - 1 );
delete h1;
The method pop_back() removes the last element in the container, whereas erase() may be used to remove any element via an iterator. In the code fragment above it is also used to remove the last element.
Deleting a contained object, the third option above, will automatically trigger its removal from the<br />
container. This is done by the destructor of the ContainedObject base class.<br />
If you wish to remove an object from the container without destroying it (the second possible semantic)<br />
use the release() method:
trackContainer.release(h1);<br />
Since the fate of a contained object is so closely tied to that of its container, life would become more complex if objects could belong to more than one container. Suppose that an object belonged to two
containers, one of which was deleted. Should the object be deleted and removed from the second<br />
container, or not deleted? To avoid such issues an object is allowed to belong to a single container only.<br />
If you wish to move an object from one container to another, you must first remove it from one and then<br />
add to the other. However, the first operation is done implicitly for you when you try to add an object to<br />
a second container:<br />
container1.push_back(h1); // Add to first container
container2.push_back(h1); // Move to second container;
                          // internally invokes release().
Since the object h1 has a link back to its container, the push_back() method is able to first follow<br />
this link and invoke the release() method to remove the object from the first container, before<br />
adding it into the second.<br />
In general your first exposure to object containers is likely to be when retrieving data from the event<br />
data store. The sample code in Listing 7.5 shows how, once you have retrieved an object container from<br />
the store you may iterate over its contents, just as with an STL vector.<br />
Listing 7.5 Use of the ObjectVector templated class.<br />
1: typedef ObjectVector<MyTrack> MyTrackVector;
2: MyTrackVector *tracks;<br />
3: MyTrackVector::iterator it;<br />
4:<br />
5: for( it = tracks->begin(); it != tracks->end(); it++ ) {<br />
6: // Get the energy of the track and histogram it<br />
7: double energy = (*it)->fourMomentum().e();<br />
8: m_hEnergyDist->fill( energy, 1. );<br />
9: }<br />
The variable tracks is set to point to an object of type ObjectVector<MyTrack> in the event data store, using a dynamic cast (not shown above). An iterator (i.e. a pointer-like
object for looping over the contents of the container) is defined on line 3 and this is used within the loop<br />
to point consecutively to each of the contained objects. In this case the objects contained within the<br />
ObjectVector are of type “pointer to MyTrack”. The iterator returns each object in turn and in the<br />
example, the energy of the object is used to fill a histogram.<br />
7.6 Data access checklist<br />
A little reminder:<br />
• Do not delete objects that you have registered.<br />
• Do not delete objects that are contained within an object that you have registered.<br />
• Do not register local objects, i.e. objects NOT created with the new operator.<br />
• Do not delete objects which you got from the store via findObject() or<br />
retrieveObject().<br />
• Do delete objects which you create on the heap, i.e. by a call to new, and which you do not<br />
register into a store.<br />
7.7 Defining new data types<br />
Most of the data types used within Athena will be shared by everybody and thus packaged and documented centrally. However, for your own private development work you may wish to create objects of your own types, which of course you can always do in C++ (or Java). However, if you wish to place these objects within a store, either to pass them between algorithms or to have them later saved into a database or file, then you must derive your type from either the DataObject or ContainedObject base class.
Consider the example below:<br />
const static CLID CLID_UDO = 135; // Collaboration-wide unique number
class UDO : public DataObject {
public:
  UDO() : DataObject(), m_n(0) {
  }
  static const CLID& classID() { return CLID_UDO; }
  virtual const CLID& clID() const { return classID(); }
  int n() const { return m_n; }
  void setN(int n) { m_n = n; }
private:
  int m_n;
};
This defines a class UDO which, since it derives from DataObject, may be registered into, say, the event data store. (The class itself is not very useful, as its sole attribute is a single integer and it has no behaviour.)
The thing to note here is that if the appropriate converter is supplied, as discussed in Chapter 15, then<br />
this class may also be saved into a persistent store (e.g. a ROOT file or an Objectivity database) and<br />
read back at a later date. In order for the persistency to work the following are required: the unique class<br />
identifier number (CLID_UDO in the example), and the clID() and classID() methods which<br />
return this identifier.<br />
The procedure for allocating unique class identifiers is, for the time being, experiment specific.<br />
Types which are derived from ContainedObject are implemented in the same way, and must have<br />
a CLID in the range of an unsigned short. Contained objects may only reside in the store when<br />
they belong to a container, e.g. an ObjectVector which is registered into the store. The class<br />
identifier of a concrete object container class is calculated (at run time) from the type of the objects<br />
which it contains, by setting bit 16. The static classID() method is required because the container<br />
may be empty.<br />
7.8 The SmartDataPtr/SmartDataLocator utilities<br />
The usage of the data services is simple, but extensive status checking and other things tend to make the<br />
code difficult to read. It would be more convenient to access data items in the store in a similar way to<br />
accessing objects with a C++ pointer. This is achieved with smart pointers, which hide the internals of<br />
the data services.<br />
7.8.1 Using SmartDataPtr/SmartDataLocator objects<br />
The SmartDataPtr and a SmartDataLocator are smart pointers that differ by the access to the<br />
data store. SmartDataPtr first checks whether the requested object is present in the transient store<br />
and loads it if necessary (similar to the retrieveObject method of IDataProviderSvc).<br />
SmartDataLocator only checks for the presence of the object but does not attempt to load it<br />
(similar to findObject).<br />
Both SmartDataPtr and SmartDataLocator objects use the data service to get hold of the<br />
requested object and deliver it to the user. Since both objects have similar behaviour and the same user<br />
interface, in the following only the SmartDataPtr is discussed.<br />
An example use of the SmartDataPtr class is shown below.
Listing 7.6 Use of a SmartDataPtr object.
1: StatusCode myAlgo::execute() {
2:   MsgStream log(msgSvc(), name());
3:   SmartDataPtr<Event> evt(eventSvc(), "/Event");
4:   if ( evt ) {
5:     // Print the event number
6:     log << MSG::INFO << "Event " << evt->event() << endreq;
7:   } else {
8:     log << MSG::ERROR << "Could not retrieve /Event" << endreq;
9:     return StatusCode::FAILURE;
10:  }
11:  return StatusCode::SUCCESS;
12: }
Smart references and smart reference vectors are declared inside a class as:
#include "GaudiKernel/SmartRef.h"
#include "GaudiKernel/SmartRefVector.h"
class MCParticle {
private:
  /// Smart reference to the origin vertex
  SmartRef<MCVertex> m_originMCVertex;
  /// Vector of smart references to the decay vertices
  SmartRefVector<MCVertex> m_decayMCVertices;
public:
  /// Access the origin vertex.
  /// Note: when the smart reference is converted to MCVertex* the object
  /// will be loaded from the persistent medium.
  MCVertex* originMCVertex() { return m_originMCVertex; }
};
The usage syntax of smart references is identical to that of plain C++ pointers. The algorithm only sees a pointer to the MCVertex object:
#include "GaudiKernel/SmartDataPtr.h"
// Use a SmartDataPtr to get the MC particles from the event store
// (MCParticleVector is a container of MCParticle objects,
// e.g. an ObjectVector<MCParticle>)
SmartDataPtr<MCParticleVector> particles(eventSvc(), "/Event/MC/MCParticles");
MCParticleVector::const_iterator iter;
// Loop over the particles to access the MCVertex via the SmartRef
for( iter = particles->begin(); iter != particles->end(); iter++ ) {
  MCVertex* originVtx = (*iter)->originMCVertex();
  if( 0 != originVtx ) {
    std::cout << "Origin vertex found" << std::endl;
  }
}
• Put some instructions (i.e. options) into the job option file (see Listing 7.7).
• Register your object in the store as usual, typically in the execute() method of your algorithm.
// myAlg implementation file
StatusCode myAlg::execute() {
  // Create a UDO object and register it into the event data store
  UDO* p = new UDO();
  eventSvc()->registerObject("/Event/myStuff/myUDO", p);
  return StatusCode::SUCCESS;
}
In order to actually trigger the conversion and saving of the objects at the end of the current event<br />
processing it is necessary to inform the application manager. This requires some options to be specified<br />
in the job options file:<br />
Listing 7.7 Job options for output to persistent storage
ApplicationMgr.OutStream        = { "DstWriter" };
DstWriter.ItemList              = { "/Event#1", "/Event/MyTracks#1" };
DstWriter.EvtDataSvc            = "EventDataSvc";
DstWriter.Output                = "DATAFILE='result.root' TYP='ROOT'";
ApplicationMgr.DLLs             += { "DbConverters", "RootDb" };
ApplicationMgr.ExtSvc           += { "DbEventCnvSvc/RootEvtCnvSvc" };
EventPersistencySvc.CnvServices += { "RootEvtCnvSvc" };
RootEvtCnvSvc.DbType            = "ROOT";
The first option tells the application manager that you wish to create an output stream called<br />
“DstWriter”. You may create as many output streams as you like and give them whatever name you<br />
prefer.<br />
For each output stream object which you create you must set several properties. The ItemList option<br />
specifies the list of paths to the objects which you wish to write to this output stream. The number after<br />
the “#” symbol denotes the number of directory levels below the specified path which should be<br />
traversed. The (optional) EvtDataSvc option specifies in which transient data service the output<br />
stream should search for the objects in the ItemList, the default is the standard transient event data<br />
service EventDataSvc. The Output option specifies the name of the output data file and the type of<br />
persistency technology, ROOT in this example. The last three options are needed to tell the Application<br />
manager to instantiate the RootEvtCnvSvc and to associate the ROOT persistency type to this<br />
service.<br />
An example of saving data to a ROOT persistent data store is available in the RootIO example<br />
distributed with the framework.<br />
<strong>Athena</strong><br />
Chapter 8 StoreGate - the event data access model Version/Issue: 2.0.0<br />
Chapter 8<br />
StoreGate - the event data access model<br />
8.1 Overview<br />
The event data access model (EDM) is a crucial element of the overall infrastructure that defines the<br />
management and use of DataObjects in its transient state. Based on the experience in using the Gaudi<br />
infrastructure and subsequent discussions within the EDM working group, we are developing the<br />
StoreGate to satisfy the ATLAS requirements outlined in the next section. We have implemented and<br />
tested some of the proposed design features stressing on the interface to the client's access to Data<br />
Objects while currently maintaining the underlying Gaudi infrastructure. This Chapter describes some<br />
of the proposed elements of the StoreGate design and provides detailed documentation on the use of the<br />
prototype.<br />
8.2 The StoreGate design<br />
The ATLAS software architecture belongs to the blackboard family: data objects produced by<br />
knowledge objects (e.g. reconstruction modules) are posted to a common in-memory database from<br />
where other modules can access them and produce new data objects. This model greatly reduces the<br />
coupling between knowledge objects containing the algorithmic code for analysis and reconstruction. A<br />
knowledge object does not need to know which specific module can produce the information it needs,<br />
nor which protocol it must use to obtain it. Algorithmic code is known to be the least stable component<br />
of HEP software systems and the blackboard approach has been very effective at reducing the impact of<br />
this instability, from the Zebra system of the Fortran days to the InfoBus Java components architecture.<br />
The trade-off for the data/knowledge objects separation is that knowledge objects have to identify data<br />
objects they want to post or retrieve from the blackboard. It is crucial to develop a data model optimized<br />
for the required access patterns and yet flexible enough to accommodate the unexpected ones. Starting<br />
from the recent Event Data Model workshops and discussions as well as from the experience of other<br />
HENP systems such as BaBar, CDF, CLEO, D0, we identified some requirements that would not be<br />
readily satisfied using the existing Gaudi Event Data Model:<br />
• Identify (collections of) DataObjects based on their type.<br />
• Identify (collections of) DataObjects based on the identifier of the Algorithm which added<br />
them to the Transient Data Store (TDS).<br />
• A scheme that allows developers optionally to define key classes tailored to their DataObjects<br />
and to use the keys to store and retrieve the objects from the TDS.<br />
• A well-defined and supported access control policy: as will be discussed later on, objects<br />
stored into the TDS will be “almost read-only” with the adoption of a lock mechanism.<br />
• A mechanism to hide as much as possible the details of the TDS access from the algorithmic<br />
code, using the standard C++ iterator and/or pointer syntax.<br />
• The same mechanism should be used to express associations among Data Objects.<br />
• A mechanism to group related DataObjects into a developer-defined view that provides a<br />
high-level alternative access scheme to the store.<br />
• A mechanism to define a flexible “cache-fault policy” (that is to say a way to create an object<br />
requested by the user and not yet in the TDS). This should include the functionality to<br />
reconstruct a DataObject on demand.<br />
8.2.1 System Features and Entities<br />
A StoreGate will allow algorithmic code to interact with the TDS. When an Algorithm invokes the StoreGate retrieve method, it is returned a DataHandle which points to the desired DataObject in the Transient Data Store. The DataHandle defines the interface for accessing the DataObjects retrieved. From the user's perspective a DataHandle behaves as a C++ pointer/iterator, and its first implementation is indeed a C++ const pointer. It will have to evolve into a more complex object to satisfy some of the requirements mentioned above. A complete implementation of the DataHandle should include:
• Begin/end methods providing iterators over contained objects,<br />
• Enforcement of an almost-const access policy, whereby the DataHandle checks, upon<br />
dereference, if the client is authorized to modify a DataObject in the TDS.<br />
• An update method used (presumably by the TDS) to mark the current Data Object the<br />
DataHandle refers to as “obsolete” when, for example, a new event is read in.<br />
• Support for persistable object associations (e.g. track associatedHits),<br />
• The ability for the client to view the data in different ways.<br />
• A user-defined Key can be used while recording/retrieving the DataObject. In the prototype,<br />
the key object can be a simple string passed to the StoreGate record method or a more<br />
complex object that defines the necessary operators to return a string. Behind the scenes, the<br />
string is added to the Event Data Service object “pathname” thus providing the backward<br />
Gaudi compatibility.
• A user-defined Selector can be provided that can either select amongst several DataObjects of<br />
the same type OR select on the ContainedObjects of a DataObject (provided the DataObject is<br />
a collection of ContainedObjects).<br />
• A virtual Proxy mechanism is used to represent Data Object instances that are not yet in the<br />
TDS. The Proxy will define the procedure to create these instances, either by reading them<br />
from persistent storage or by reconstructing them on demand. The Proxy can also be used, in<br />
conjunction with the DataHandle, to implement, if required, complex access control policies<br />
for the Data Objects.<br />
8.2.2 Characteristics of the Transient Data Store<br />
All objects in the transient data store inherit from a common base class (DataObject). The TDS supports<br />
storage of either collections (of several contained objects) or single objects. It is, however, assumed that<br />
in most instances, clients will store collections in the Transient Store; for example a collection of Track<br />
Objects (TrackCollection) or a collection of electron candidates. A collection contains objects of only<br />
one type. However, several types of objects can inherit from a base-type and be stored in the collection<br />
as the base-type. Multiple instances of the same collection or objects can be recorded with the TDS.<br />
The TDS contains DataObjects that are either persistable or required for communicating between<br />
Algorithms. An almost const-access policy will be established for accessing data objects in the<br />
Transient Store. The DataObject recorded to the TDS can be modified until locked by the client. Once<br />
locked, it becomes a read-only DataObject. Refer to Section 8.7 for details on how clients are expected to adhere to the Store Access Policy.
Each collection or object in the TDS will be associated with a History object. The History object will<br />
contain specific information on which Algorithms created the object and the configuration of these as<br />
defined by their Properties and other environmental information such as calibration and alignment<br />
information. The purpose of the History object is two-fold: it will precisely define how the object was<br />
created such that it is reproducible at a later stage, and the client can select on the basis of the<br />
information in the History Object. An example of the latter is as follows:<br />
“Clusters in the EM calorimeter are found in two different ways in the reconstruction<br />
process, using a cone algorithm and a nearest neighbour algorithm. In each instance,<br />
cluster objects are stored in a ClusterCollection. Hence there will be two<br />
ClusterCollections (same type) in the TDS, each simply reflecting a different algorithm.
Since the History Object carries this information, a downstream client will be able to
request EM clusters made using a specific algorithm."
Tagging an object in the TDS by a downstream client will be forbidden. An example of a tag might be to mark an object as used, for the convenience of a single client, or to associate it with objects produced at a later time in the reconstruction chain (so-called forward pointing of objects). That is, a "hit object" cannot be explicitly tagged as used, nor modified to reflect which track (constructed later in time) it is associated with. This is simply because the same hit collection may be used by other clients for whom such tagging is irrelevant. However, tagging or forward pointing of DataObjects must be supported in other ways, such as with association objects.
8.3 The StoreGate Prototype<br />
The StoreGate defines the interface through which the algorithmic code will access the Transient Data<br />
Store (TDS). The prototype StoreGate extends the existing Gaudi TDS, providing the following extra<br />
functionalities:<br />
• Type-safe access to DataObjects.<br />
• Keyed access to DataObjects.<br />
• Controlled access to DataObjects<br />
• Simultaneous access to all DataObjects of a given type in the TDS<br />
• Since in the medium term the StoreGate may replace or be merged with the existing Gaudi EventDataService, it is important to maintain compatibility with the current interface. In the prototype, a new Athena service, called StoreGateSvc, provides templated methods to record and retrieve Data Objects by type. It is implemented over the existing Gaudi EventDataService.
Additional functionality, such as AutoHandles, user-defined data views and smart inter-object relationships, as described in the StoreGate Design Document [5], is not yet available. This document will be updated as and when such functionality is introduced.
• DataObjects in the TDS are organized by type. A DataObject can be assigned a client-defined key when it is recorded. When an object is recorded with StoreGate, the TDS becomes the owner of the data object: the client must not delete it. See section 8.5 for more information.
• For multiple objects of the same type recorded in the TDS, the key (if one is assigned) must be unique; that is, the same key cannot be assigned to two objects of the same type in the TDS. Note that the key is optional: you need not assign a key while recording.
• There are several ways to retrieve data objects from the TDS. Either a single DataObject (the<br />
last object entered in the store) or a Keyed DataObject or a list of DataObjects of the same<br />
type can be retrieved from the TDS. See the section on how to retrieve DataObjects from the<br />
Transient Data Store for more details.<br />
• The granularity of the object registered in the TDS is a DataObject. The DataObject can be a<br />
Collection of ContainedObjects or a single Object.<br />
The StoreGate WriteData example demonstrates how to create a DataObject (MyDataObj) and record it<br />
in the TDS with and without a key. It records 3 data objects of the same type, one with a key. In<br />
addition, it also shows how to create a collection of contained objects (MyContObj) and record it in the<br />
TDS. The ReadData algorithm example shows how to retrieve these DataObjects in different ways<br />
from the TDS.
8.4 Creating a DataObject<br />
The DataObject to be recorded in the TDS may either be a single object or a Collection. For the latter<br />
case, the collection consists of several ContainedObjects. The following examples show how to create a<br />
DataObject (single or Collection) and a ContainedObject.<br />
8.4.1 Creating a single DataObject<br />
Listing 8.1 is an example of how to create a single DataObject called "MyDataObj".
This code sample and the following ones are taken from the ReadData and WriteData example algorithms of the StoreGate package, available in the Atlas repository at offline/Control/StoreGate/example.
• The single object to be recorded to the TDS, MyDataObj, must inherit from DataObject.<br />
• It must also define a static constant Class ID, which must be unique for each class.<br />
• The following DataObject has a single data member (m_dat). Note that it provides an accessor<br />
method val() to the data member. It is important that these methods are declared “const” (as<br />
shown below) to allow access to these data members when the Store Access Policy comes into<br />
effect. Const methods are methods that do not change the Object but simply allow access to<br />
the information in the object. Thus these methods are in effect “readonly” methods.<br />
Listing 8.1 Example DataObject
#include "Gaudi/Kernel/DataObject.h"

const static CLID CLID_MDO = 214222;

class MyDataObj : public DataObject {
public:
  MyDataObj() : DataObject() { }
  virtual ~MyDataObj() { }
  static const CLID& classID() { return CLID_MDO; }
  virtual const CLID& clID() const { return CLID_MDO; }
  int val() const { return m_dat; }
  void val(int i) { m_dat = i; }
private:
  int m_dat;
};
8.4.2 Creating a Collection of ContainedObjects<br />
Listing 8.2 shows an example of how to create a DataObject called "MyDataObj" that is a collection of "MyContObj" objects, which inherit from ContainedObject. The ContainedObject in this example is "MyContObj"; an example is shown in Section 8.4.3.
• The DataObject to be recorded to the TDS, MyDataObj, must inherit from a templated collection base class that Gaudi provides. There are two such base classes available: ObjectVector<T> and ObjectList<T>. Both are templated in the type of the ContainedObject (in this case "MyContObj").
• As with the single object, your collection DataObject must define a static constant Class ID,<br />
which must be unique for each class.<br />
• Note that the DataObject itself may have data members, in addition to being a collection of<br />
ContainedObjects. Listing 8.2 shows how to create a MyDataObj which has a single data<br />
member (m_dat). It provides a const accessor method val() which returns the data member. It<br />
is important that these methods are declared “const” (as shown below) to allow access to these<br />
data members when the Store Access Policy comes into effect. Const methods are methods<br />
that do not change the Object but simply allow access to the information in the object. Thus<br />
these methods are in effect “readonly” methods.<br />
• See Section 8.6.1 on retrieving data objects for how to access the contained objects within a<br />
collection.<br />
Listing 8.2 Example Collection of Contained Objects<br />
#include "Gaudi/Kernel/DataObject.h"<br />
#include "Gaudi/Kernel/ObjectVector.h"<br />
class MyContObj;<br />
static const CLID CLID_MDO = 214234;<br />
class MyDataObj : public ObjectVector<MyContObj> {<br />
public:<br />
  MyDataObj() : ObjectVector<MyContObj>() { }<br />
  virtual ~MyDataObj() { }<br />
  static const CLID& classID() { return CLID_MDO; }<br />
  virtual const CLID& clID() const { return CLID_MDO; }<br />
  int val() const { return m_dat; }<br />
  void val(int i) { m_dat = i; }<br />
private:<br />
  int m_dat;<br />
};<br />
8.4.3 Creating a ContainedObject<br />
In the previous sub-section, we created a DataObject (“MyDataObj”) that was a collection of<br />
ContainedObjects (“MyContObj”). The following example shows you how to create the<br />
ContainedObject. Note that the ContainedObject is never directly inserted into the TDS, but must be<br />
“pushed back” into the Collection “MyDataObj”.<br />
• The ContainedObject shown in Listing 8.3 has two data members, time and ID; a set method<br />
(which is non-const and initializes the data members); and const accessor methods time() and<br />
id() for the data members. As indicated earlier, declare all accessor methods as “const”.<br />
• Unlike “MyDataObj”, it inherits from ContainedObject. Like “MyDataObj”, it also<br />
provides a static const class ID and methods to retrieve it.<br />
Listing 8.3 Example ContainedObject<br />
// MyContObj.h<br />
#include "Gaudi/Kernel/ContainedObject.h"<br />
class MyContObj : public ContainedObject {<br />
public:<br />
  MyContObj() : ContainedObject() { }<br />
  ~MyContObj() { }<br />
  static const CLID& classID();<br />
  virtual const CLID& clID() const;<br />
  void set(float t, int i) { m_time = t; m_channelID = i; }<br />
  float time() const { return m_time; }<br />
  int id() const { return m_channelID; }<br />
private:<br />
  float m_time;<br />
  int m_channelID;<br />
};<br />
<br />
// MyContObj.cxx<br />
#include "MyContObj.h"<br />
static const CLID CLID_MYCONTOBJ = 214215;<br />
const CLID& MyContObj::classID() { return CLID_MYCONTOBJ; }<br />
const CLID& MyContObj::clID() const { return CLID_MYCONTOBJ; }<br />
8.5 Recording a DataObject<br />
A DataObject may be recorded to the transient store using the StoreGateSvc. Each Algorithm has<br />
access to storeGateSvc() and can therefore invoke the record method. Once recorded, the TDS takes<br />
ownership of the DataObject; hence clients must not delete it.<br />
All recorded DataObjects are organized according to their type. A key can optionally be associated with a<br />
DataObject when it is recorded. At present it is the client's responsibility to explicitly lock (set<br />
constant) the DataObjects which should no longer be modified.<br />
8.5.1 Recording DataObjects without keys<br />
In the execute method of WriteData, create a DataObject of type MyDataObj, and set its member data:<br />
Listing 8.4 Fragment 1 of WriteData Algorithm<br />
MyDataObj* dobj = new MyDataObj();<br />
dobj->val(10);<br />
Record it in the TDS as shown in Listing 8.5. (Note that the pointer returned by storeGateSvc() is<br />
available to you if you are an “Algorithm”.)<br />
Listing 8.5 Fragment 2 of WriteData Algorithm<br />
StatusCode sc = storeGateSvc()->record(dobj);<br />
if (sc.isFailure())<br />
{<br />
  log << MSG::ERROR << "could not record MyDataObj" << endreq;<br />
  return sc;<br />
}<br />
8.5.2 Recording DataObjects with keys<br />
Record a DataObject using a string key as shown in Listing 8.7:<br />
Listing 8.7 Record a DataObject with a string key<br />
std::string keystring = "MyDataObjName";<br />
StatusCode sc = storeGateSvc()->record(dobj, keystring);<br />
if (sc.isFailure())<br />
{<br />
  log << MSG::ERROR << "could not record MyDataObj with key " << keystring << endreq;<br />
  return sc;<br />
}<br />
• By providing a key, in which case there will be a unique match. Note that keyed access is<br />
possible only if you recorded the DataObject with a key. Since two DataObjects of the same<br />
type cannot have the same key, you are assured of a unique DataObject (provided, of course, it<br />
exists).<br />
• You can ask for a list of DataObjects of a given type. You will be returned a begin and an end<br />
iterator over all the DataObjects of that type (whether or not they were recorded with a key).<br />
For each of these three modes you have the option to require const access to the retrieved objects or<br />
non-const access. In the latter case, if the DataObject has already been locked, as will normally be the<br />
case, the retrieve operation will not return the locked object (and will fail if no unlocked matching<br />
objects are available).<br />
8.6.1 Retrieving DataObjects without a key<br />
In the execute method of ReadData, get the last MyDataObj recorded in the TDS (recall that we<br />
recorded three MyDataObj in TDS, one with a key). The example shown in Listing 8.9 will retrieve the<br />
last of the three MyDataObj from the TDS:<br />
Listing 8.9 Retrieve a DataObject without a key<br />
const DataHandle<MyDataObj> dhandle;<br />
StatusCode sc = storeGateSvc()->retrieve(dhandle);<br />
if (sc.isFailure())<br />
{<br />
  log << MSG::ERROR << "could not retrieve MyDataObj" << endreq;<br />
  return sc;<br />
}<br />
Typically MyDataObj will be a collection containing several objects. You may iterate over these<br />
Contained Objects using the code shown in Listing 8.10:<br />
Listing 8.10 Iterate over Contained Objects<br />
MyDataObj::iterator first = dhandle->begin();<br />
MyDataObj::iterator last = dhandle->end();<br />
for (; first != last; ++first)<br />
{<br />
(*first)->use_me();<br />
}<br />
where (*first) is a pointer to a Contained Object, use_me() is a method of that Contained Object,<br />
and ++first advances the iterator over the Contained Objects.<br />
8.6.2 Retrieving a keyed DataObject<br />
In the WriteData example, three MyDataObj were recorded in the TDS, but only one with a key. The<br />
key was a string key containing “MyDataObjName”. Here we will show how to retrieve that<br />
DataObject. Note that since no other MyDataObj can be registered with a key “MyDataObjName”, you<br />
will retrieve at most one object. Note further that this is not necessarily the last DataObject of this type<br />
recorded in the TDS.<br />
Listing 8.11 Retrieve a keyed DataObject<br />
std::string keystring = "MyDataObjName";<br />
DataHandle<MyDataObj> dhandle; // non-const access required. May fail!!!<br />
StatusCode sc = storeGateSvc()->retrieve(dhandle, keystring);<br />
if (sc.isFailure())<br />
{<br />
  log << MSG::ERROR << "could not retrieve MyDataObj with key " << keystring << endreq;<br />
  return sc;<br />
}<br />
8.6.3 Retrieving all DataObjects of a given type<br />
Recall that three MyDataObj were recorded in the example WriteData.cxx, one with a key. We<br />
can retrieve all three of these DataObjects as shown in Listing 8.12:<br />
Listing 8.12 Retrieve all DataObjects of a given type<br />
const DataHandle<MyDataObj> dbegin;<br />
const DataHandle<MyDataObj> dend;<br />
StatusCode sc = storeGateSvc()->retrieve(dbegin, dend);<br />
if (sc.isFailure())<br />
{<br />
  log << MSG::ERROR << "could not retrieve MyDataObj objects" << endreq;<br />
  return sc;<br />
}<br />
Listing 8.13 Iterate over the retrieved DataObjects<br />
for (; dbegin != dend; ++dbegin)<br />
{<br />
  dbegin->invoke_method_in_DataObject();<br />
}<br />
Note again that dbegin always points to a DataObject, never a ContainedObject. To iterate over the<br />
ContainedObjects, do as before (this piece of code is obviously within the loop over DataObject):<br />
Listing 8.14 Iterate over Contained Objects<br />
MyDataObj::iterator first = dbegin->begin();<br />
MyDataObj::iterator last = dbegin->end();<br />
for (; first != last; ++first)<br />
{<br />
(*first)->invoke_method_in_ContObj();<br />
}<br />
8.7 Store Access Policy<br />
The current release (Release 1.3.2) has some of the tools to implement a flexible access policy for<br />
objects stored in the TDS. The TDS owns the DataObjects that have been recorded into it; a<br />
DataObject in the store may not be deleted by client code. Client code is allowed to modify objects<br />
in the store, but only until the DataObject is declared “complete” and “set constant” (locked) by its<br />
creator.<br />
Locking an object prevents the clients downstream of the creator from modifying the reconstructed<br />
objects by mistake. For example, a tracking algorithm that retrieves a hit collection from the store does not own<br />
those hits and therefore may not modify them. Remember that these hits can be used later by another<br />
tracking algorithm for which your modifications may be meaningless or even plain wrong. We do not<br />
want the results of the second tracking algorithm to change depending on whether it runs before<br />
or after the first! If we want reproducible results from several million lines of reconstruction<br />
code, it is vital to preserve the state of every DataObject which has been reconstructed and “published”<br />
in the TDS to be used by others. However, well-controlled modifications are desirable. The access<br />
policy allows multiple algorithms to collaborate in reconstructing a DataObject. As an example,<br />
consider a Calorimeter Clustering package that makes cluster objects and subsequently may make<br />
corrections, based on the Calorimeter information alone, that are independent of any downstream<br />
analysis. Since cluster finding and corrections are more or less independent, it may be desirable to have<br />
the two decoupled: a cluster-finding sub-algorithm and a correction sub-algorithm executed in some<br />
Cluster-Maker Algorithm environment. In such a scenario:<br />
• The cluster-finding sub-algorithm will create the cluster object and record it in the TDS.<br />
• The correction sub-algorithm will retrieve the cluster object from the TDS and perform some<br />
correction (hence modifying the object in the TDS).<br />
• The main Cluster-Maker algorithm that coordinates the various facets of the calorimeter<br />
clustering will lock the object before returning control to the framework.<br />
• A downstream electron-finding algorithm can use these cluster objects in a readonly mode and<br />
create a new electron object. Additional corrections may be determined and included in the<br />
calculation of quantities of the final electron object, but the data of the original cluster object<br />
cannot be changed. These additional corrections, if necessary, must be preserved elsewhere.<br />
• Note that it may be necessary for downstream algorithms to redo the clustering based on new<br />
information that has become available only at a later stage. This is indeed possible: the algorithm may do<br />
so by creating a new cluster collection in the TDS. Since cluster collections in the TDS can be<br />
keyed (and later can also be associated with a history object), these are uniquely identifiable<br />
objects in the TDS.<br />
In general, the proposed strategy for algorithms creating DataObjects to be<br />
recorded in the TDS is shown in Listing 8.15:<br />
Listing 8.15 Example of store access strategy<br />
MyMainAlgorithm::execute() {<br />
  // Create a New Data Object:<br />
  MyDataObj* dobj = new MyDataObj();<br />
  // Record it in the Transient Data Store:<br />
  StatusCode sc = storeGateSvc()->record(dobj);<br />
  dobj->modify();              // OK, can modify data object<br />
  sub_Algorithm_A->execute();  // OK, sub Algorithms may<br />
  sub_Algorithm_B->execute();  //   also modify "MyDataObj"<br />
  dobj->modify_more();         // OK, more changes if necessary<br />
  dobj->setLocked();           // Lock Data Object<br />
  dobj->modify_after_lock();   // WRONG, NOT ALLOWED HERE.<br />
}<br />
Note that in addition to the main algorithm that created the Data Object, sub_Algorithm_A and<br />
sub_Algorithm_B can retrieve “MyDataObj” from the TDS and modify it, as it has not yet been<br />
locked. Once the DataObject is locked, it cannot be modified by any algorithm. We therefore expect:<br />
• A specific package is responsible for the creation of DataObject(s). For example, a track<br />
fitting package makes use of Hit Collections and creates a Tracking Collection in the TDS. A<br />
Calorimeter Clustering package creates Cluster Collections in the TDS. An electron-id package<br />
creates an electron collection containing electron candidates in the TDS.<br />
• Each package may choose to break its algorithms into finer sub-algorithms, with several of these<br />
sub-algorithms jointly responsible for the final output. The main algorithm will therefore<br />
create the collection and must lock it before it returns control to the framework. All<br />
modifications to the Data Object are done either within the body of the main algorithm or by<br />
sub-algorithms executed before the lock has been established.<br />
• If the client chooses to use a Collection created by a different package (such as the Track<br />
Fitting package using a Hit Collection made by a TrackHit package), it must access it in a<br />
readonly mode.<br />
• To allow downstream algorithms to access your data objects in “readonly” mode, the<br />
dataobjects must supply “const” accessor methods as discussed in Section 8.4.<br />
• An error condition will occur if the client attempts to retrieve a locked DataObject in<br />
non-const mode.<br />
With the locking mechanism in place, we are now in a position to provide a default locking scheme<br />
that enforces an experiment-wide policy. Whatever this policy turns out to be (currently we are discussing<br />
whether the locking should happen at the end of an algorithm and/or sequence execution), it is good<br />
practice for developers to explicitly invoke the “setConst” method when they are finished<br />
working on a DataObject. This documents the author's intent, and it also insulates their code<br />
from any future change to the default access policy.<br />
Chapter 9<br />
Data dictionary<br />
9.1 Overview<br />
In contrast to the other chapters in this <strong>Guide</strong>, this one does not describe existing functionality,<br />
but rather gives a preview of new functionality that will be made available in a future release. It<br />
describes the role, scope, and implementation of a "Data Dictionary" in the <strong>Athena</strong> Architecture. It is a<br />
condensation of a snapshot of a separate working document.<br />
9.1.1 Definition of Terms<br />
The term data dictionary is being used in ATLAS as a catch phrase for several related, but distinct<br />
concepts and techniques. We categorize these concepts into three general categories:<br />
• Introspection/Reflection/Object Description/Run-Time Typing<br />
This refers to objects in program memory with the ability to describe themselves in a<br />
programmatic way through a public API such that they can be manipulated without a priori<br />
knowledge of the specific class/type of the object.<br />
• Code Generation<br />
This refers to a process of generating code for performing a specific task from a generic<br />
description/input file.<br />
• Self-Describing External Data Representation (e.g. Data Files)<br />
This refers to external data representations (e.g. file formats, on-wire data formats) which<br />
contain metadata describing the payload of the data file, etc.<br />
Each of these concepts plays a different set of roles in an architecture dependent upon a data dictionary.<br />
NB: Throughout this document we will use the word object without (necessarily)<br />
referring to an actual instance of a C++/Java/other class. A potentially better phrase<br />
might be programmatic entity or construct (which could denote components, structs,<br />
common blocks, streams, files, etc.). However, since it is almost certain that most, if<br />
not all, such entities in <strong>Athena</strong> will in fact correspond to real objects, we will not dwell<br />
on this distinction.<br />
NB: The term data dictionary connotes a certain technical implementation which we do<br />
not (necessarily) advocate. This term implies a central repository containing the<br />
information about those objects being described. This is one possible implementation<br />
of some of the concepts in this paper. However, it is also possible (perhaps even<br />
desirable for some purposes) that the information resides internal to the object (for<br />
example). In this Chapter we use the term Data Dictionary as a generic term denoting<br />
the broad concept and not as an indication of where the object description resides.<br />
9.1.2 Roles of a Data Dictionary (DD)<br />
The motivation for implementing a DD in <strong>Athena</strong> can be illustrated by listing the potential roles<br />
that a DD can play within the Architecture. These roles include:<br />
• Data Tools Integration<br />
Tools needed to perform many tasks within the overall system which are not specific to a<br />
particular data object type, including browsing and editing, visualization, simple or standard<br />
transformations, etc.<br />
• Interactive Data Queries<br />
The ability to interrogate a data object as to its shape and content from the interactive user<br />
interface (e.g. scripting language interface).<br />
• Automatic/Semiautomatic Persistency<br />
The ability to apply generic converters and/or to automatically generate specialized converters<br />
and/or to generate converter skeletons for subsequent, manual customization.<br />
• Schema Evolution (Version-Safe Persistency)<br />
Most data objects in the Event Data Model (EDM) typically change multiple times throughout<br />
their lifecycle within a HEP experiment. Support for evolution of the EDM schema is critical.<br />
• Multi-Language Support<br />
Components written in multiple programming languages (e.g. C++, Java, FORTRAN) will be<br />
used in the context of the <strong>Athena</strong> framework. These components need to exchange data in a<br />
language independent interchange representation.<br />
• Component Independence, Stability, & Robustness<br />
The ability to write code to a reflection API such that changes external to a component do not<br />
necessitate any code changes within the component leads naturally to code stability and,<br />
consequently, code robustness. Architectural attention to physical interdependencies of<br />
framework components also benefits from the use of such an API.<br />
9.1.3 Implementation Strategy<br />
Because the Data Dictionary can be used for a wide variety of purposes, a strategic choice must be<br />
made for the implementation plan. The choice can be stated as: Do we first implement only a single DD<br />
function (such as binding to a particular persistency service) as fully as possible? Or do we first<br />
implement multiple DD functions (e.g. multiple persistency services, multi-language support, binding<br />
to multiple general tools, etc) but with each one minimally functional?<br />
There are good arguments for both approaches. However, we believe that the first option is the most<br />
attractive for two reasons.<br />
1. The full range of possible functions of the Data Dictionary cannot be defined at the beginning<br />
of the development of the Data Dictionary.<br />
Though we have enumerated above some of the roles that the DD can play, most are general<br />
roles and not specific tasks. Implementing these functions one-by-one helps to ensure that we<br />
do not lock ourselves into an approach which is difficult to extend to other, unforeseen DD<br />
functions.<br />
2. Providing a single, fully functional (or almost fully functional) tool will encourage physicists<br />
to begin using the tool immediately.<br />
If we initially provide multiple functions which illustrate the eventual functionality of the DD,<br />
but which are not fleshed out enough to be immediately useful to end-users, we run the risk of<br />
discouraging such physicists from using the DD until it reaches a more mature stage of<br />
development. If, on the other hand, we provide a single DD function and make it sufficiently<br />
complete to actually improve end-users' productivity, we both demonstrate the utility of the<br />
DD and provide immediate help to users doing work today, thus encouraging immediate<br />
adoption of the DD by users.<br />
9.1.4 Data Dictionary Language<br />
One of the most visible implementation decisions of a DD for <strong>Athena</strong> is the choice of the computer<br />
language used in the dictionary. A very non-comprehensive list of choices includes:<br />
• Declarative C++ (i.e. C++ headers)<br />
• IDL - Interface Definition Language (http://www.omg.org/)<br />
• Declarative Java (http://java.sun.com/)<br />
• ODL - Object Definition Language (http://www.odmg.org/)<br />
• DDL - Data Definition Language (http://www.objectivity.com/)<br />
• ISL - Interface Specification Language (ftp://ftp.parc.xerox.com/pub/ilu/ilu.html)<br />
• XML - Extensible Markup Language (http://www.w3.org/)<br />
• Home grown solution<br />
Each of these choices has pros and cons. Although the choice of Data Dictionary Language (in this<br />
document we will use ADL generically for <strong>Athena</strong> Dictionary Language without implying a concrete<br />
choice.) is an important decision, it is arguably not a make-or-break decision. However, because of the<br />
extreme visibility of such a choice and the likelihood that any choice will draw criticism (warranted or<br />
not) from some quarter, the decision-making process must be very well documented and technically<br />
motivated.<br />
9.1.5 Code Generation<br />
Code generation tools are parser-based tools which process the ADL, construct an Abstract Syntax Tree<br />
(AST), and drive compiler backends (emitters). Often a code generation tool can be used to eliminate<br />
tedious and error-prone rote programming.<br />
With the choice of a real computer language as the basis of the Data Dictionary, it becomes imperative<br />
that a real parser be used to compile the DD language and realize the DD functionality. Experience has<br />
shown that multiple back-ends (emitters) for the parser are necessary (see Figure 9.1). The reality of a<br />
possible evolution of ADL suggests that the compiler front-end should be replaceable.<br />
Figure 9.1 ADL Parser with multiple back-ends<br />
9.1.6 Data Dictionary Design<br />
The Data Dictionary will be used at multiple ATLAS sites by a large number of people. Scalability,<br />
distributability, and ease of use of the DD are important design criteria.<br />
The ease of use of the Data Dictionary depends largely upon the target audience. For the typical<br />
physicist, the DD must be easy to use and must have clear benefits or it will not be used. For more<br />
sophisticated users (e.g. Core Programmers), the burden of learning a new system or language must<br />
be outweighed by the long-term benefits.<br />
One objection which should guide our thinking on ease of use is the sometimes-heard statement:<br />
"Physicists do not want to learn a new language." This argument, out of context, can be<br />
compelling. Physicists don't want to learn new, complicated languages to do the same thing they<br />
can do with an already familiar language. Some of this resistance can be seen in the slower than<br />
expected move to C++ in ATLAS.<br />
However, experience has shown that if the new language is sufficiently simple and/or intuitive, and<br />
the benefits are obvious, a new language will quickly become popular and widely used by those
physicists most active in software development. Once this happens, the barrier for subsequent<br />
physicists becomes quite low, as there are many experts to whom they can go for help and advice and<br />
many examples from which to learn.<br />
9.1.7 Time Line & Milestones<br />
The development and deployment plan for the <strong>Athena</strong> Data Dictionary is:<br />
• January 2001<br />
• Evaluation of ADL Candidates & Tools Complete<br />
• <strong>Athena</strong> Dictionary Language (ADL) Chosen<br />
• ADL Compiler Front End (CFE) Chosen<br />
• Subset of ADL Compiler Back Ends (CBEs) Defined<br />
• April 2001<br />
• Prototype DD Implementation<br />
• Standalone ADL CFE Functional<br />
• One ADL CBE Implemented & Associated Functionality Integrated in <strong>Athena</strong><br />
• <strong>Athena</strong> Dictionary Language (ADL) Frozen<br />
• September 2001<br />
• Data Dictionary Deployed<br />
• ADL CFE Integrated with ATLAS Release Tools<br />
• ADL Reflection API Available in <strong>Athena</strong><br />
• Multiple ADL CBEs Implemented & Available & Integrated in <strong>Athena</strong><br />
9.1.8 Bibliography<br />
http://www.javaworld.com/<br />
ftp://ftp.parc.xerox.com/pub/ilu/ilu.html<br />
http://iago.lbl.gov/dsl/<br />
http://www.aps.anl.gov/asd/oag/manuals/SDDStoolkit/SDDStoolkit.html<br />
http://www.swig.org/<br />
http://electra.lbl.gov/papers/CDFDBCodeGen.html<br />
http://www-cdf.fnal.gov/upgrades/computing/calibration_db/CodeGen.html<br />
http://electra.lbl.gov/talks/Cdf_SWDec98_codegen.ppt<br />
http://electra.lbl.gov/talks/Atlas_LBLSWMay99_codegen.ppt<br />
http://www.hep.net/chep95/html/abstract/abs_18.htm<br />
http://www.hep.net/chep95/html/slides/t78/index.htm<br />
http://www.hep.net/chep95/html/abstract/abs_78.htm<br />
http://www.ifh.de/CHEP97/paper/322.ps<br />
http://cmsdoc.cern.ch/cms/cristal/html/newcristal.html<br />
http://sources.redhat.com/sourcenav/<br />
Chapter 10<br />
Detector Description<br />
10.1 Overview<br />
This chapter is a placeholder for documenting how to access the detector description data in the <strong>Athena</strong><br />
transient detector data store. A detector description implementation based on XML exists in the LHCb<br />
extensions to Gaudi, but it is not distributed with the framework.<br />
The Gaudi architecture aims to shield the applications from the details of the persistent detector<br />
description and calibration databases. Ideally, the detector will be described in a logically unique<br />
detector description database (DDDB), containing data from many sources (e.g. editors and CAD tools<br />
for geometry data, calibration and alignment programs, detector control system for environmental data)<br />
as shown in Figure 10.1. The job of the Gaudi detector data service is to populate the transient detector<br />
data store with a snapshot of the detector description, which is valid for the event currently being<br />
analysed. Conversion services can be invoked to provide different transient representations of the same<br />
persistent data, appropriate to the specific application. For example, detector simulation, reconstruction<br />
and event display all require a geometry description of the detector, but with different levels of detail.<br />
In the Gaudi architecture it is possible to have a single, generic, persistent geometry description, from<br />
which a set of different representations can be extracted and made available to the data processing<br />
applications.<br />
The LHCb implementation of the detector description database describes the logical structure of the<br />
detector in terms of a hierarchy of detector elements, and the basic geometry in terms of volumes, solids<br />
and materials, and provides facilities for customizing the generic description to many specific detector<br />
needs. This should make it possible to develop detector-specific code which can provide geometry answers to<br />
questions from the physics algorithms. The persistent representation of the LHCb detector description<br />
is based on text files in XML format. An XML editor that understands the detector description<br />
semantics has been developed.<br />
Figure 10.1 Overview of the Detector Description model.<br />
Chapter 11<br />
Histogram facilities<br />
11.1 Overview<br />
The histogram data store is one of the data stores discussed in Chapter 2. Its purpose is to store<br />
statistics-based data and user-created objects that have a lifetime of more than a single event (e.g.<br />
histograms). As with the other data stores, all access to data is via a service interface, in this case<br />
the IHistogramSvc interface, which is derived from the IDataProviderSvc interface discussed in<br />
Chapter 7. The user asks the Histogram Service to book a histogram and register it in the histogram data<br />
store. The service returns a pointer to the histogram, which can then be used to fill and manipulate the<br />
histogram.<br />
The histograms themselves are booked and manipulated using four interfaces as defined by the AIDA<br />
(Abstract Interfaces for Data Analysis) project. These interfaces are documented on the AIDA web<br />
pages: http://wwwinfo.cern.ch/asd/lhc++/AIDA/. The <strong>Athena</strong> implementation uses the transient part of<br />
HTL (Histogram Template Library, http://wwwinfo.cern.ch/asd/lhc++/HTL/), provided by LHC++.<br />
The histogram data model is shown in Figure 11.1. The interface IHistogram is a base class which<br />
is used for management purposes; it is not a complete histogram interface and should not be used<br />
directly. Both IHistogram1D and IHistogram2D are derived from IHistogram and<br />
reference the IAxis interface. Users can book their 1D or 2D histograms in the histogram data<br />
store in the same type of tree structure as the event data. Concrete 1D and 2D histograms derive from<br />
DataObject in order to be storable.<br />
Figure 11.1 Histograms data model. (UML class diagram: IHistogram1D and IHistogram2D both derive<br />
from IHistogram; the concrete GenHisto1D and GenHisto2D classes derive from DataObject and<br />
reference the IAxis interface, with one axis for a 1D histogram and two for a 2D histogram. H1D and<br />
H1DVar are currently the only two implemented specializations of GenHisto1D; H2D is currently the<br />
only implemented specialization of GenHisto2D.)<br />
11.2 The Histogram service<br />
An instance of the histogram data service is created by the application manager. After the service has<br />
been initialised, the histogram data store will contain a root directory "/stat" in which users may<br />
book histograms and/or create sub-directories (for example, in the code fragment below, the histogram<br />
is stored in the sub-directory "/stat/simple"). A suggested naming convention for the<br />
sub-directories is given in Section 1.3.3.<br />
As discussed in Section 5.2, the Algorithm base class defines a member function<br />
IHistogramSvc* histoSvc()<br />
which returns a pointer to the IHistogramSvc interface of the standard histogram data service.<br />
Access to any other non-standard histogram data service (if one exists) must be sought via the<br />
ISvcLocator interface of the application manager as discussed in section 13.2.<br />
11.3 Using histograms and the histogram service<br />
An example code fragment illustrating how to book a 1D histogram and place it in a directory within<br />
the histogram data store, together with a simple statement which fills that histogram, is shown here:<br />
// Book 1D histogram in the histogram data store<br />
m_hTrackCount = histoSvc()-><br />
book( "/stat/simple", 1, "TrackCount", 100, 0., 3000. );<br />
SmartDataPtr&lt;MyTrackVector&gt; particles( eventSvc(), "/Event/MyTracks" );<br />
if ( 0 != particles ) {<br />
// Filling the track count histogram<br />
m_hTrackCount->fill( particles->size(), 1. );<br />
}<br />
The parameters of the book function are the directory in which to store the histogram in the data store,<br />
the histogram identifier, the histogram title, the number of bins and the lower and upper limits of the X<br />
axis. 1D histograms with fixed and variable binning are available. In the case of 2D histograms, the<br />
book method requires in addition the number of bins and lower and upper limits of the Y axis.<br />
If using HBOOK for persistency, the histogram identifier should be a valid HBOOK histogram<br />
identifier (number), must be unique and, in particular, must be different from any n-tuple number. Even<br />
if using another persistency solution (e.g. ROOT) it is recommended to comply with the HBOOK<br />
constraints in order to make the code independent of the persistency choice.<br />
The call to histoSvc()->book(...) returns a pointer to an object of type IHistogram1D (or<br />
IHistogram2D in the case of a 2D histogram). All the methods of this interface can be used to<br />
further manipulate the histogram, and in particular to fill it, as shown in the example. Note that this<br />
pointer is guaranteed to be non-null: the algorithm would have failed the initialisation step if the<br />
histogram data service could not be found. In contrast, the user variable particles may be null<br />
(if there are no tracks in the transient data store or in the persistent storage), and the fill<br />
statement would then fail - so the value of particles must be checked before using it.<br />
Algorithms that create histograms will in general keep pointers to those histograms, which they may<br />
use for filling operations. However, you may wish to share histograms between different<br />
algorithms: perhaps one algorithm is responsible for filling the histogram and another algorithm is<br />
responsible for fitting it at the end of the job. In this case it may be necessary to look for histograms<br />
within the store. The mechanism for doing this is identical to that for locating event data objects<br />
within the event data store, namely via the use of smart pointers, as discussed in section 7.8.<br />
SmartDataPtr&lt;IHistogram1D&gt; hist1D( histoSvc(), "/stat/simple/1" );<br />
if( 0 != hist1D ) {<br />
// Print the found histogram<br />
histoSvc()->print( hist1D );<br />
}<br />
11.4 Persistent storage of histograms<br />
By default, <strong>Athena</strong> does not produce a persistent histogram output. Options exist to write out<br />
histograms in either HBOOK or ROOT format.<br />
11.4.1 HBOOK persistency<br />
The HBOOK conversion service converts objects of types IHistogram1D and IHistogram2D<br />
into a form suitable for storage in a standard HBOOK file. In order to use it you first need to tell <strong>Athena</strong><br />
where to find the HbookCnv shared library. If you are using CMT, this is done by adding the following<br />
line to the CMT requirements file:<br />
use HbookCnv v9*<br />
You then have to tell the application manager to load this shared library and to create the HBOOK<br />
conversion service, by adding the following lines to your job options file:<br />
ApplicationMgr.DLLs += {"HbookCnv"};<br />
ApplicationMgr.HistogramPersistency = "HBOOK";<br />
Finally, you have to tell the histogram persistency service the name of the output file:<br />
HistogramPersistencySvc.OutputFile = "histo.hbook";<br />
11.4.2 ROOT persistency<br />
The ROOT conversion service converts objects of types IHistogram1D and IHistogram2D into<br />
a form suitable for storage in a standard ROOT file. In order to use it you first need to tell <strong>Athena</strong> where<br />
to find the RootHistCnv shared library. If you are using CMT, this is done by adding the following<br />
line to the CMT requirements file:<br />
use RootHistCnv v3*<br />
You then have to tell the application manager to load this shared library and to create the ROOT<br />
conversion service, by adding the following lines to your job options file:<br />
ApplicationMgr.DLLs += {"RootHistCnv"};<br />
ApplicationMgr.HistogramPersistency = "ROOT";<br />
Finally, you have to tell the histogram persistency service the name of the output file:<br />
HistogramPersistencySvc.OutputFile = "histo.rt";<br />
Chapter 12<br />
N-tuple and Event Collection facilities<br />
12.1 Overview<br />
In this chapter we describe facilities available in <strong>Athena</strong> to create and retrieve n-tuples. We discuss how<br />
Event Collections, which can be considered an extension of n-tuples, can be used to make preselections<br />
of event data. Finally, we explore some possible tools for the interactive analysis of n-tuples.<br />
12.2 N-tuples and the N-tuple Service<br />
User data - so-called n-tuples - are very similar to event data, although the scope may be different: a<br />
row of an n-tuple may correspond to a track, an event or a complete run. Nevertheless, user data must be<br />
accessible by interactive tools such as PAW or ROOT.<br />
<strong>Athena</strong> n-tuples allow structures to be formatted freely. Later, during the running phase of the program,<br />
data are accumulated and written to disk.<br />
The transient image of an n-tuple is stored in an <strong>Athena</strong> data store which is connected to the n-tuple<br />
service. Its purpose is to store user-created objects that have a lifetime of more than a single event.<br />
As with the other data stores, all access to data is via a service interface. In this case it is via the<br />
INTupleSvc interface, which extends the IDataProviderSvc interface. In addition, the n-tuple<br />
service provides methods for creating n-tuples, saving the current row of an n-tuple and<br />
retrieving n-tuples from a file. The n-tuples are derived from DataObject in order to be storable, and<br />
are stored in the same type of tree structure as the event data. This inheritance allows n-tuples to be<br />
loaded and located on the store with the same smart pointer mechanism as is available for event data<br />
items (c.f. Chapter 7).<br />
12.2.1 Access to the N-tuple Service from an Algorithm<br />
The Algorithm base class defines a member function<br />
INTupleSvc* ntupleSvc()<br />
which returns a pointer to the INTupleSvc interface.<br />
The n-tuple service provides methods for the creation and manipulation of n-tuples and the location of<br />
n-tuples within the persistent store.<br />
The top level directory of the n-tuple transient data store is called "/NTUPLES". The next directory<br />
layer corresponds to the different output streams: e.g. "/NTUPLES/FILE1", where FILE1 is the logical<br />
name of the requested output file for a given stream. There can be several output streams connected to<br />
the service. When using HBOOK for persistency, "FILE1" corresponds to the top level RZ directory of<br />
the file (the name given to HROPEN). From then on the tree structure is reflected with normal RZ<br />
directories. (Caveat: HBOOK only accepts directory names with fewer than 8 characters! It is<br />
recommended to keep directory names to fewer than 8 characters even when using another technology,<br />
e.g. ROOT, for persistency, to make the code independent of the persistency choice.)<br />
12.2.2 Using the N-tuple Service<br />
When defining an n-tuple the following steps must be performed:<br />
• The n-tuple tags must be defined.<br />
• The n-tuple must be booked and the tags must be declared to the n-tuple.<br />
• The n-tuple entries have to be filled.<br />
• The filled row of the n-tuple must be committed.<br />
• Persistent aspects are steered by the job options.<br />
In the following, the different steps are explained in turn. Please note that when using HBOOK<br />
for persistency, the n-tuple number must be unique and, in particular, that it must be different from any<br />
histogram number. This is a limitation imposed by HBOOK. It is recommended to keep this number<br />
unique even when using another technology (e.g. ROOT) for persistency, to make the code independent<br />
of the persistency choice.<br />
12.2.2.1 Defining N-tuple tags<br />
When creating an n-tuple it is necessary to first define the tags to be filled in the n-tuple. Typically the<br />
tags belong to the filling algorithm and hence should be provided in the Algorithm’s header file.<br />
Currently the following data types are supported: bool, long, float and double. double types<br />
(Fortran REAL*8) need special attention if using HBOOK for persistency: the n-tuple structure must be
defined in a way that aligns double types to 8 byte boundaries, otherwise HBOOK will complain. In<br />
addition PAW cannot understand double types. Listing 12.1 illustrates how to define n-tuple tags:<br />
Listing 12.1 Definition of n-tuple tags from the Ntuples.WriteAlg.h example header file.<br />
1: NTuple::Item&lt;long&gt; m_ntrk; // A scalar item (number)<br />
2: NTuple::Array&lt;bool&gt; m_flag; // Vector items<br />
3: NTuple::Array&lt;long&gt; m_index;<br />
4: NTuple::Array&lt;float&gt; m_px, m_py, m_pz;<br />
5: NTuple::Matrix&lt;long&gt; m_hits; // Two dimensional tag<br />
12.2.2.2 Booking and Declaring Tags to the N-tuple<br />
When booking the n-tuple, the previously defined tags must be declared to the n-tuple. Before booking,<br />
the proper output stream (file) must be accessed. The target directory is defined automatically.<br />
Listing 12.2 Creation of an n-tuple in a specified directory and file.<br />
1: // Access the output file<br />
2: NTupleFilePtr file1(ntupleSvc(), "/NTUPLES/FILE1");<br />
3: if ( file1 ) {<br />
4: // First: A column wise N tuple<br />
5: NTuplePtr nt(ntupleSvc(), "/NTUPLES/FILE1/MC/1");<br />
6: if ( !nt ) { // Check if already booked<br />
7: nt=ntupleSvc()->book("/NTUPLES/FILE1/MC",1,CLID_ColumnWiseTuple,<br />
"Hello World");<br />
8: if ( nt ) {<br />
9: // Add an index column<br />
10: status = nt->addItem ("Ntrack", m_ntrk, 0, 5000 );<br />
11: // Add a variable size column of type float (length=length of index col)<br />
12: status = nt->addItem ("px", m_ntrk, m_px);<br />
13: status = nt->addItem ("py", m_ntrk, m_py);<br />
14: status = nt->addItem ("pz", m_ntrk, m_pz);<br />
15: // Another one, but this time of type bool<br />
16: status = nt->addItem ("flg",m_ntrk, m_flag);<br />
17: // Another one, type integer, numerical numbers must be within [0, 5000]<br />
18: status = nt->addItem ("idx",m_ntrk, m_index, 0, 5000 );<br />
19: // Add 2-dim column: [0:m_ntrk][0:2]; numerical numbers within [0, 8]<br />
20: status = nt->addItem ("hit",m_ntrk, m_hits, 2, 0, 8 );<br />
21: }<br />
22: else { // did not manage to book the N tuple....<br />
23: return StatusCode::FAILURE;<br />
24: }<br />
25: }<br />
Tags which are not declared to the n-tuple are invalid and will cause an access violation at run-time.<br />
12.2.2.3 Filling the N-tuple<br />
Tags are usable just like normal data items, where<br />
• Items are the equivalent of single numbers: bool, long, float.<br />
• Arrays are equivalent to one-dimensional arrays: bool[size], long[size],<br />
float[size].<br />
• Matrix items are equivalent to arrays of arrays, i.e. matrices: bool[dim1][dim2].<br />
Implicit bounds checking is not possible without a rather big run-time overhead. Hence it is up<br />
to the user to ensure that the arrays do not overflow.<br />
When all entries are filled, the row must be committed, i.e. the record of the n-tuple must be written.<br />
Listing 12.3 Filling an n-tuple.<br />
1: m_ntrk = 0;<br />
2: for( MyTrackVector::iterator i = mytracks->begin(); i !=<br />
mytracks->end(); i++ ) {<br />
3: const HepLorentzVector& mom4 = (*i)->fourMomentum();<br />
4: m_px[m_ntrk] = mom4.px();<br />
5: m_py[m_ntrk] = mom4.py();<br />
6: m_pz[m_ntrk] = mom4.pz();<br />
7: m_index[m_ntrk] = cnt;<br />
8: m_flag[m_ntrk] = (m_ntrk%2 == 0) ? true : false;<br />
9: m_hits[m_ntrk][0] = 0;<br />
10: m_hits[m_ntrk][1] = 1;<br />
11: m_ntrk++;<br />
12: // Make sure the array(s) do not overflow.<br />
13: if ( m_ntrk > m_ntrk->range().distance() ) break;<br />
14: }<br />
15: // Commit N tuple row.<br />
16: status = ntupleSvc()->writeRecord("/NTUPLES/FILE1/MC/1");<br />
17: if ( !status.isSuccess() ) {<br />
18: log &lt;&lt; MSG::ERROR &lt;&lt; "Cannot write N-tuple record" &lt;&lt; endreq;<br />
19: }<br />
Listing 12.4 Reading an n-tuple.<br />
1: NTuplePtr nt(ntupleSvc(), "/NTUPLES/FILE1/ROW_WISE/2");<br />
2: if ( nt ) {<br />
3: long count = 0;<br />
4: NTuple::Item&lt;float&gt; px, py, pz;<br />
5: status = nt->item("px", px);<br />
6: status = nt->item("py", py);<br />
7: status = nt->item("pz", pz);<br />
8: nt->attachSelector(new SelectStatement("pz>0 And px>0"));<br />
9: // Access the N tuple row by row and print the first 10 tracks<br />
10: while ( ntupleSvc()->readRecord(nt.ptr()).isSuccess() ) {<br />
11: log &lt;&lt; MSG::INFO &lt;&lt; "px=" &lt;&lt; px &lt;&lt; " py=" &lt;&lt; py &lt;&lt; " pz=" &lt;&lt; pz &lt;&lt; endreq;<br />
12: if ( ++count == 10 ) break;<br />
13: }<br />
14: }<br />
• MS Access: Write as a Microsoft Access database 1.<br />
There is also weak support for the following database types 1: SQL Server, MySQL and<br />
Oracle ODBC. These database technologies are supported through their ODBC interface.<br />
They were tested privately on local installations; all of these types need special setup<br />
to grant access to the database.<br />
Except for the HBOOK data format, you need to specify the technology-specific<br />
persistency package (i.e. GaudiRootDb) in your CMT requirements file<br />
and to load explicitly in the job options the DLLs containing the generic (GaudiDb)<br />
and technology-specific (GaudiRootDb) implementations of the database access<br />
drivers:<br />
ApplicationMgr.DLLs += { "GaudiDb", "GaudiRootDb" };<br />
• OPT=''<br />
• NEW, CREATE, WRITE: Create a new data file. Not all implementations allow<br />
existing files to be over-written.<br />
• OLD, READ: Access an existing file for read purposes.<br />
• UPDATE: Open an existing file and add records. It is not possible to update already<br />
existing records.<br />
• SVC='' (optional)<br />
Connect this file directly to an existing conversion service. This option needs special<br />
care and should only be used to replace default services.<br />
• AUTHENTICATION='' (optional)<br />
Protected datafiles (e.g. Microsoft Access databases) may be password<br />
protected. In this case the authentication string allows the connection to these databases;<br />
it is the string that must be passed to ODBC, for example:<br />
AUTH='SERVER=server_host;UID=user_name;PWD=my_password;'<br />
• All other options are passed, without any interpretation, directly to the conversion service<br />
responsible for handling the specified output file.<br />
For all options only the first three leading characters are significant: DATAFILE=, DATABASE=<br />
or simply DATA= would lead to the same result.<br />
The handling of row wise n-tuples does not differ. However, only individual items (class<br />
NTuple::Item) can be filled - no arrays and no matrices. Since the persistent representation of<br />
row wise n-tuples in HBOOK is done by floats only, the first row of each row wise n-tuple contains<br />
the type information - when looking at a row wise n-tuple with PAW, make sure to start at the second<br />
event!<br />
1. The implementation for MS Access and other ODBC compliant databases is available in the LHCb extensions to Gaudi.<br />
It is not distributed with <strong>Athena</strong>.<br />
12.3 Event Collections<br />
Event collections or, to be more precise, event tag collections, are used to minimize data access by<br />
performing preselections based on small amounts of data. Event tag data contain flexible event<br />
classification information according to the physics needs. This information could either be stored as<br />
flags indicating that the particular event has passed some preselection criteria, or as a small set of<br />
parameters which describe basic attributes of the event. Fast access is required for this type of event<br />
data.<br />
Event tag collections can exist in several versions:<br />
• Collections recorded during event processing stages from the online, reconstruction,<br />
reprocessing etc.<br />
• Event collections defined by analysis groups with pre-computed items of special interest to a<br />
given group.<br />
• Private user defined event collections.<br />
Starting from this definition, an event tag collection can be interpreted as an n-tuple which allows<br />
access to the data used to create it. Using this approach, any n-tuple which allows access to the<br />
data is an event collection.<br />
Event collections allow pre-selections of event data. These pre-selections depend on the underlying<br />
storage technology.<br />
The first stage pre-selection is based on scalar components of the event collection. It is not<br />
necessarily executed on your computer but possibly on a database server, e.g. when using ORACLE. Only<br />
the accessed columns are read from the event collection. If the criteria are fulfilled, the n-tuple data are<br />
returned to the user process. Preselection criteria are set through the job options, as described in section<br />
12.3.2.<br />
The second stage pre-selection is applied to all items which passed the first stage pre-selection<br />
criteria. For this pre-selection, which is performed on the client computer, all data in the n-tuple can be<br />
used. This further preselection is implemented in a user-defined function object (functor) as described in<br />
section 12.3.2. <strong>Athena</strong> algorithms are called only when this pre-selector also accepts the event, after<br />
which normal event processing can start.<br />
12.3.1 Writing Event Collections<br />
Event collections are written to the data file using an <strong>Athena</strong> sequencer. A sequencer calls a series of<br />
algorithms, as discussed in section 5.2. The execution of these algorithms may terminate at any point of<br />
the series (in which case the event is not selected for the collection) if one of the algorithms in the<br />
sequence fails to pass a filter.<br />
12.3.1.1 Defining the Address Tag<br />
The event data is accessed using a special n-tuple tag of the type<br />
NTuple::Item&lt;IOpaqueAddress*&gt; m_evtAddress;<br />
It is defined in the algorithm's header file in addition to any other ordinary n-tuple tags, as described in<br />
section 12.2.2.1. When booking the n-tuple, the address tag must be declared like any other tag, as<br />
shown in Listing 12.1. It is recommended to use the name "Address" for this tag.<br />
Listing 12.1 Connecting an address tag to an n-tuple.<br />
1: NTuplePtr nt(ntupleSvc(), "/NTUPLES/EvtColl/Collection");<br />
2: ... Book N-tuple<br />
3: // Add an event address column<br />
4: StatusCode status = nt->addItem ("Address", m_evtAddress);<br />
The usage of this tag is identical to any other tag, except that it only accepts variables of type<br />
IOpaqueAddress* - the information necessary to retrieve the event data.<br />
12.3.1.2 Filling the Event Collection<br />
At fill time the address of the event must be supplied to the Address item. Otherwise the n-tuple may<br />
be written, but the information to retrieve the corresponding event data later will be lost. Listing 12.2<br />
also demonstrates the setting of a filter to steer whether the event is written out to the event collection.<br />
Listing 12.2 Fill the address tag of an n-tuple at execution time:<br />
1: SmartDataPtr&lt;Event&gt; evt(eventSvc(),"/Event");<br />
2: if ( evt ) {<br />
3: ... Some data analysis deciding whether to keep the event or not<br />
4: // keep_event=true if event should be written to event collection<br />
5: setFilterPassed( keep_event );<br />
6: m_evtAddress = evt->address();<br />
7: }<br />
12.3.1.3 Writing out the Event Collection<br />
The event collection is written out by an EvtCollectionStream, which is the last member of the<br />
event collection Sequencer. Listing 12.3, taken from the job options of the EvtCollection<br />
example, shows how to set up such a sequence, consisting of a user-written Selector algorithm<br />
(which could, for example, contain the code in Listing 12.2) and of the EvtCollectionStream.<br />
Listing 12.3 Job options for writing out an event collection<br />
1: ApplicationMgr.OutStream = { "Sequencer/EvtCollection" };<br />
2:<br />
3: EvtCollection.Members = { "EvtCollectionWrite/Selector",<br />
"EvtCollectionStream/Writer"};<br />
4: Writer.ItemList = { "/NTUPLES/EvtColl/Collection" };<br />
5: NTupleSvc.Output = { "EvtColl DATAFILE='MyEvtCollection.root'<br />
OPT='NEW' TYP='ROOT'" };<br />
12.3.2 Reading Events using Event Collections<br />
Reading event collections as the input for further event processing in <strong>Athena</strong> is transparent. The main<br />
change is the specification of the input data to the event selector:<br />
Listing 12.4 Job options for reading events via an event collection.<br />
1: EventSelector.Input = {<br />
2: "COLLECTION='Collection' ADDRESS='Address'<br />
DATAFILE='MyEvtCollection.root' TYP='ROOT' SEL='(Ntrack>80)'<br />
FUN='EvtCollectionSelector'"<br />
3: };<br />
These tags need some explanation:<br />
• COLLECTION<br />
Specifies the sub-path of the n-tuple used to write the collection. If the n-tuple which was<br />
written was called e.g. "/NTUPLES/FILE1/Collection", the value of this tag must be<br />
"Collection".<br />
• ADDRESS (optional)<br />
Specifies the name of the n-tuple tag which was used to store the opaque address needed to<br />
retrieve the event data later. This is an optional tag; the default value is "Address". Please<br />
use this default value when writing - conventions are useful!<br />
• SEL (optional):<br />
Specifies the selection string used for the first stage pre-selection. The syntax depends on the<br />
database implementation; it can be:<br />
• SQL like, if the event collection was written using ODBC.<br />
Example: (NTrack>200 AND Energy>200)<br />
• C++ like, if the event collection was written using ROOT.<br />
Example: (NTrack>200 && Energy>200).<br />
Note that event collections written with ROOT also accept the SQL operators 'AND'<br />
instead of '&&' as well as 'OR' instead of '||'. Other SQL operators are not<br />
supported.<br />
• FUN (optional)<br />
Specifies the name of a function object used for the second-stage preselection. An example of<br />
such a function object is shown in Listing 12.5. Note that the factory declaration on line 16<br />
is mandatory in order to allow <strong>Athena</strong> to instantiate the function object.<br />
• The DATAFILE and TYP tags, as well as additional optional tags, have the same meaning<br />
and syntax as for n-tuples, as described in section 12.2.3.1.<br />
Listing 12.5 Example of a function object for second stage pre-selections.<br />
1: class EvtCollectionSelector : public NTuple::Selector {<br />
2: NTuple::Item&lt;long&gt; m_ntrack;<br />
3: public:<br />
4: EvtCollectionSelector(IInterface* svc) : NTuple::Selector(svc) { }<br />
5: virtual ~EvtCollectionSelector() { }<br />
6: /// Initialization<br />
7: virtual StatusCode initialize(NTuple::Tuple* nt) {<br />
8: return nt->item("Ntrack", m_ntrack);<br />
9: }<br />
10: /// Specialized callback for NTuples<br />
11: virtual bool operator()(NTuple::Tuple* nt) {<br />
12: return m_ntrack>cut;<br />
13: }<br />
14: };<br />
15:<br />
16: ObjectFactory&lt;EvtCollectionSelector&gt; EvtCollectionSelectorFactory;<br />
12.4 Interactive Analysis using N-tuples<br />
n-tuples are of special interest to the end-user, because they can be accessed using commonly known<br />
tools such as PAW, ROOT or Java Analysis Studio (JAS). In the past it has not been a particular<br />
strength of HEP software to plug into many possible persistent data representations: except for JAS,<br />
these tools understand only their own proprietary data formats. For this reason the choice of the<br />
output format depends on the preferred analysis tool/viewer. In the following an overview of the<br />
possible data formats is given.<br />
In the examples below the output of the GaudiExample/NTuple.write program was used.<br />
12.4.1 HBOOK<br />
This data format is used by PAW, which understands this and only this data format. Files of this type<br />
can be converted to the ROOT format using the h2root data conversion program. The use of PAW in<br />
the long term is deprecated.<br />
12.4.2 ROOT<br />
This data format is used by the interactive ROOT program. Using the helper library TBlob, located in<br />
the package GaudiRootDb, it is possible to analyse interactively the n-tuples written in ROOT format.<br />
However, access is only possible to scalar items (int, float, ...), not to arrays.<br />
Analysis is possible through directly plotting variables:<br />
root [1] gSystem->Load("D:/mycmt/GaudiRootDb/v3/Win32Debug/TBlob");<br />
root [2] TFile* f = new TFile("tuple.root");<br />
root [3] TTree* t = (TTree*)f->Get("_MC_ROW_WISE_2");<br />
root [4] t->Draw("pz");<br />
or using a ROOT macro interpreted by ROOT's C/C++ interpreter (see for example the code fragment<br />
interactive.C shown in Listing 12.6):<br />
root [0] gSystem->Load("D:/mycmt/GaudiRootDb/v3/Win32Debug/TBlob");<br />
root [1] .L ./v8/NTuples/interactive.C<br />
root [2] interactive("./v8/NTuples/tuple.root");<br />
More detailed explanations can be found in the ROOT tutorials (http://root.cern.ch).<br />
Listing 12.6 Interactive analysis of ROOT n-tuples: interactive.C<br />
1: void interactive(const char* fname) {<br />
2: TFile *input = new TFile(fname);<br />
3: TTree *tree = (TTree*)input->Get("_MC_ROW_WISE_2");<br />
4: if ( 0 == tree ) {<br />
5: printf("Cannot find the requested tree in the root file!\n");<br />
6: return;<br />
7: }<br />
8: Int_t ID, OBJSIZE, NUMLINK, NUMSYMB;<br />
9: TBlob *BUFFER = 0;<br />
10: Float_t px, py, pz;<br />
11: tree->SetBranchAddress("ID",&ID);<br />
12: tree->SetBranchAddress("OBJSIZE",&OBJSIZE);<br />
13: tree->SetBranchAddress("NUMLINK",&NUMLINK);<br />
14: tree->SetBranchAddress("NUMSYMB",&NUMSYMB);<br />
15: tree->SetBranchAddress("BUFFER", &BUFFER);<br />
16: tree->SetBranchAddress("px",&px);<br />
17: tree->SetBranchAddress("py",&py);<br />
18: tree->SetBranchAddress("pz",&pz);<br />
19: Int_t nbytes = 0;<br />
20: for (Int_t i = 0, nentries = tree->GetEntries(); i &lt; nentries; i++) {<br />
21: nbytes += tree->GetEntry(i);<br />
22: printf("Trk#=%d PX=%f PY=%f PZ=%f\n",i,px,py,pz);<br />
23: }<br />
24: printf("I have read a total of %d Bytes.\n", nbytes);<br />
25: delete input;<br />
26: }<br />
Chapter 13<br />
Framework services<br />
13.1 Overview<br />
Services are generally sizeable components that are set up and initialized once at the beginning of the<br />
job by the framework and used by many algorithms as often as they are needed. In general it is not<br />
desirable to have more than one instance of each service. Services should not have a "state", because<br />
they have many potential users and it would therefore not be possible to guarantee that the state is<br />
preserved between calls.<br />
In this chapter we describe how services are created and accessed, and then give an overview of the<br />
various services, other than the data access services, which are available for use within the <strong>Athena</strong><br />
framework. The Job Options service, the Message service, the Particle Properties service, the Chrono<br />
& Stat service, the Auditor service, the Random Numbers service and the Incident service are available<br />
in this release. The Tools service is described in Chapter 14.<br />
We also describe how to implement new services for use within the <strong>Athena</strong> environment. We look at<br />
how to code a service, what facilities the Service base class provides and how a service is managed<br />
by the application manager.<br />
13.2 Requesting and accessing services<br />
The Application Manager creates a certain number of services by default. These are the default data<br />
access services (EventDataSvc, DetectorDataSvc, HistogramDataSvc, NTupleSvc),<br />
the default data persistency services (EventPersistencySvc, DetectorPersistencySvc,<br />
HistogramPersistencySvc) and the framework services described in this chapter and in<br />
Chapter 14 (JobOptionsSvc, MessageSvc, ParticlePropertySvc, ChronoStatSvc,<br />
AuditorSvc, RndmGenSvc, IncidentSvc, ToolSvc).<br />
Additional services can be requested via the job options file, using the property<br />
ApplicationMgr.ExtSvc. In the example below this option is used to create a specific type of<br />
persistency service:<br />
Listing 13.1 Job Option to create additional services<br />
ApplicationMgr.ExtSvc += { "DbEventCnvSvc/RootEvtCnvSvc" };<br />
Once created, services must be accessed via their interface. The Algorithm base class provides a<br />
number of accessor methods for the standard framework services, listed on lines 25 to 35 of Listing 5.1<br />
on page 24. Other services can be located using the templated service function. In the example<br />
below we use this function to return the IParticlePropertySvc interface of the Particle<br />
Properties Service:<br />
Listing 13.2 Code to access the IParticlePropertySvc interface from an Algorithm<br />
#include "GaudiKernel/IParticlePropertySvc.h"<br />
...<br />
IParticlePropertySvc* m_ppSvc;<br />
StatusCode sc = service( "ParticlePropertySvc", m_ppSvc );<br />
if ( sc.isFailure() ) {<br />
...<br />
In components other than Algorithms, which do not provide the service function, you can locate a<br />
service using the serviceLocator function:<br />
Listing 13.3 Code to access the IParticlePropertySvc interface via the serviceLocator function<br />
#include "GaudiKernel/IParticlePropertySvc.h"<br />
...<br />
IParticlePropertySvc* m_ppSvc;<br />
StatusCode sc = serviceLocator()-&gt;getService(<br />
"ParticlePropertySvc",<br />
IID_IParticlePropertySvc,<br />
reinterpret_cast&lt;IInterface*&amp;&gt;( m_ppSvc ) );<br />
if ( sc.isFailure() ) {<br />
...<br />
13.3 The Job Options Service<br />
The Job Options Service is a mechanism which allows an application to be configured at run time, without<br />
the need to recompile or relink. The options, or properties, are set via a job options file, which is read in<br />
when the Job Options Service is initialised by the Application Manager. In what follows we describe<br />
the format of the job options file, including some examples.<br />
13.3.1 Algorithm, Tool and Service Properties<br />
In general a concrete Algorithm, Tool or Service will have several data members which are used to<br />
control execution. These data members can be of a basic data type (int, float, etc.) or of a class<br />
(Property) encapsulating some common behaviour and a higher level of functionality.<br />
13.3.1.1 SimpleProperties<br />
Simple properties are a set of classes that act as properties directly in their associated Algorithm, Tool<br />
or Service, replacing the corresponding basic data type instance. The primary motivation for this is to<br />
allow optional bounds checking to be applied, and to ensure that the Algorithm, Tool or Service itself<br />
doesn't violate those bounds. Available SimpleProperties are:<br />
• int ==&gt; IntegerProperty or SimpleProperty&lt;int&gt;<br />
• double ==&gt; DoubleProperty or SimpleProperty&lt;double&gt;<br />
• bool ==&gt; BooleanProperty or SimpleProperty&lt;bool&gt;<br />
• std::string ==&gt; StringProperty or SimpleProperty&lt;std::string&gt;<br />
and the equivalent vector classes<br />
• std::vector&lt;int&gt; ==&gt; IntegerArrayProperty or<br />
SimpleProperty&lt; std::vector&lt;int&gt; &gt;<br />
• etc.<br />
The use of these classes is illustrated by the EventCounter class.<br />
Listing 13.4 EventCounter.h<br />
1: #include "GaudiKernel/Algorithm.h"<br />
2: #include "GaudiKernel/Property.h"<br />
3: class EventCounter : public Algorithm {<br />
4: public:<br />
5: EventCounter( const std::string& name, ISvcLocator* pSvcLocator );<br />
6: ~EventCounter( );<br />
7: StatusCode initialize();<br />
8: StatusCode execute();<br />
9: StatusCode finalize();<br />
10: private:<br />
11: IntegerProperty m_frequency;<br />
12: int m_skip;<br />
13: int m_total;<br />
14: };<br />
Listing 13.5 EventCounter.cpp<br />
1: #include "GaudiAlg/EventCounter.h"<br />
2:<br />
3: #include "GaudiKernel/MsgStream.h"<br />
4: #include "GaudiKernel/AlgFactory.h"<br />
5:<br />
6: static const AlgFactory&lt;EventCounter&gt; Factory;<br />
7: const IAlgFactory& EventCounterFactory = Factory;<br />
8:<br />
9: EventCounter::EventCounter(const std::string& name, ISvcLocator*<br />
10: pSvcLocator) :<br />
11: Algorithm(name, pSvcLocator),<br />
12: m_skip ( 0 ),<br />
13: m_total( 0 )<br />
14: {<br />
15: declareProperty( "Frequency", m_frequency=1 ); // [1]<br />
16: m_frequency.setBounds( 0, 1000 ); // [2]<br />
17: }<br />
18:<br />
19: StatusCode<br />
20: EventCounter::initialize()<br />
21: {<br />
22: MsgStream log(msgSvc(), name());<br />
23: log &lt;&lt; MSG::INFO &lt;&lt; "Frequency: " &lt;&lt; m_frequency &lt;&lt; endreq; // [3]<br />
24: return StatusCode::SUCCESS;<br />
25: }<br />
1. A default value may be specified when the property is declared.<br />
2. Optional upper and lower bounds may be set (see later).<br />
3. The value of the property is accessible directly using the property itself.<br />
In the Algorithm constructor, when calling declareProperty, you can optionally set the bounds<br />
using any of:<br />
setBounds( const T& lower, const T& upper );<br />
setLower ( const T& lower );<br />
setUpper ( const T& upper );<br />
There are similar selectors and modifiers to determine whether a bound has been set etc., or to clear a<br />
bound.<br />
bool hasLower( )<br />
bool hasUpper( )<br />
T lower( )<br />
T upper( )<br />
void clearBounds( )<br />
void clearLower( )<br />
void clearUpper( )<br />
You can set the value using the "=" operator or the set functions<br />
bool set( const T& value )<br />
bool setValue( const T& value )<br />
The boolean return value indicates whether the new value was within any bounds and was therefore<br />
successfully updated. In order to access the value of the property, use:<br />
m_property.value( );<br />
In addition there's a cast operator, so you can also use m_property directly instead of<br />
m_property.value().<br />
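The bounds-checking semantics described above can be sketched as a small, self-contained class. This is<br />
a simplified, modern-C++ stand-in for the SimpleProperty classes, written for illustration only; the real<br />
implementation differs.<br />

```cpp
#include <optional>

// Simplified stand-in for a bounded SimpleProperty<T>.
// Illustrates the setBounds()/set()/value() semantics described
// in the text; this is NOT the Gaudi implementation.
template <typename T>
class BoundedProperty {
public:
    explicit BoundedProperty(T v = T()) : m_value(v) {}

    void setBounds(const T& lo, const T& hi) { m_lower = lo; m_upper = hi; }
    void setLower (const T& lo) { m_lower = lo; }
    void setUpper (const T& hi) { m_upper = hi; }
    bool hasLower() const { return m_lower.has_value(); }
    bool hasUpper() const { return m_upper.has_value(); }
    void clearBounds() { m_lower.reset(); m_upper.reset(); }

    // Returns false (and leaves the value unchanged) if the new
    // value violates a bound -- mirroring set()/setValue().
    bool set(const T& v) {
        if ((m_lower && v < *m_lower) || (m_upper && v > *m_upper)) return false;
        m_value = v;
        return true;
    }
    BoundedProperty& operator=(const T& v) { set(v); return *this; }

    const T& value() const { return m_value; }
    operator const T&() const { return m_value; }  // cast operator

private:
    T m_value;
    std::optional<T> m_lower, m_upper;
};
```

Because set() refuses out-of-bounds values, an Algorithm cannot accidentally violate the bounds<br />
declared for its own property.<br />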
13.3.1.2 CommandProperty<br />
CommandProperty is a subclass of StringProperty that has a handler that is called whenever<br />
the value of the property is changed. Currently that can happen only during the job initialization so it is<br />
not terribly useful. Alternatively, an Algorithm could set the property of one of its sub-algorithms.<br />
However, it is envisaged that <strong>Athena</strong> will be extended with a scripting language such that properties can<br />
be modified during the course of execution.<br />
The relevant portion of the interface to CommandProperty is:<br />
class CommandProperty : public StringProperty {<br />
public:<br />
[...]<br />
virtual void handler( const std::string& value ) = 0;<br />
[...]<br />
};<br />
Thus subclasses should override the handler() member function, which will be called whenever the<br />
property value changes. A future development is expected to be a ParsableProperty (or something<br />
similar) that would offer support for parsing the string.<br />
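The notify-on-change behaviour can be illustrated with a self-contained sketch. The callback member<br />
below is an assumption made for the example; the real CommandProperty instead uses the virtual<br />
handler() member function shown above.<br />

```cpp
#include <functional>
#include <string>

// Illustrative stand-in for CommandProperty: a string property whose
// handler is invoked every time the value changes. The std::function
// callback is an example device, not the Gaudi interface.
class HandledStringProperty {
public:
    explicit HandledStringProperty(std::function<void(const std::string&)> h)
        : m_handler(std::move(h)) {}

    void set(const std::string& v) {
        m_value = v;
        if (m_handler) m_handler(m_value);  // notify on every change
    }
    const std::string& value() const { return m_value; }

private:
    std::string m_value;
    std::function<void(const std::string&)> m_handler;
};
```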
13.3.2 Job options file format<br />
The job options file has a well-defined syntax (similar to a simplified C++ syntax) without data types.<br />
The data types are recognised by the “Job Options Compiler”, which interprets the job options file<br />
according to the syntax (described in Appendix C together with possible compiler error codes).<br />
The job options file is an ASCII file, composed logically of a series of statements. The end of a<br />
statement is signalled by a semicolon (“;”), as in C++.<br />
Comments are the same as in C++, with ’//’ until the end of the line, or between ’/*’ and ’*/’.<br />
There are four constructs which can be used in a job options file:<br />
• Assignment statement<br />
• Append statement<br />
• Include directive<br />
• Platform dependent execution directive<br />
13.3.2.1 Assignment statement<br />
An assignment statement assigns a certain value (or a vector of values) to a property of an object or<br />
identifier. An assignment statement has the following structure:<br />
&lt; Object/Identifier &gt; . &lt; Propertyname &gt; = &lt; value &gt;;<br />
The first token (Object / Identifier) specifies the name of the object whose property is to be<br />
set. This must be followed by a dot (’.’)<br />
The next token (Propertyname) is the name of the option to be set, as declared in the<br />
declareProperty() method of the IProperty interface. This must be followed by an assign<br />
symbol (’=’).<br />
The final token (value) is the value to be assigned to the property. It can be a vector of values, in<br />
which case the values are enclosed in array brackets (’{‘,’}‘), and separated by commas (,). The token<br />
must be terminated by a semicolon (’;’).<br />
The type of the value(s) must match that of the variable whose value is to be set, as declared in<br />
declareProperty(). The following types are recognised:
<strong>Athena</strong><br />
Chapter 13 Framework services Version/Issue: 2.0.0<br />
Boolean-type, written as true or false.<br />
e.g. true; false;<br />
Integer-type, written as an integer value (containing one or more of the digits ’0’, ’1’, ’2’, ’3’, ’4’,<br />
’5’, ’6’, ’7’, ’8’, ’9’)<br />
e.g.: 123; -923; or in scientific notation, e.g.: 12e2;<br />
Real-type (similar to double in C++), written as a real value (containing one or more of the<br />
digits ’0’, ’1’, ’2’, ’3’, ’4’, ’5’, ’6’, ’7’, ’8’, ’9’ followed by a dot ’.’ and optionally one or more of digits<br />
again)<br />
e.g.: 123.; -123.45; or in scientific notation, e.g. 12.5e7;<br />
String type, written within a pair of double quotes (‘ ” ’)<br />
e.g.: “I am a string”; (Note: strings without double quotes are not allowed!)<br />
Vector of the types above, within array-brackets (’{’, ’}’), separated by a comma (’,’)<br />
e.g.: {true, false, true};<br />
e.g.: {124, -124, 135e2};<br />
e.g.: {123.53, -23.53, 123., 12.5e2};<br />
e.g.: {“String 1”, “String 2”, “String 3”};<br />
A single element which should be stored in a vector must be within array-brackets without<br />
a comma<br />
e.g. {true};<br />
e.g. {“String”};<br />
A vector which has already been defined earlier in the file (or in included files) can be<br />
reset to an empty vector<br />
e.g. {};<br />
13.3.2.2 Append Statement<br />
Because of the possibility of including other job option files (see below), it is sometimes necessary to<br />
extend a vector of values already defined in the other job option file. This functionality is provided by<br />
the append statement.<br />
An append statement has the following syntax:<br />
&lt; Object/Identifier &gt; . &lt; Propertyname &gt; += &lt; value &gt;;<br />
The only difference from the assignment statement is that the append statement requires the ’+=’<br />
symbol instead of the ‘=’ symbol to separate the Propertyname and value tokens.<br />
The value must be an array of one or more values<br />
e.g. {true};<br />
e.g. {“String”};<br />
e.g.: {true, false, true};<br />
e.g.: {124, -124, 135e2};<br />
e.g.: {123.53, -23.53, 123., 12.5e2};<br />
e.g.: {“String 1”, “String 2”, “String 3”};<br />
The job options compiler itself tests whether the object or identifier already exists (i.e. has already been<br />
defined in an included file) and checks the type of the existing property. If the object exists and the type<br />
is compatible, the compiler appends the value to the existing property. If the property does not exist, the<br />
append operation "+=" behaves as the assignment operation “=”.<br />
13.3.2.3 Including other Job Option Files<br />
It is possible to include other job option files in order to use pre-defined options for certain objects. This<br />
is done using the #include directive:<br />
#include “filename.ext”<br />
The “filename.ext” can also contain the path where this file is located. The include directive can<br />
be placed anywhere in the job option file, but it is strongly recommended to place it at the very top of<br />
the file (as in C++).<br />
It is possible to use environment variables in the #include statement, either standalone or as part of a<br />
string. Both Unix style (“$environmentvariable”) and Windows style<br />
(“%environmentvariable%”) are understood (on both platforms!)<br />
As mentioned above, you can append values to vectors defined in an included job option file. The<br />
interpreter creates these vectors at the moment it interprets the included file, so you can only append to<br />
vectors defined in a file included before the append statement!<br />
As in C/C++, an included job option file can include other job option files. The compiler itself checks<br />
whether an include file has already been included, so there is no need for #ifndef guards, as used in C<br />
or C++, to protect against multiple inclusion.<br />
Sometimes it is necessary to over-ride a value defined previously (maybe in an include file). This is<br />
done by using an assign statement with the same object and Propertyname. The last value assigned is<br />
the valid value!<br />
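Putting the constructs above together, a complete job options file might look as follows. The file name,<br />
the included file and the DLL names are purely illustrative:<br />

```
// myJob.opts -- an illustrative job options file; the included file
// and the DLL names below are hypothetical examples.
#include "$STDOPTS/Common.opts"

// Assignment: the last assignment wins, overriding any value set
// in the included file.
MessageSvc.OutputLevel = 3;

// Append: extend a vector already defined in the included file.
ApplicationMgr.ExtSvc += { "ParticlePropertySvc" };

// Platform-dependent execution: only WIN32 is defined.
#ifdef WIN32
ApplicationMgr.DLLs += { "ExampleWinDll" };
#else
ApplicationMgr.DLLs += { "ExampleLinuxDll" };
#endif
```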
13.3.2.4 Platform dependent execution<br />
Statements can be executed conditionally, depending on the platform in use. Statements within<br />
platform-dependent clauses are executed only if the clause matches the current platform:<br />
#ifdef WIN32<br />
(Platform-Dependent Statement)<br />
#else (optional)<br />
(Platform-Dependent Statement)<br />
#endif<br />
Only the variable WIN32 is defined. An #ifdef WIN32 checks whether the current platform is a<br />
Windows platform. If so, the statements up to an #endif or an optional #else are executed. On<br />
non-Windows platforms the code between #else and #endif is executed instead. Alternatively, one<br />
can check directly for a non-Windows platform by using the #ifndef WIN32 clause.<br />
13.3.3 Example<br />
We have already seen an example of a job options file in Listing 4.2 on page 24. The use of the<br />
#include statement is demonstrated on line 2: the logical name $STDOPTS is defined in the<br />
GaudiExamples package, which contains a number of standard job options include files that can be<br />
used by applications.<br />
13.4 The Standard Message Service<br />
One of the components directly visible to an algorithm object is the message service. The purpose of<br />
this service is to provide facilities for the logging of information, warnings, errors etc. The advantage of<br />
introducing such a component, as opposed to using the standard std::cout and std::cerr<br />
streams available in C++, is that we have more control over what is printed and where it is printed.<br />
These considerations are particularly important in an online environment.<br />
The Message Service is configurable via the job options file to only output messages if their “activation<br />
level” is equal to or above a given “output level”. The output level can be configured with a global<br />
default for the whole application:<br />
// Set output level threshold (2=DEBUG, 3=INFO, 4=WARNING, 5=ERROR, 6=FATAL)<br />
MessageSvc.OutputLevel = 4;<br />
and/or locally for a given client object (e.g. myAlgorithm):<br />
myAlgorithm.OutputLevel = 2;<br />
Any object wishing to print some output must use the message service. A pointer to the<br />
IMessageSvc interface of the message service is available to an algorithm via the accessor method<br />
msgSvc(), see section 5.2. It is of course possible to use this interface directly, but a utility class<br />
called MsgStream is provided which should be used instead.<br />
13.4.1 The MsgStream utility<br />
The MsgStream class is responsible for constructing a Message object which it then passes onto the<br />
message service. Where the message is ultimately sent to is decided by the message service.<br />
In order to avoid formatting messages which will not be printed because their level is below the current<br />
output threshold, a MsgStream object first checks that a message will be printed before actually constructing it.<br />
However the threshold for a MsgStream object is not dynamic, i.e. it is set at creation time and<br />
remains the same. Thus in order to keep synchronized with the message service, which in principle<br />
could change its printout level at any time, MsgStream objects should be made locally on the stack<br />
when needed. For example, if you look at the listing of the HelloWorld class (see also Listing 13.1<br />
below) you will note that MsgStream objects are instantiated locally (i.e. not using new) in all three<br />
of the IAlgorithm methods and thus are destructed when the methods return. If this is not done<br />
messages may be lost, or too many messages may be printed.<br />
The MsgStream class has been designed to resemble closely a normal stream class such as<br />
std::cout, and in fact internally uses an ostrstream object. All of the MsgStream member<br />
functions write unformatted data; formatted output is handled by the insertion operators.
An example use of the MsgStream class is shown below.<br />
Listing 13.1 Use of a MsgStream object.<br />
1: #include "GaudiKernel/MsgStream.h"<br />
2:<br />
3: StatusCode myAlgo::finalize() {<br />
4: StatusCode status = Algorithm::finalize();<br />
5: MsgStream log(msgSvc(), name());<br />
6: if ( status.isFailure() ) {<br />
7: // Print a two line message in case of failure.<br />
8: log &lt;&lt; MSG::ERROR &lt;&lt; "Finalization of the Algorithm base class failed" &lt;&lt; endreq<br />
9: &lt;&lt; "while finalizing " &lt;&lt; name() &lt;&lt; endreq;<br />
10: }<br />
11: return status;<br />
12: }<br />
Formatted output for a user-defined type is supported by overloading the insertion operator:<br />
MsgStream&amp; operator&lt;&lt; ( MsgStream&amp; stream, const UserType&amp; value );<br />
13.5 The Particle Properties Service<br />
The Particle Property service is a utility to find information about a named particle’s Geant3 ID,<br />
Jetset/Pythia ID, Geant3 tracking type, charge, mass or lifetime. The database used by the service can<br />
be changed, but by default is the same as that used by the LHCb SICB program. Note that the units<br />
conform to the CLHEP convention, in particular MeV for masses and ns for lifetimes. Any comment to<br />
the contrary in the code is just a leftover which has been overlooked!<br />
13.5.1 Initialising and Accessing the Service<br />
This service is created by adding the following line in the Job Options file:<br />
// Create the particle properties service<br />
ApplicationMgr.ExtSvc += { "ParticlePropertySvc" };<br />
Listing 13.2 on page 96 shows how to access this service from within an algorithm.<br />
13.5.2 Service Properties<br />
The Particle Property Service currently only has one property: ParticlePropertiesFile. This<br />
string property is the name of the database file that should be used by the service to build up its list of<br />
particle properties. The default value of this property, on all platforms, is<br />
$LHCBDBASE/cdf/particle.cdf 1<br />
13.5.3 Service Interface<br />
The service implements the IParticlePropertySvc interface. In order to use it, clients must<br />
include the file GaudiKernel/IParticlePropertySvc.h.<br />
The service itself consists of one STL vector to access all of the existing particle properties, and three<br />
STL maps, one to map particles by name, one to map particles by Geant3 ID and one to map particles<br />
by stdHep ID.<br />
Although there are three maps, there is only one copy of each particle property and thus each property<br />
must have a unique particle name and a unique Geant3 ID. The third map does not contain all particles<br />
contained in the other two maps; this is because there are particles known to Geant but not to stdHep,<br />
such as Deuteron or Cerenkov. Although retrieving particles by name should be sufficient, the second<br />
and third maps are there because most often generated data stores a particle’s Geant3 ID or stdHep ID,<br />
and not the particle’s name. These maps speed up searches using the IDs.<br />
1. This is an LHCb specific file. A generic implementation will be available in a future release of <strong>Athena</strong><br />
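The storage scheme described above, one owning vector plus lookup maps holding pointers to the same<br />
objects, can be sketched in a few lines of self-contained C++. This illustrates the design only and is not<br />
the Gaudi implementation:<br />

```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

// Illustrative sketch: one vector owns the ParticleProperty objects,
// and the lookup maps hold non-owning pointers into it. Names and
// Geant3 IDs must be unique; the stdHep map may miss Geant-only
// particles, mirroring the scheme described in the text.
struct ParticleProperty {
    std::string name;
    int geantId;
    int stdHepId;   // 0 here marks a Geant-only particle (illustrative)
    double mass;    // MeV, following the CLHEP convention
};

class ParticleTable {
public:
    bool add(const ParticleProperty& pp) {
        if (m_byName.count(pp.name) || m_byGeantId.count(pp.geantId))
            return false;               // uniqueness of name and Geant3 ID
        m_store.push_back(std::make_unique<ParticleProperty>(pp));
        ParticleProperty* p = m_store.back().get();
        m_byName[p->name] = p;
        m_byGeantId[p->geantId] = p;
        if (p->stdHepId != 0)           // Geant-only particles skip this map
            m_byStdHepId[p->stdHepId] = p;
        return true;
    }
    ParticleProperty* find(const std::string& name) const {
        auto it = m_byName.find(name);
        return it == m_byName.end() ? nullptr : it->second;
    }
    ParticleProperty* find(int geantId) const {
        auto it = m_byGeantId.find(geantId);
        return it == m_byGeantId.end() ? nullptr : it->second;
    }

private:
    std::vector<std::unique_ptr<ParticleProperty>> m_store;
    std::map<std::string, ParticleProperty*> m_byName;
    std::map<int, ParticleProperty*> m_byGeantId;
    std::map<int, ParticleProperty*> m_byStdHepId;
};
```

Each property is stored exactly once; both lookups return a pointer to the same object, which is why<br />
the maps can stay consistent without duplicating data.<br />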
The IParticlePropertySvc interface provides the following functions:<br />
Listing 13.2 The IParticlePropertySvc interface.<br />
// IParticlePropertySvc interface:<br />
// Create a new particle property.<br />
// Input: particle, String name of the particle.<br />
// Input: geantId, Geant ID of the particle.<br />
// Input: jetsetId, Jetset ID of the particle.<br />
// Input: type, Particle type.<br />
// Input: charge, Particle charge (/e).<br />
// Input: mass, Particle mass (MeV).<br />
// Input: tlife, Particle lifetime (ns).<br />
// Return: StatusCode - SUCCESS if the particle property was added.<br />
virtual StatusCode push_back( const std::string& particle, int geantId, int<br />
jetsetId, int type, double charge, double mass, double tlife );<br />
// Create a new particle property.<br />
// Input: pp, a particle property class.<br />
// Return: StatusCode - SUCCESS if the particle property was added.<br />
virtual StatusCode push_back( ParticleProperty* pp );<br />
// Get a const reference to the beginning of the map.<br />
virtual const_iterator begin() const;<br />
// Get a const reference to the end of the map.<br />
virtual const_iterator end() const;<br />
// Get the number of properties in the map.<br />
virtual int size() const;<br />
// Retrieve a property by geant id.<br />
// Pointer is 0 if no property found.<br />
virtual ParticleProperty* find( int geantId );<br />
// Retrieve a property by particle name.<br />
// Pointer is 0 if no property found.<br />
virtual ParticleProperty* find( const std::string& name );<br />
// Retrieve a property by StdHep id<br />
// Pointer is 0 if no property found.<br />
virtual ParticleProperty* findByStdHepID( int stdHepId );<br />
// Erase a property by geant id.<br />
virtual StatusCode erase( int geantId );<br />
// Erase a property by particle name.<br />
virtual StatusCode erase( const std::string& name );<br />
// Erase a property by StdHep id<br />
virtual StatusCode eraseByStdHepID( int stdHepId );<br />
The IParticlePropertySvc interface also provides some typedefs for easier coding:<br />
typedef ParticleProperty* mapped_type;<br />
typedef std::map&lt; int, mapped_type, std::less&lt;int&gt; &gt; MapID;<br />
typedef std::map&lt; std::string, mapped_type, std::less&lt;std::string&gt; &gt; MapName;<br />
typedef std::map&lt; int, mapped_type, std::less&lt;int&gt; &gt; MapStdHepID;<br />
typedef IParticlePropertySvc::VectPP VectPP;<br />
typedef IParticlePropertySvc::const_iterator const_iterator;<br />
typedef IParticlePropertySvc::iterator iterator;<br />
13.5.4 Examples<br />
Below are some extracts of code from the LHCb ParticleProperties example to show how one<br />
might use the service:<br />
Listing 13.3 Code fragment to find particle properties by particle name.<br />
// Try finding particles by the different methods<br />
log &lt;&lt; MSG::INFO &lt;&lt; "Trying to find the positron by name" &lt;&lt; endreq;<br />
ParticleProperty* pp = m_ppSvc-&gt;find( "e+" );<br />
if ( pp ) log &lt;&lt; MSG::INFO &lt;&lt; "Found particle property for e+" &lt;&lt; endreq;<br />
13.6 The Chrono & Stat service<br />
The Chrono & Stat service provides a facility to do time profiling of code (Chrono part) and to do some<br />
statistical monitoring of simple quantities (Stat part). The service is created by default by the<br />
Application Manager, with the name “ChronoStatSvc” and interface ID IID_IChronoStatSvc.<br />
To access the service from inside an algorithm, the member function<br />
chronoSvc() is provided. The job options to configure this service are described in Appendix B,<br />
Table B.19.<br />
13.6.1 Code profiling<br />
Profiling is performed by using the chronoStart() and chronoStop() methods inside the code<br />
to be profiled, e.g.:<br />
/// ...<br />
IChronoStatSvc* svc = chronoSvc();<br />
/// start<br />
svc-&gt;chronoStart( "SomeTag" );<br />
/// here some user code is placed:<br />
...<br />
/// stop<br />
svc->chronoStop( "SomeTag" );<br />
The profiling information accumulates under the tag name given as argument to these methods. The<br />
service measures the time elapsed between subsequent calls of chronoStart() and<br />
chronoStop() with the same tag. The latter is important: in the sequence of calls below, only<br />
the elapsed time between lines 3 and 5 and between lines 7 and 9 would be accumulated:<br />
1: svc->chronoStop("Tag");<br />
2: svc->chronoStop("Tag");<br />
3: svc->chronoStart("Tag");<br />
4: svc->chronoStart("Tag");<br />
5: svc->chronoStop("Tag");<br />
6: svc->chronoStop("Tag");<br />
7: svc->chronoStart("Tag");<br />
8: svc->chronoStart("Tag");<br />
9: svc->chronoStop("Tag");<br />
The profiling information could be printed either directly using the chronoPrint() method of the<br />
service, or in the summary table of profiling information at the end of the job.<br />
Note that this method of code profiling should be used only for fine grained monitoring inside<br />
algorithms. To profile a complete algorithm you should use the Auditor service, as described in section<br />
13.7.
13.6.2 Statistical monitoring<br />
Statistical monitoring is performed by using the stat() method inside user code:<br />
1: /// ... Flag and Weight to be accumulated:<br />
2: svc->stat( " Number of Tracks " , Flag , Weight );<br />
The statistical information contains the "accumulated" flag, which is the sum of all Flags for the given<br />
tag, and the "accumulated" weight, which is the product of all Weights for the given tag. The<br />
information is printed in the final table of statistics.<br />
In some sense the profiling could be considered as statistical monitoring, where the variable Flag<br />
equals the elapsed time of the process.<br />
13.6.3 Chrono and Stat helper classes<br />
To simplify the usage of the Chrono & Stat Service, two helper classes were developed: class<br />
Chrono and class Stat. These utilities hide the communication with the Chrono &amp; Stat<br />
Service and provide a friendlier environment.<br />
13.6.3.1 Chrono<br />
Chrono is a small helper class which invokes the chronoStart() method in the constructor and<br />
the chronoStop() method in the destructor. It must be used as an automatic local object.<br />
It performs the profiling of the code between its own creation and the end of the current scope, e.g:<br />
1: #include "GaudiKernel/Chrono.h"<br />
2: /// ...<br />
3: { // begin of the scope<br />
4: Chrono chrono( chronoSvc() , "ChronoTag" ) ;<br />
5: /// some codes:<br />
6: ...<br />
7: ///<br />
8: } // end of the scope<br />
9: /// ...<br />
If the Chrono &amp; Stat Service is not accessible, the Chrono object does nothing.<br />
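The RAII behaviour of Chrono can be sketched with a self-contained stand-in. MockChronoSvc is a<br />
hypothetical substitute for the real IChronoStatSvc, used only to make the example runnable; note how<br />
a second chronoStart() with the same tag is ignored, matching the pairing rule described above.<br />

```cpp
#include <chrono>
#include <map>
#include <string>

// Hypothetical stand-in for IChronoStatSvc, for illustration only.
struct MockChronoSvc {
    std::map<std::string, double> elapsed;  // accumulated seconds per tag
    std::map<std::string, std::chrono::steady_clock::time_point> started;

    void chronoStart(const std::string& tag) {
        if (!started.count(tag))            // second start with same tag: ignored
            started[tag] = std::chrono::steady_clock::now();
    }
    void chronoStop(const std::string& tag) {
        auto it = started.find(tag);
        if (it == started.end()) return;    // unmatched stop: ignored
        elapsed[tag] += std::chrono::duration<double>(
            std::chrono::steady_clock::now() - it->second).count();
        started.erase(it);
    }
};

// RAII sketch of the Chrono helper: chronoStart() in the constructor,
// chronoStop() in the destructor; a null service pointer is a no-op.
class ScopedChrono {
public:
    ScopedChrono(MockChronoSvc* svc, std::string tag)
        : m_svc(svc), m_tag(std::move(tag)) {
        if (m_svc) m_svc->chronoStart(m_tag);
    }
    ~ScopedChrono() {
        if (m_svc) m_svc->chronoStop(m_tag);
    }
private:
    MockChronoSvc* m_svc;
    std::string m_tag;
};
```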
13.6.3.2 Stat<br />
Stat is a small helper class, which invokes the stat() method in the constructor.<br />
1: #include "GaudiKernel/Stat.h"<br />
2: /// ...<br />
3: Stat stat( chronoSvc() , "StatTag" , Flag , Weight ) ;<br />
4: /// ...<br />
If the Chrono & Stat Service is not accessible, the Stat object does nothing.<br />
13.6.4 Performance considerations<br />
The implementation of the Chrono & Stat Service uses two std::map containers and could generate<br />
a performance penalty for very frequent calls. Usually the penalty is small relative to the elapsed time<br />
of algorithms, but it is worth avoiding the direct use of the Chrono &amp; Stat Service, as well as its use<br />
through the Chrono or Stat utilities, inside internal loops:<br />
1: /// ...<br />
2: { /// begin of the scope<br />
3: Chrono chrono( chronoSvc() , "Good Chrono"); /// OK<br />
4: long double a = 0 ;<br />
5: for( long i = 0 ; i < 1000000 ; ++i )<br />
6: {<br />
7: Chrono chrono( svc , "Bad Chrono"); /// not OK<br />
8: /// some codes :<br />
9: a += sin( cos( sin( cos( (long double) i ) ) ) );<br />
10: /// end of codes<br />
11: Stat stat ( svc , "Bad Stat", a ); /// not OK<br />
12: }<br />
13: Stat stat ( svc , "Good Stat", a); /// OK<br />
14: } /// end of the scope!<br />
15: /// ...<br />
13.7 The Auditor Service<br />
The Auditor Service provides a set of auditors that can be used to provide monitoring of various<br />
characteristics of the execution of Algorithms. Each auditor is called immediately before and after each<br />
call to each Algorithm instance, and can track some resource usage of the Algorithm. Calls that are thus<br />
monitored are initialize(), execute() and finalize(), although monitoring can be<br />
disabled for any of these for particular Algorithm instances. Only the execute() function monitoring<br />
is enabled by default.<br />
Several examples of auditors are provided. These are:<br />
• NameAuditor. This just emits the name of the Algorithm to the Standard Message Service<br />
immediately before and after each call. It therefore acts as a diagnostic tool to trace program<br />
execution.<br />
• ChronoAuditor. This monitors the CPU usage of each algorithm and reports both the total and<br />
per event average at the end of job.<br />
• MemoryAuditor. This monitors the state of memory usage during execution of each<br />
Algorithm, and will warn when memory is allocated within a call without being released on<br />
exit. Unfortunately this will in fact be the general case for Algorithms that are creating new<br />
data and registering them with the various transient stores. Such Algorithms will therefore<br />
cause warning messages to be emitted. However, for Algorithms that are just reading data<br />
from the transient stores, these warnings will provide an indication of a possible memory leak.<br />
Note that currently the MemoryAuditor is only available for Linux.<br />
• MemStatAuditor. The same as MemoryAuditor, but prints a table of memory usage statistics<br />
at the end.<br />
13.7.1 Enabling the Auditor Service and specifying the enabled Auditors<br />
The Auditor Service is enabled by the following line in the Job Options file:<br />
// Enable the Auditor Service<br />
ApplicationMgr.DLLs += { "GaudiAud" };<br />
Specifying which auditors are enabled is illustrated by the following example:<br />
// Enable the NameAuditor and ChronoAuditor<br />
AuditorSvc.Auditors = { "NameAuditor", "ChronoAuditor" };<br />
13.7.2 Overriding the default Algorithm monitoring<br />
Only monitoring of the Algorithm execute() function is enabled by default. This default<br />
can be overridden for individual Algorithms by use of the following Algorithm properties:<br />
// Enable initialize and finalize auditing & disable execute auditing<br />
// for the myAlgorithm Algorithm<br />
myAlgorithm.AuditInitialize = true;<br />
myAlgorithm.AuditExecute = false;<br />
myAlgorithm.AuditFinalize = true;<br />
13.7.3 Implementing new Auditors<br />
The relevant portion of the IAuditor abstract interface is shown below:<br />
virtual StatusCode beforeInitialize( IAlgorithm* theAlg ) = 0;<br />
virtual StatusCode afterInitialize ( IAlgorithm* theAlg ) = 0;<br />
virtual StatusCode beforeExecute ( IAlgorithm* theAlg ) = 0;<br />
virtual StatusCode afterExecute ( IAlgorithm* theAlg ) = 0;<br />
virtual StatusCode beforeFinalize ( IAlgorithm* theAlg ) = 0;<br />
virtual StatusCode afterFinalize ( IAlgorithm* theAlg ) = 0;<br />
A new Auditor should inherit from the Auditor base class and override the appropriate functions from<br />
the IAuditor abstract interface. The following code fragment is taken from the ChronoAuditor:<br />
#include "GaudiKernel/Auditor.h"<br />
class ChronoAuditor : virtual public Auditor {<br />
public:<br />
ChronoAuditor(const std::string& name, ISvcLocator* pSvcLocator);<br />
virtual ~ChronoAuditor();<br />
virtual StatusCode beforeInitialize(IAlgorithm* alg);<br />
virtual StatusCode afterInitialize(IAlgorithm* alg);<br />
virtual StatusCode beforeExecute(IAlgorithm* alg);<br />
virtual StatusCode afterExecute(IAlgorithm* alg);<br />
virtual StatusCode beforeFinalize(IAlgorithm* alg);<br />
virtual StatusCode afterFinalize(IAlgorithm* alg);<br />
};<br />
13.8 The Random Numbers Service<br />
When generating random numbers two issues must be considered:<br />
• reproducibility and<br />
• randomness of the generated numbers.<br />
To ensure both, <strong>Athena</strong> implements a single service that meets these criteria. The<br />
encapsulation of the actual random number generator in a service has several advantages:<br />
• Random seeds are set by the framework. When debugging the detector simulation, the<br />
program should be able to start at any event, independently of the events simulated before. Unlike<br />
the random number generators known from CERNLIB, the state of modern generators<br />
is defined not by one or two numbers but by a fairly large set of numbers. To<br />
ensure reproducibility the random number generator must therefore be initialized for every event.<br />
• The distribution of the random numbers generated is independent of the random number<br />
engine behind. Any distribution can be generated starting from a flat distribution.<br />
• The actual number generator can easily be replaced if at some time in the future better<br />
generators become available, without affecting any user code.<br />
The implementations of both the generators and the random number engines are taken from CLHEP. The<br />
default random number engine used by <strong>Athena</strong> is the RanLux engine of CLHEP with a luxury level of<br />
3, which is also the default for Geant4, so that the same mechanism is used to generate random numbers as in<br />
the detector simulation.<br />
Figure 13.1 shows the general architecture of the <strong>Athena</strong> random number service. The client interacts<br />
with the service in the following way:<br />
• The client requests a generator from the service, which is able to produce a generator<br />
according to a requested distribution. The client then retrieves the requested generator.<br />
• Behind the scenes, the generator service creates the requested generator and initializes the<br />
object according to the parameters. The service also supplies the shared random number<br />
engine to the generator.<br />
• After the client has finished using the generator, the object must be released in order to prevent<br />
resource leaks.<br />
Figure 13.1 The architecture of the random number service. The client requests from the service a random<br />
number generator satisfying certain criteria: the RndmGenSvc owns and initializes a RndmGen for the<br />
requested distribution (e.g. Gauss); the RndmGen in turn uses the RndmEngine, which the service also owns.<br />
There are many different distributions available. The shape of the distribution must be supplied as a<br />
parameter when the generator is requested by the user.<br />
Currently implemented distributions include the following. See also the header file<br />
GaudiKernel/RndmGenerators.h for a description of the parameters to be supplied.<br />
• Generate random bit patterns with parameters Rndm::Bit()<br />
• Generate a flat distribution with boundaries [min, max] with parameters:<br />
Rndm::Flat(double min, double max)<br />
• Generate a gaussian distribution with parameters: Rndm::Gauss(double mean,<br />
double sigma)<br />
• Generate a poissonian distribution with parameters: Rndm::Poisson(double mean)<br />
• Generate a binomial distribution according to n tests with a probability p with parameters:<br />
Rndm::Binomial(long n, double p)<br />
• Generate an exponential distribution with parameters: Rndm::Exponential(double<br />
mean)<br />
• Generate a Chi**2 distribution with n_dof degrees of freedom with parameters:<br />
Rndm::Chi2(long n_dof)<br />
• Generate a Breit-Wigner distribution with parameters:<br />
Rndm::BreitWigner(double mean, double gamma)<br />
• Generate a Breit-Wigner distribution with a cut-off with parameters:<br />
Rndm::BreitWignerCutOff(double mean, double gamma, double cutOff)<br />
• Generate a Landau distribution with parameters:<br />
Rndm::Landau(double mean, double sigma)<br />
• Generate a user defined distribution. The probability density function is given by a set of<br />
discrete points passed as a vector of doubles:<br />
Rndm::DefinedPdf(const std::vector<double>& pdf, long intpol)<br />
Clearly the supplied list of distributions is not exhaustive, but it probably covers most needs.<br />
The list reflects the present content of the generators available in CLHEP and can be extended in<br />
case other distributions are implemented.<br />
Since there is a danger that the interfaces are not released, a wrapper is provided that automatically<br />
releases all resources once the object goes out of scope. This wrapper allows the use of the random<br />
number service in a simple way. Typically there are two different usages of this wrapper:<br />
• Within the user code a series of numbers is required only once, i.e. not every event. In this<br />
case the object is used locally and resources are released immediately after use. This example<br />
is shown in Listing 13.5.<br />
Listing 13.5 Example of the use of the random number generator to fill a histogram with a Gaussian<br />
distribution within a standard <strong>Athena</strong> algorithm<br />
1: Rndm::Numbers gauss(randSvc(), Rndm::Gauss(0.5,0.2));<br />
2: if ( gauss ) {<br />
3: IHistogram1D* his = histoSvc()->book("/stat/2","Gaussian",40,0.,3.);<br />
4: for ( long i = 0; i < 5000; i++ )<br />
5: his->fill(gauss(), 1.0);<br />
6: }<br />
• One or several random numbers are required for the processing of every event. An example is<br />
shown in Listing 13.6.<br />
Listing 13.6 Example of the use of the random number generator within a standard <strong>Athena</strong> algorithm, for use<br />
at every event. The wrapper to the generator is part of the Algorithm itself and must be initialized before being<br />
used. Afterwards the usage is identical to the example described in Listing 13.5<br />
1: #include "GaudiKernel/RndmGenerators.h"<br />
2:<br />
3: // Constructor<br />
4: class myAlgorithm : public Algorithm {<br />
5: Rndm::Numbers m_gaussDist;<br />
6: ...<br />
7: };<br />
8:<br />
9: // Initialisation<br />
10: StatusCode myAlgorithm::initialize() {<br />
11: ...<br />
12: StatusCode sc = m_gaussDist.initialize(randSvc(), Rndm::Gauss(0.5,0.2));<br />
13: if ( !sc.isSuccess() ) {<br />
14: // put error handling code here...<br />
15: }<br />
16: ...<br />
17: }<br />
There are a few points to be mentioned in order to ensure reproducibility:<br />
• Do not keep numbers across events. If you need a random number ask for it. Usually caching<br />
does more harm than good. If there is a performance penalty, it is better to find a more generic<br />
solution.<br />
• Do not access the RndmEngine directly.<br />
• Do not manipulate the engine. The random seeds should only be set by the framework on an<br />
event by event basis.<br />
13.9 The Incident Service<br />
The Incident service provides synchronization facilities to components in an <strong>Athena</strong> application.<br />
Incidents are named software events that are generated by software components and that are delivered<br />
to other components that have requested to be informed when that incident happens. The <strong>Athena</strong><br />
components that want to use this service need to implement the IIncidentListener interface,<br />
which has only one method: handle(Incident&), and they need to add themselves as Listeners<br />
to the IncidentSvc. The following code fragment works inside Algorithms.<br />
class MyAlgorithm : public Algorithm, virtual public IIncidentListener {<br />
...<br />
};<br />
StatusCode MyAlgorithm::initialize() {<br />
IIncidentSvc* incsvc = 0;<br />
StatusCode sc = service("IncidentSvc", incsvc);<br />
int priority = 100;<br />
if( sc.isSuccess() ) {<br />
incsvc->addListener( this, "BeginEvent", priority);<br />
incsvc->addListener( this, "EndEvent");<br />
}<br />
return sc;<br />
}<br />
void MyAlgorithm::handle(Incident& inc) {<br />
MsgStream log(msgSvc(), name());<br />
log << MSG::INFO << "Incident received: " << inc.type() << endreq;<br />
}<br />
13.10 Developing new services<br />
13.10.1 The Service base class<br />
Within <strong>Athena</strong> we use the term "Service" to refer to a class whose job is to provide a set of facilities or<br />
utilities to be used by other components. In fact we mean more than this because a concrete service<br />
must derive from the Service base class and thus has a certain amount of predefined behaviour; for<br />
example it has initialize() and finalize() methods which are invoked by the application<br />
manager at well defined times.<br />
Figure 13.1 shows the inheritance structure for an example service called SpecificService. The<br />
key idea is that a service should derive from the Service base class and additionally implement one<br />
or more pure abstract classes (interfaces) such as IConcreteSvcType1 and<br />
IConcreteSvcType2 in the figure.<br />
Figure 13.1 Implementation of a concrete service class. Though not shown in the figure, both of the<br />
IConcreteSvcType interfaces are derived from IInterface.<br />
As discussed above, it is necessary to derive from the Service base class so that the concrete service<br />
may be made accessible to other <strong>Athena</strong> components. The actual facilities provided by the service are<br />
available via the interfaces that it provides. For example the ParticleProperties service<br />
implements an interface which provides methods for retrieving, for example, the mass of a given<br />
particle. In figure 13.1 the service implements two interfaces each of two methods.<br />
A component which wishes to make use of a service makes a request to the application manager.<br />
Services are requested by a combination of name, and interface type, i.e. an algorithm would request<br />
specifically either IConcreteSvcType1 or IConcreteSvcType2.<br />
The identification of what interface types are implemented by a particular class is done via the<br />
queryInterface method of the IInterface interface. This method must be implemented in the<br />
concrete service class. In addition the initialize() and finalize() methods should be<br />
implemented. After initialization the service should be in a state where it may be used by other<br />
components.<br />
The service base class offers a number of facilities itself which may be used by derived concrete service<br />
classes:<br />
• Properties are provided for services just as for algorithms. Thus concrete services may be fine<br />
tuned by setting options in the job options file.<br />
• A serviceLocator method is provided which allows a component to request the use of<br />
other services which it may need.<br />
• A message service.<br />
13.10.2 Implementation details<br />
The following is essentially a checklist of the minimal code required for a service.<br />
1. Define the interfaces<br />
2. Derive the concrete service class from the Service base class.<br />
3. Implement the queryInterface() method.<br />
4. Implement the initialize() method. Within this method you should make a call to<br />
Service::initialize() as the first statement in the method and also make an explicit<br />
call to setProperties() in order to read the service’s properties from the job options<br />
(note that this is different from Algorithms, where the call to setProperties() is done in<br />
the base class).<br />
Listing 13.7 An interface class<br />
#include "GaudiKernel/IInterface.h"<br />
class IConcreteSvcType1 : virtual public IInterface {<br />
public:<br />
virtual void method1() = 0;<br />
virtual int method2() = 0;<br />
};<br />
Listing 13.7 An interface class (continued)<br />
#include "IConcreteSvcType1.h"<br />
const IID& IID_IConcreteSvcType1 = 143; // UNIQUE within the application !!<br />
Listing 13.8 A minimal service implementation<br />
#include "GaudiKernel/Service.h"<br />
#include "IConcreteSvcType1.h"<br />
#include "IConcreteSvcType2.h"<br />
class SpecificService : public Service,<br />
virtual public IConcreteSvcType1,<br />
virtual public IConcreteSvcType2 {<br />
public:<br />
// Constructor of this form required:<br />
SpecificService(const std::string& name, ISvcLocator* sl);<br />
virtual StatusCode queryInterface(const IID& riid, void** ppvIF);<br />
};<br />
Listing 13.8 A minimal service implementation (continued)<br />
// Factory for instantiation of service objects<br />
static SvcFactory<SpecificService> s_factory;<br />
const ISvcFactory& SpecificServiceFactory = s_factory;<br />
// UNIQUE Interface identifiers defined elsewhere<br />
extern const IID& IID_IConcreteSvcType1;<br />
extern const IID& IID_IConcreteSvcType2;<br />
// queryInterface<br />
StatusCode SpecificService::queryInterface(const IID& riid, void** ppvIF) {<br />
if(IID_IConcreteSvcType1 == riid) {<br />
*ppvIF = dynamic_cast<IConcreteSvcType1*> (this);<br />
return StatusCode::SUCCESS;<br />
} else if(IID_IConcreteSvcType2 == riid) {<br />
*ppvIF = dynamic_cast<IConcreteSvcType2*> (this);<br />
return StatusCode::SUCCESS;<br />
} else {<br />
return Service::queryInterface(riid, ppvIF);<br />
}<br />
}<br />
StatusCode SpecificService::initialize() { ... }<br />
StatusCode SpecificService::finalize() { ... }<br />
// Implement the specifics ...<br />
SpecificService::method1() {...}<br />
SpecificService::method2() {...}<br />
SpecificService::method3() {...}<br />
SpecificService::method4() {...}<br />
Chapter 14<br />
Tools and ToolSvc<br />
14.1 Overview<br />
Tools are lightweight objects whose purpose is to help other components perform their work. A<br />
framework service, the ToolSvc, is responsible for creating and managing Tools. An Algorithm<br />
requests the tools it needs from the ToolSvc, and obtains a private instance by declaring<br />
itself as the parent. Since Tools are managed by the ToolSvc, any component 1 can request a tool.<br />
Only Algorithms and Services can declare themselves as Tool parents.<br />
In this chapter we first describe these objects and the difference between “private” and “shared” tools.<br />
We then look at the AlgTool base class and show how to write concrete Tools.<br />
In section 14.3 we describe the ToolSvc and show how a component can retrieve Tools via the<br />
service.<br />
Finally we describe Associators, common utility GaudiTools for which we provide the interface and<br />
base class.<br />
14.2 Tools and Services<br />
As mentioned elsewhere, Algorithms make use of framework services to perform their work. In general<br />
the same instance of a service is used by many algorithms, and Services are set up and initialized once at<br />
the beginning of the job by the framework. Algorithms also delegate some of their work to<br />
sub-algorithms. Creation and execution of sub-algorithms are the responsibility of the parent<br />
algorithm, whereas the initialize() and finalize() methods are invoked automatically by the<br />
framework while initializing the parent algorithm. The properties of a sub-algorithm are automatically<br />
set by the framework, but the parent algorithm can change them during execution. Sharing of data<br />
between nested algorithms is done via the Transient Event Store.<br />
Both Services and Algorithms are created during the initialization stage of a job and live until the job<br />
ends.<br />
1. In this chapter we will use an Algorithm as the example component requesting tools.<br />
Sometimes an encapsulated piece of code needs to be executed only for specific events, in which case it<br />
is desirable to create it only when necessary. On other occasions the same piece of code needs to be<br />
executed many times per event. Moreover it can be necessary to execute a sub-algorithm on specific<br />
contained objects that are selected by the parent algorithm or have the sub-algorithm produce new<br />
contained objects that may or may not be put in the Transient Store. Finally different algorithms may<br />
wish to configure the same piece of code slightly differently or share it as-is with other algorithms.<br />
To provide this kind of functionality we have introduced a category of processing objects that<br />
encapsulate these “light” algorithms. We have called this category Tools.<br />
Some examples of possible tools are single track fitters, association to Monte Carlo truth information,<br />
vertexing between particles, smearing of Monte Carlo quantities.<br />
14.2.1 “Private” and “Shared” Tools<br />
Algorithms can share instances of Tools with other Algorithms if the configuration of the tool<br />
is suitable. In some cases however an Algorithm will need to customize a tool in a specific way in<br />
order to use it. This is possible by requesting the ToolSvc to provide a “private” instance of a tool.<br />
If an Algorithm passes a pointer to itself when it asks the ToolSvc to provide it with a tool, it is<br />
declaring itself as the parent and a “private” instance is supplied. Private instances can be configured<br />
according to the needs of each particular Algorithm via jobOptions.<br />
As mentioned above many Algorithms can use a tool as-is, in which case only one instance of a<br />
Tool is created, configured and passed by the ToolSvc to the different algorithms. This is called a<br />
“shared” instance. The parent of “shared” tools is the ToolSvc.<br />
14.2.2 The Tool classes<br />
14.2.2.1 The AlgTool base class<br />
The main responsibilities of the AlgTool base class (see Listing 14.1) are the identification of<br />
tool instances, the initialisation of certain internal pointers when the tool is created, and the<br />
management of the tool properties. The AlgTool base class also offers some facilities to help in the<br />
implementation of derived tools.<br />
Listing 14.1 The definition of the AlgTool Base class<br />
1: class AlgTool : public virtual IAlgTool,<br />
2: public virtual IProperty {<br />
3:<br />
4: public:<br />
5: // Standard Constructor.<br />
6: AlgTool( const std::string& type, const std::string& name,<br />
const IInterface* parent);<br />
7:<br />
8: virtual const std::string& name() const;<br />
9: virtual const std::string& type() const;<br />
10: virtual const IInterface* parent() const;<br />
11:<br />
12: virtual StatusCode setProperty(const Property& p);<br />
13: virtual StatusCode getProperty(Property* p) const;<br />
14: virtual const Property& getProperty( const std::string& name) const;<br />
15: virtual const std::vector<Property*>& getProperties( ) const;<br />
16:<br />
17: ISvcLocator* serviceLocator();<br />
18: IMessageSvc* msgSvc();<br />
19: IMessageSvc* msgSvc() const;<br />
20:<br />
21: StatusCode setProperties();<br />
22:<br />
23: StatusCode declareProperty(const std::string& name, int& reference);<br />
24: StatusCode declareProperty(const std::string& name, double& reference);<br />
25: StatusCode declareProperty(const std::string& name, bool& reference);<br />
26: StatusCode declareProperty(const std::string& name,<br />
std::string& reference);<br />
27: StatusCode declareProperty(const std::string& name,<br />
std::vector<int>& reference);<br />
28: StatusCode declareProperty(const std::string& name,<br />
std::vector<double>& reference);<br />
29: StatusCode declareProperty(const std::string& name,<br />
std::vector<bool>& reference);<br />
30: StatusCode declareProperty(const std::string& name,<br />
std::vector<std::string>& reference);<br />
31:<br />
32: protected:<br />
33: // Standard destructor.<br />
34: virtual ~AlgTool();<br />
35: };<br />
Access to Services - A serviceLocator() method is provided to enable the derived tools to<br />
locate the services necessary to perform their jobs. Since concrete Tools are instantiated by the<br />
ToolSvc upon request, all Services created by the framework prior to the creation of a tool are<br />
available. In addition access to the message service is provided via the msgSvc() method. Both<br />
pointers are retrieved from the parent of the tool.<br />
Declaring Properties - A set of methods for declaring properties, similar to those of Algorithms, is<br />
provided. This allows tuning of data members used by the Tools via JobOptions files. The ToolSvc<br />
takes care of calling the setProperties() method of the AlgTool base class after having<br />
instantiated a tool. Properties need to be declared in the constructor of a Tool. The property<br />
outputLevel is declared in the base class and is identically set to that of the parent component. For<br />
details on Properties see section 13.3.1.<br />
Constructor - The base class has a single constructor which takes three arguments. The first is the type<br />
(i.e. the class) of the Tool object being instantiated, the second is the full name of the object and the<br />
third is a pointer to the IInterface of the parent component. The name is used for the identification<br />
of the tool instance as described below. The parent interface is used by the tool to access, for example,<br />
the outputLevel of the parent.<br />
IAlgTool Interface - It consists of three accessor methods for the identification and management of<br />
the tools: type(), name() and parent(). These methods are all implemented by the base class<br />
and should not be overridden.<br />
14.2.2.2 Tools identification<br />
A tool instance is identified by its full name. The name consists of the concatenation of the parent name,<br />
a dot, and a tool dependent part. The tool dependent part can be specified by the user; when it is not<br />
specified, the tool type (i.e. the class) is automatically taken as the tool dependent part of the name.<br />
Examples of tool names are RecPrimaryVertex.VertexSmearer (a private tool) and<br />
ToolSvc.AddFourMom (a shared tool). The full name of the tool has to be used in the jobOptions file to<br />
set its properties.<br />
14.2.2.3 Concrete tools classes<br />
Operational functionalities of tools must be provided in the derived tool classes. A concrete tool class<br />
must inherit directly or indirectly from the AlgTool base class to ensure that it has the predefined<br />
behaviour needed for management by the ToolSvc. The inheritance structure of derived tools is<br />
shown in Figure 14.1. ConcreteTool1 implements one additional abstract interface while<br />
ConcreteTool3 and ConcreteTool4 derive from a base class SubTool that provides them<br />
with additional common functionality.<br />
Figure 14.1 Tools class hierarchy. AlgTool implements the IAlgTool and IProperty interfaces;<br />
ConcreteTool1 (implementing IConcreteTool1) and ConcreteTool2 derive from AlgTool directly, while<br />
ConcreteTool3 and ConcreteTool4 derive from the intermediate SubTool class, which implements ISubTool.<br />
The idea is that concrete tools could implement additional interfaces, specific to the task a tool is<br />
designed to perform. Specialised tools intended to perform similar tasks can be derived from a common<br />
base class that will provide the common functionality and implement the common interface. Consider<br />
as an example the vertexing of particles, where separate tools can implement different algorithms but the<br />
arguments passed are the same. If a specialized tool is only accessed via the additional interface, the<br />
interface itself must inherit from the IAlgTool interface in order for the tool to be correctly managed<br />
by the ToolSvc.<br />
14.2.2.4 Implementation of concrete tools<br />
An example minimal implementation of a concrete tool is shown in Listing 14.2 and Listing 14.3, taken<br />
from the LHCb ToolsAnalysis example application.<br />
Listing 14.2 Example of a concrete tool minimal implementation header file<br />
1: #include "GaudiKernel/AlgTool.h"<br />
2: class VertexSmearer : public AlgTool {<br />
3: public:<br />
4: // Constructor<br />
5: VertexSmearer( const std::string& type, const std::string& name,<br />
const IInterface* parent);<br />
6: // Standard Destructor<br />
7: virtual ~VertexSmearer() { }<br />
8: // specific method of this tool<br />
9: StatusCode smear( MyAxVertex* pvertex );<br />
10: };<br />
Listing 14.3 Example of a concrete tool minimal implementation file<br />
1: #include "GaudiKernel/ToolFactory.h"<br />
2: // Static factory for instantiation of algtool objects<br />
3: static ToolFactory<VertexSmearer> s_factory;<br />
4: const IToolFactory& VertexSmearerFactory = s_factory;<br />
5:<br />
6: // Standard Constructor<br />
7: VertexSmearer::VertexSmearer(const std::string& type,<br />
const std::string& name,<br />
const IInterface* parent)<br />
: AlgTool( type, name, parent ) {<br />
8:<br />
9: // Locate service needed by the specific tool<br />
10: m_randSvc = 0;<br />
11: if( serviceLocator() ) {<br />
12: StatusCode sc=StatusCode::FAILURE;<br />
13: sc = serviceLocator()->getService( "RndmGenSvc",<br />
IID_IRndmGenSvc,<br />
(IInterface*&) (m_randSvc) );<br />
14: }<br />
15: // Declare properties of the specific tool<br />
16: declareProperty("dxVtx", m_dxVtx = 9 * micrometer);<br />
17: declareProperty("dyVtx", m_dyVtx = 9 * micrometer);<br />
18: declareProperty("dzVtx", m_dzVtx = 38 * micrometer);<br />
19: }<br />
20: // Implement the specific method ....<br />
21: StatusCode VertexSmearer::smear( MyAxVertex* pvertex ) {...}<br />
The creation of concrete tools is similar to that of Algorithms, making use of a Factory Method. As for<br />
Algorithms, Tool factories enable their creator to instantiate new tools without having to include any of<br />
the concrete tools header files. A template factory is provided and a tool developer will only need to add<br />
the concrete factory in the implementation file as shown in lines 1 to 4 of Listing 14.3<br />
In addition a concrete tool class must specify a single constructor with the same parameter signatures as<br />
the constructor of the AlgTool base class as shown in line 5 of Listing 14.2.<br />
Below is the minimal checklist of the code necessary when developing a Tool:<br />
1. Derive the tool class from the AlgTool base class<br />
2. Provide the constructor<br />
3. Implement the factory adding the lines of code shown in Listing 14.3<br />
In addition if the tool is implementing an additional interface you may need to:<br />
1. Define the specific interface (inheriting from the IAlgTool interface).<br />
2. Implement the queryInterface() method.<br />
3. Implement the specific interface methods.
14.3 The ToolSvc<br />
The ToolSvc manages Tools. It is its responsibility to create tools and make them available to<br />
Algorithms or Services.<br />
The ToolSvc checks whether a tool type is available and creates an instance only after verifying<br />
that one does not already exist. If a tool instance exists, the ToolSvc will not create a new identical one<br />
but will pass the existing instance to the algorithm. Tools are created on a “first request” basis: the first<br />
Algorithm requesting a tool will prompt its creation. The relationship between an algorithm, the<br />
ToolSvc and Tools is shown in Figure 14.1.<br />
Figure 14.1 ToolSvc design diagram. The ToolSvc implements the IToolSvc and IService interfaces;<br />
a ConcreteAlgorithm uses the ToolSvc to obtain a ConcreteTool, which derives from AlgTool<br />
(implementing IAlgTool and IProperty) and is created by a templated ToolFactory<T> implementing IToolFactory.<br />
The ToolSvc will “hold” a tool until it is no longer used by any component or until the finalize()<br />
method of the tool service is called. Algorithms can inform the ToolSvc they are not going to use a<br />
tool previously requested via the releaseTool method of the IToolSvc interface 1 .<br />
The ToolSvc is created by default by the ApplicationMgr and algorithms wishing to use the<br />
service can retrieve it using the service accessor method of the Algorithm base class as shown in<br />
the lines below.<br />
include "GaudiKernel/IToolSvc.h"<br />
...<br />
IToolSvc* toolSvc=0;<br />
StatusCode sc = service( "ToolSvc",toolSvc);<br />
if ( sc.isFailure) {<br />
...<br />
1. The releaseTool method is not available in this release<br />
14.3.1 Retrieval of tools via the IToolSvc interface<br />
The IToolSvc interface is the ToolSvc specific interface providing methods to retrieve tools.<br />
The interface has two retrieve methods that differ in their parameter signatures, as shown in Listing<br />
14.4.<br />
Listing 14.4 The IToolSvc interface methods<br />
1: virtual StatusCode retrieve(const std::string& type,<br />
IAlgTool*& tool,<br />
const IInterface* parent=0,<br />
bool createIf=true ) = 0;<br />
2: virtual StatusCode retrieve(const std::string& type,<br />
const std::string& name,<br />
IAlgTool*& tool,<br />
const IInterface* parent=0,<br />
bool createIf=true ) = 0;<br />
The arguments of the method shown in Listing 14.4, line 1, are the tool type (i.e. the class) and the<br />
IAlgTool interface of the returned tool. In addition there are two arguments with default values: one<br />
is the IInterface of the component requesting the tool, the other a boolean creation flag. If the<br />
component requesting a tool passes a pointer to itself as the third argument, it declares to the ToolSvc<br />
that it is asking for a “private” instance of the tool. By default a “shared” instance is provided. In<br />
general, if the requested instance of a Tool does not exist, the ToolSvc will create it. This behaviour can<br />
be changed by setting the last argument of the method to false.<br />
The method shown in Listing 14.4, line 2 differs from the one shown in line 1 by an extra argument, a<br />
string specifying the tool dependent part of the full tool name. This enables a component to request two<br />
separately configurable instances of the same tool.<br />
To help the retrieval of concrete tools two template functions, as shown in Listing 14.5, are provided in<br />
the IToolSvc interface file.<br />
Listing 14.5 The IToolSvc template methods<br />
1: template <class T><br />
2: StatusCode retrieveTool( const std::string& type,<br />
T*& tool,<br />
const IInterface* parent=0,<br />
bool createIf=true ) {...}<br />
3: template <class T><br />
4: StatusCode retrieveTool( const std::string& type,<br />
const std::string& name,<br />
T*& tool,<br />
const IInterface* parent=0,<br />
bool createIf=true ) {...}<br />
The two template methods correspond to the IToolSvc retrieve methods, but return the tool through a<br />
template parameter. Using these methods, the component retrieving a tool avoids explicit<br />
dynamic-casting to specific additional interfaces or to derived classes.
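To illustrate why these helpers are convenient, here is a self-contained sketch of the mechanism. MiniToolSvc, CaloTool and the simplified StatusCode are hypothetical stand-ins, not framework classes; the point shown is how a template retrieveTool can perform the dynamic_cast on behalf of the caller:

```cpp
#include <map>
#include <string>

// Hypothetical, simplified stand-ins for the framework types.
struct StatusCode {
    bool ok;
    bool isSuccess() const { return ok; }
};
struct IAlgTool { virtual ~IAlgTool() {} };
struct CaloTool : IAlgTool { int nCells() const { return 42; } };

struct MiniToolSvc {
    std::map<std::string, IAlgTool*> tools;

    // Untyped retrieval: the caller only gets the IAlgTool interface.
    StatusCode retrieve(const std::string& type, IAlgTool*& tool) {
        std::map<std::string, IAlgTool*>::iterator it = tools.find(type);
        if (it == tools.end()) { tool = 0; return StatusCode{false}; }
        tool = it->second;
        return StatusCode{true};
    }

    // Template helper: the dynamic_cast is hidden inside the service,
    // so the client receives a pointer of the requested concrete type.
    template <class T>
    StatusCode retrieveTool(const std::string& type, T*& typedTool) {
        IAlgTool* raw = 0;
        StatusCode sc = retrieve(type, raw);
        if (!sc.isSuccess()) { typedTool = 0; return sc; }
        typedTool = dynamic_cast<T*>(raw);
        return StatusCode{typedTool != 0};
    }
};
```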
<strong>Athena</strong><br />
Chapter 14 Tools and ToolSvc Version/Issue: 2.0.0<br />
Listing 14.6 shows an example of retrieval of a shared and of a private tool.<br />
Listing 14.6 Example of retrieval of a shared tool in line 8: and of a private tool in line 14:<br />
1: IToolSvc* toolsvc = 0;<br />
2: sc = service( "ToolSvc", toolsvc );<br />
3: if( sc.isFailure() ) {<br />
4: log << MSG::ERROR << "Unable to locate the ToolSvc" << endreq;<br />
5: }
information is stored. For some specific Associators, in addition, it can depend on some algorithmic<br />
choices: consider as an example a physics analysis particle and a possible originating Monte Carlo<br />
particle where the associating discriminant could be the fractional number of hits used in the<br />
reconstruction of the tracks. An advantage of this approach is that the implementation of the navigation<br />
can be modified without affecting the reconstruction and analysis algorithms because it would affect<br />
only the associators. In addition short-cuts or complete navigational information can be provided to the<br />
user in a transparent way. By limiting the use of such associators to dedicated monitoring algorithms<br />
where the comparison between raw/reconstructed data and MC truth is done, one could ensure that the<br />
reconstruction and analysis code treat simulated and real data in an identical way.<br />
Associators must implement a common interface called IAssociator. An Associator base class<br />
is also provided, supplying common functionality and some facilities to help in the implementation of<br />
concrete Associators. A first version of these classes is included in the current release of<br />
<strong>Athena</strong>.<br />
14.4.1.1 The IAssociator Interface<br />
As already mentioned Associators must implement the IAssociator interface.<br />
So that Associators can be retrieved from the ToolSvc via the IAssociator interface, the<br />
interface itself inherits from the IAlgTool interface. While the implementation of the IAlgTool<br />
interface is provided by the AlgTool base class, the implementation of the IAssociator interface is the<br />
full responsibility of concrete associators.<br />
The four methods of the IAssociator interface that a concrete Associator must implement are shown<br />
in Listing 14.7.<br />
Listing 14.7 Methods of the IAssociator Interface that must be implemented by concrete associators<br />
1: virtual StatusCode i_retrieveDirect( ContainedObject* objFrom,<br />
ContainedObject*& objTo,<br />
const CLID idFrom,<br />
const CLID idTo ) = 0;<br />
2: virtual StatusCode i_retrieveDirect( ContainedObject* objFrom,<br />
std::vector<ContainedObject*>& vObjTo,<br />
const CLID idFrom,<br />
const CLID idTo ) = 0;<br />
3: virtual StatusCode i_retrieveInverse( ContainedObject* objFrom,<br />
ContainedObject*& objTo,<br />
const CLID idFrom,<br />
const CLID idTo) = 0;<br />
4: virtual StatusCode i_retrieveInverse( ContainedObject* objFrom,<br />
std::vector<ContainedObject*>& vObjTo,<br />
const CLID idFrom,<br />
const CLID idTo) = 0;<br />
Two i_retrieveDirect methods must be implemented for retrieving associated classes following<br />
the same direction as the links in the data: for example from reconstructed particles to Monte Carlo<br />
particles. The first parameter is a pointer to the object for which the associated Monte Carlo
quantity(ies) is requested. The second parameter, the discriminating signature between the two<br />
methods, is one or a vector of pointers to the associated Monte Carlo objects of the type requested.<br />
Some reconstructed quantities will have only one possible Monte Carlo associated object of a certain<br />
type, some will have many, others will have many out of which a “best” associated object can be<br />
extracted. If one of the two methods is not valid for a concrete associator, that method must return a<br />
failure. The third and fourth parameters are the class IDs of the objects for which the association is<br />
requested. This makes it possible to verify at run time that the objects’ types are those for which the<br />
concrete associator has been implemented.<br />
The two i_retrieveInverse methods are complementary and are for retrieving the association<br />
between the same two classes but in the opposite direction to that of the links in the data: from Monte<br />
Carlo particles to reconstructed particles. The different name is intended to alert the user that navigation<br />
in this direction may be a costly operation.<br />
Four corresponding template methods are implemented in IAssociator to facilitate the use of<br />
Associators by Algorithms (see Listing 14.8). Using these methods the component retrieving a tool<br />
avoids some explicit dynamic-casting as well as the setting of class IDs. An example of how to use such<br />
methods is described in section 14.4.1.3.<br />
Listing 14.8 Template methods of the IAssociator interface<br />
1: template <class T1, class T2><br />
StatusCode retrieveDirect( T1* from, T2*& to ) {...}<br />
2: template <class T1><br />
StatusCode retrieveDirect( T1* from,<br />
std::vector<ContainedObject*>& objVTo,<br />
const CLID idTo ) {...}<br />
3: template <class T1, class T2><br />
StatusCode retrieveInverse( T1* from, T2*& to ) {...}<br />
4: template <class T1><br />
StatusCode retrieveInverse( T1* from,<br />
std::vector<ContainedObject*>& objVTo,<br />
const CLID idTo ) {...}<br />
14.4.1.2 The Associator base class<br />
An associator is a type of AlgTool, so the Associator base class inherits from the AlgTool base<br />
class. Thus, Associators can be created and managed as AlgTools by the ToolSvc. Since all the<br />
methods of the AlgTool base class (as described in section 14.2.2.1) are available in the<br />
Associator base class, only the additional functionality is described here.<br />
Access to Event Data Service - An eventSvc() method is provided to access the Event Data<br />
Service since most concrete associators will need to access data, in particular if accessing navigational<br />
short-cuts.<br />
Associator Properties - Two properties are declared in the constructor and can be set in the<br />
jobOptions: “FollowLinks” and “DataLocation”. They are respectively a bool with initial<br />
value true and a std::string with initial value set to “ ”. The first is foreseen to be used by an<br />
associator when it is possible to either follow links between classes or retrieve navigational short cuts<br />
from the data. A user can choose to set either behaviour at run time. The second property contains the<br />
location in the data where the stored navigational information is located. Currently it must be set via the<br />
jobOptions when necessary, as shown in Listing 14.9 for a particular implementation provided in the<br />
Associator example. Two corresponding methods are provided for using the information from these<br />
properties: followLinks() and whichTable().<br />
Inverse Association - Retrieving information in the direction opposite to that of the links in the data is<br />
in general a time-consuming operation, since it implies checking all the direct associations to access the<br />
inverse relation for a specified object. For this reason Associators should keep a local copy of the<br />
inverse associations after receiving the first request for an event. A few methods are provided to<br />
facilitate the work of Associators in this case. The methods inverseExist() and<br />
setInverseFlag(bool) help in keeping track of the status of the locally kept inverse<br />
information. The method buildInverse() has to be overridden by concrete associators, since they<br />
choose in which form to keep the information; it should be called by the associator when receiving the<br />
first request during the processing of an event.<br />
Locally kept information - When a new event is processed, the associator needs to reset its status to<br />
the same conditions as those after having been created. In order to be notified when this happens,<br />
the Associator base class implements the IListener interface and, in the constructor,<br />
registers itself with the Incident Service (see section 13.9 for details of the Incident Service). The<br />
associator’s flushCache() method is called in the implementation of the IListener interface in<br />
the Associator base class. This method must be overridden by concrete associators that want to do a<br />
meaningful reset of their initial status.<br />
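The caching strategy described above can be sketched as follows. MiniAssociator and the integer identifiers are hypothetical simplifications (the real classes deal with ContainedObject pointers and StatusCode), but the build-once-per-event and flush-on-new-event logic is the same:

```cpp
#include <map>

// Hypothetical sketch: the direct association is cheap to read; the inverse
// table is built once per event on first request and discarded when a new
// event begins.
struct MiniAssociator {
    std::map<int, int> direct;        // e.g. reco id -> MC id
    std::map<int, int> inverse;       // built lazily, per event
    bool inverseBuilt = false;
    int builds = 0;                   // how often the inverse was (re)built

    void buildInverse() {             // overridden by concrete associators
        inverse.clear();
        for (std::map<int, int>::const_iterator i = direct.begin();
             i != direct.end(); ++i)
            inverse[i->second] = i->first;
        inverseBuilt = true;
        ++builds;
    }

    // Inverse retrieval: costly path, served from the per-event cache.
    bool retrieveInverse(int mcId, int& recoId) {
        if (!inverseBuilt) buildInverse();   // first request this event
        std::map<int, int>::const_iterator it = inverse.find(mcId);
        if (it == inverse.end()) return false;
        recoId = it->second;
        return true;
    }

    // Called from the incident handler when a new event begins.
    void flushCache() { inverse.clear(); inverseBuilt = false; }
};
```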
14.4.1.3 A concrete example<br />
In this section we look at an example implementation of a specific associator. The code is taken from<br />
the LHCb Associator example, but the points illustrated should be clear even without knowledge<br />
of the LHCb data model.<br />
The AxPart2MCParticleAsct provides association between physics analysis particles<br />
(AxPartCandidate) and the corresponding Monte Carlo particles (MCParticle). The direct<br />
navigational information is stored in the persistent data as short-cuts, and is retrieved in the form of a<br />
SmartRefTable in the Transient Event Store. This choice is specific to<br />
AxPart2MCParticleAsct; any associator can internally use a different navigational mechanism.<br />
The location in the Event Store where the navigational information can be found is set in the job options<br />
via the “DataLocation” property, as shown in Listing 14.9.<br />
Listing 14.9 Example of setting properties for an associator via jobOptions<br />
ToolSvc.AxPart2MCParticleAsct.DataLocation = "/Event/Anal/AxPart2MCParticle";<br />
In the current LHCb data model only a single MCParticle can be associated to one<br />
AxPartCandidate and vice-versa only one or no AxPartCandidate can be associated to one<br />
MCParticle. For this reason only the i_retrieveDirect and i_retrieveInverse<br />
methods providing one-to-one association are meaningful. Both methods verify that the objects passed<br />
are of the correct type before attempting to retrieve the information, as shown in Listing 14.10. When<br />
no association is found, a StatusCode::FAILURE is returned.
Listing 14.10 Checking if objects to be associated are of the correct type<br />
1: if ( idFrom != AxPartCandidate::classID() ){<br />
2: objTo = 0;<br />
3: return StatusCode::FAILURE;<br />
4: }<br />
5: if ( idTo != MCParticle::classID() ) {<br />
6: objTo = 0;<br />
7: return StatusCode::FAILURE;<br />
8: }<br />
The i_retrieveInverse method providing the one-to-many association returns a failure, while a<br />
dummy implementation of the one-to-many i_retrieveDirect method is provided in the<br />
example, to show how an Algorithm can use such a method. In the AxPart2MCParticleAsct<br />
example the inverse table is kept locally and both the buildInverse() and flushCache() methods<br />
are overridden. The example also chooses to implement an additional method,<br />
buildDirect(), which retrieves the direct navigational information on the first request of each event.<br />
Listing 14.11 shows how a monitoring Algorithm can get an associator from the ToolSvc and use it to<br />
retrieve associated objects through the template interfaces.<br />
Listing 14.11 Extracted code from the AsctExampleAlgorithm<br />
1: #include "GaudiTools/IAssociator.h"<br />
2: // Example of retrieving an associator<br />
3: IAssociator* m_pAsct = 0;<br />
4: StatusCode sc = toolsvc->retrieveTool("AxPart2MCParticleAsct", m_pAsct);<br />
5: if( sc.isFailure() ) {<br />
6: log << MSG::ERROR << "Unable to retrieve AxPart2MCParticleAsct" << endreq;<br />
...<br />
14: sc = m_pAsct->retrieveInverse( *itm, mptry );<br />
15: if( sc.isSuccess() ) {...}<br />
16: else {...}<br />
17: }<br />
18: // Example of retrieving direct one-to-many information from an<br />
19: // associator<br />
20: SmartDataPtr<ObjectVector<AxPartCandidate> > candidates(evt,<br />
"/Anal/AxPartCandidates");<br />
21: std::vector<ContainedObject*> pptry;<br />
22: AxPartCandidate* itP = *(candidates->begin());<br />
23: StatusCode sa =<br />
m_pAsct->retrieveDirect(itP, pptry, MCParticle::classID());<br />
24: if( sa.isFailure() ) {...}<br />
25: else {<br />
26: for (std::vector<ContainedObject*>::iterator it = pptry.begin();<br />
pptry.end() != it; it++ ) {<br />
27: MCParticle* imc = dynamic_cast<MCParticle*>( *it );<br />
28: }<br />
29: }<br />
Chapter 15<br />
Converters<br />
15.1 Overview<br />
Consider a small piece of detector; a silicon wafer for example. This “object” will appear in many<br />
contexts: it may be drawn in an event display, it may be traversed by particles in a Geant4 simulation,<br />
its position and orientation may be stored in a database, the layout of its strips may be queried in an<br />
analysis program, etc. All of these uses or views of the silicon wafer will require code.<br />
One of the key issues in the design of the framework was how to encompass the need for these different<br />
views within <strong>Athena</strong>. In this chapter we outline the design adopted for the framework and look at how<br />
the conversion process works. This is followed by sections which deal with the technicalities of writing<br />
converters for reading from and writing to ROOT files.<br />
15.2 Persistency converters<br />
<strong>Athena</strong> provides the possibility of reading event data from, and of writing data back to, ROOT files. The use of<br />
ODBC compliant databases is also possible, though this is not yet part of the <strong>Athena</strong> release. Other<br />
persistency technologies have been implemented for LHCb, in particular the reading of data from<br />
LHCb DSTs based on ZEBRA.<br />
Figure 15.1 is a schematic illustrating how converters fit into the transient-persistent translation of<br />
event data. We will not discuss in detail how the transient data store (e.g. the event data service) or the<br />
persistency service work, but simply look at the flow of data in order to understand how converters are<br />
used.<br />
One of the issues considered when designing the <strong>Athena</strong> framework was the capability for users to<br />
“create their own data types and save objects of those types along with references to already existing<br />
Figure 15.1 Persistency conversion services in <strong>Athena</strong><br />
objects”. A related issue was the possibility of having links between objects which reside in different<br />
stores (i.e. files and databases) and even between objects in different types of store.<br />
Figure 15.1 shows that data may be read from an ODBC database and from ROOT files into the<br />
transient event data store and that data may be written out again to the same media. It is the job of the<br />
persistency service to orchestrate this transfer of data between memory and disk.<br />
The figure shows two “slave” services: the ODBC conversion service and the ROOT I/O service. These<br />
services are responsible for managing the conversion of objects between their transient and persistent<br />
representations. Each one has a number of converter objects which are actually responsible for the<br />
conversion itself. As illustrated by the figure a particular converter object converts between the<br />
transient representation and one other form, here either MS Access or ROOT.<br />
15.3 Collaborators in the conversion process<br />
In general the conversion process occurs between the transient representation of an object and some<br />
other representation. In this chapter we will be using persistent forms, but it should be borne in mind<br />
that this could be any other “transient” form such as those required for visualisation or those which<br />
serve as input into other packages (e.g. Geant4).<br />
Figure 15.1 shows the interfaces (classes whose name begins with "I") which must be implemented in<br />
order for the conversion process to function.<br />
The conversion process is essentially a collaboration between the following types:<br />
• IConversionSvc
Figure 15.1 The classes (and interfaces) collaborating in the conversion process.<br />
• IConverter<br />
• IOpaqueAddress<br />
For each persistent technology, or “non-transient” representation, a specific conversion service is<br />
required. This is illustrated in the figure by the class AConversionSvc which implements the<br />
IConversionSvc interface.<br />
A given conversion service will have at its disposal a set of converters. These converters are both type<br />
and technology specific. In other words a converter knows how to convert a single transient type (e.g.<br />
MuonHit) into a single persistent type (e.g. RootMuonHit) and vice versa. Specific converters<br />
implement the IConverter interface, possibly by extending an existing converter base class.<br />
The third collaborator in this process is the set of opaque address objects. A concrete opaque address class<br />
must implement the IOpaqueAddress interface. This interface allows the address to be passed<br />
around between the transient data service, the persistency service, and the conversion services without<br />
any of them being able to actually decode the address. Opaque address objects are also technology<br />
specific. The internals of an OdbcAddress object are different from those of a RootAddress<br />
object.<br />
Only the converters themselves know how to decode an opaque address. In other words only converters<br />
are permitted to invoke those methods of an opaque address object which do not form a part of the<br />
IOpaqueAddress interface.<br />
Converter objects must be “registered” with the conversion service in order to be usable. For the<br />
“standard” converters this will be done automatically. For user defined converters (for user defined<br />
types) this registration must be done at initialisation time (see Chapter 7).<br />
15.4 The conversion process<br />
As an example (see Figure 15.1) we consider a request from the event data service to the persistency<br />
service for an object to be loaded from a data file.<br />
As we saw previously, the persistency service has one conversion service slave for each persistent<br />
technology in use. The persistency service receives the request in the form of an opaque address object.<br />
The svcType() method of the IOpaqueAddress interface is invoked to decide which conversion<br />
service the request should be passed on to. This returns a “technology identifier” which allows the<br />
persistency service to choose a conversion service.<br />
The request to load an object (or objects) is then passed on to a specific conversion service. This service<br />
then invokes another method of the IOpaqueAddress interface, clID(), in order to decide which<br />
converter will actually perform the conversion. The opaque address is then passed on to the concrete<br />
converter, which knows how to decode it and create the appropriate transient object.<br />
Each converter is specific to a single type, so it may immediately create an object of that type with<br />
the new operator. The converter must now “unpack” the opaque address, i.e. make use of accessor<br />
methods specific to the address type in order to get the necessary information from the persistent store.<br />
For example, a ZEBRA converter might get the name of a bank from the address and use that to locate<br />
the required information in the ZEBRA common block. On the other hand a ROOT converter may<br />
extract a file name, the names of a ROOT TTree and an index from the address and use these to load<br />
an object from a ROOT file. The converter would then use the accessor methods of this “persistent”<br />
object in order to extract the information necessary to build the transient object.
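The two-level dispatch just described can be sketched in a few lines. All class names below (MiniPersistencySvc, MiniConversionSvc, MuonHitCnv) are hypothetical stand-ins: the persistency service selects a conversion service by the address's svcType(), and that service selects a converter by its clID():

```cpp
#include <map>
#include <string>

typedef unsigned int CLID;

// Hypothetical simplified opaque address: only the two dispatch keys.
struct IOpaqueAddress {
    unsigned char svcType_;
    CLID clID_;
    unsigned char svcType() const { return svcType_; }
    CLID clID() const { return clID_; }
};

struct IConverter {
    virtual std::string createObj(const IOpaqueAddress&) = 0;
    virtual ~IConverter() {}
};

// A type-specific converter: knows how to build one transient type.
struct MuonHitCnv : IConverter {
    std::string createObj(const IOpaqueAddress&) { return "MuonHit"; }
};

// A technology-specific conversion service: owns per-class converters.
struct MiniConversionSvc {
    std::map<CLID, IConverter*> converters;
    std::string createObj(const IOpaqueAddress& a) {
        return converters.at(a.clID())->createObj(a);   // dispatch on clID()
    }
};

// The persistency service: owns one "slave" service per technology.
struct MiniPersistencySvc {
    std::map<unsigned char, MiniConversionSvc*> services;
    std::string load(const IOpaqueAddress& a) {
        return services.at(a.svcType())->createObj(a);  // dispatch on svcType()
    }
};
```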
Figure 15.1 A trace of the creation of a new transient object.<br />
We can see that the detailed steps performed within a converter depend very much on the nature of the<br />
non-transient data and (to a lesser extent) on the type of the object being built.<br />
If all transient objects were independent, i.e. if there were no references between objects, then the job<br />
would be finished. However, in general objects in the transient store do contain references to other<br />
objects.<br />
These references can be of two kinds:<br />
i. “Macroscopic” references appear as separate “leaves” in the data store. They have to be<br />
registered with a separate opaque address structure in the data directory of the object being<br />
converted. This must be done after the object has been registered in the data store, in the method<br />
fillObjRefs().<br />
ii. Internal references must be handled differently. There are two possibilities for resolving<br />
internal references:<br />
1. Load on demand: If the object the reference points to should only be loaded when<br />
accessed, the pointer must no longer be a raw C++ pointer, but rather a smart pointer<br />
object containing itself the information for later resolution of the reference. This is<br />
the preferred solution for references to objects within the same data store, e.g.<br />
references from the Monte-Carlo tracks to the Monte-Carlo vertices. Please see in<br />
the corresponding SICB converter implementations how to construct these smart<br />
pointer objects. Late loading is highly preferable to the second possibility.<br />
2. Filling of raw C++ pointers: Here things are a little more complicated, and this<br />
introduces the need for a second step in the process. This is only necessary if the object points<br />
to an object in another store, e.g. the detector data store. To resolve the reference a<br />
converter has to retrieve the other object and set the raw pointer. These references<br />
should be set in the fillObjRefs() method. This is of course more complicated,<br />
because it must be ensured that both objects are present at the time the reference is<br />
accessed (i.e. when the pointer is actually used).<br />
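The load-on-demand idea can be sketched with a minimal smart reference. MiniStore, SmartVertexRef and the MCVertex layout below are hypothetical stand-ins, not the real Gaudi smart-pointer classes: the reference stores only the information needed to resolve the target and touches the store on the first dereference.

```cpp
// Hypothetical stand-ins for a data object and the event store.
struct MCVertex { double z; };

struct MiniStore {
    MCVertex vertices[2];
    int loads;                     // counts how often the store was accessed
    MiniStore() : loads(0) { vertices[0].z = 0.1; vertices[1].z = 0.2; }
    MCVertex* retrieve(int index) { ++loads; return &vertices[index]; }
};

// The smart reference: holds resolution info instead of a raw pointer.
struct SmartVertexRef {
    MiniStore* store;
    int index;
    MCVertex* ptr;                 // filled on first access
    SmartVertexRef(MiniStore* s, int i) : store(s), index(i), ptr(0) {}

    MCVertex* operator->() {
        if (!ptr) ptr = store->retrieve(index);  // load on demand, then cache
        return ptr;
    }
};
```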
15.5 Converter implementation - general considerations<br />
After covering the groundwork in the preceding sections, let us look at exactly what needs to be<br />
implemented in a specific converter class. The starting point is the Converter base class, from which<br />
a user converter should be derived. For concreteness let us partially develop a converter for the UDO<br />
class of Chapter 7.<br />
The converter shown in Listing 15.1 is responsible for the conversion of UDO type objects into objects<br />
that may be stored into an Objectivity database and vice-versa. The UDOCnv constructor calls the<br />
Converter base class constructor with two arguments which contain this information. These are the<br />
values CLID_UDO, defined in the UDO class, and Objectivity_StorageType which is also<br />
defined elsewhere. The first two extern statements simply state that these two identifiers are defined<br />
elsewhere.<br />
All of the “book-keeping” can now be done by the Converter base class. It only remains to fill in the<br />
guts of the converter. If objects of type UDO have no links to other objects, then it suffices to implement<br />
the methods createRep() for conversion from the transient form (to Objectivity in this case) and<br />
createObj() for the conversion to the transient form.<br />
If the object contains links to other objects then it is also necessary to implement the methods<br />
fillRepRefs() and fillObjRefs().
Listing 15.1 An example converter class<br />
// Converter for class UDO.<br />
extern const CLID& CLID_UDO;<br />
extern unsigned char Objectivity_StorageType;<br />
static CnvFactory<UDOCnv> s_factory;<br />
const ICnvFactory& UDOCnvFactory = s_factory;<br />
class UDOCnv : public Converter {<br />
public:<br />
UDOCnv(ISvcLocator* svcLoc) :<br />
Converter(Objectivity_StorageType, CLID_UDO, svcLoc) { }<br />
StatusCode createRep(DataObject* pO, IOpaqueAddress*& pA); // transient->persistent<br />
StatusCode createObj(IOpaqueAddress* pA, DataObject*& pO); // persistent->transient<br />
StatusCode fillRepRefs( ... ); // transient->persistent<br />
StatusCode fillObjRefs( ... ); // persistent->transient<br />
};<br />
15.6 Storing Data using the ROOT I/O Engine<br />
One possibility for storing data is to use the ROOT I/O engine to write ROOT files. Although ROOT by<br />
itself is not an object oriented database, with modest effort a structure can be built on top to allow the<br />
Converters to emulate this behaviour. In particular, the issue of object linking had to be solved in order<br />
to resolve pointers in the transient world.<br />
ROOT’s concept of paged tuples, called trees and branches, is adequate for storing bulk<br />
event data: trees split into one or several branches containing individual leaves with the data. The data<br />
structure within the <strong>Athena</strong> data store is tree-like (as an example, part of the LHCb event data model is<br />
shown in Figure 15.1).<br />
In the transient world <strong>Athena</strong> objects are sub-class instances of the “DataObject”. The DataObject<br />
offers some basic functionality, like the implicit data directory, which allows one e.g. to browse a data store.<br />
This tree structure will be mapped to a flat structure in the ROOT file resulting in a separate tree<br />
representing each leaf of the data store. Each data tree contains a single branch containing objects of the<br />
same type. The <strong>Athena</strong> tree is split up into individual ROOT trees in order to give easy access to<br />
individual items represented in the transient model without the need of loading complete events from<br />
the ROOT file, i.e. to allow for selective data retrieval. The feature of ROOT supporting selective data<br />
reading using split trees did not seem too attractive since, generally, complete nodes in the transient<br />
store should be made available in one go.<br />
However, ROOT expects “ROOT” objects: they must inherit from TObject. Therefore the objects<br />
from the transient store have to be converted to objects understandable by ROOT.<br />
Map of data store items<br />
to ROOT tree names<br />
#Event<br />
#Event#MC<br />
#Event#MC#MCECalFacePlaneHits<br />
#Event#MC#MCECalHits<br />
……<br />
Figure 15.1 The Transient data store and its mapping in the ROOT file. Note that the “/” used within the data<br />
store to identify separate layers is converted to “#”, since within ROOT the “/” denominates directory entries.<br />
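The path-to-tree-name mapping shown in the figure can be sketched as a simple transformation; treeNameFor is a hypothetical helper, assuming only the "/"-to-"#" rule described in the caption:

```cpp
#include <string>

// Hypothetical helper implementing the mapping described above: data-store
// paths use '/' as a separator, which ROOT reserves for directories, so
// each '/' is replaced by '#'.
std::string treeNameFor(const std::string& storePath) {
    std::string name = storePath;
    for (std::string::size_type i = 0; i < name.size(); ++i)
        if (name[i] == '/') name[i] = '#';
    return name;
}
```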
The following sections are an introduction to the machinery provided by the <strong>Athena</strong> framework to<br />
achieve the migration of transient objects to persistent objects. The ROOT specific aspects are not<br />
discussed here; the documentation of the ROOT I/O engine can be found at the ROOT web site<br />
(http://root.cern.ch). Note that <strong>Athena</strong> only uses the I/O engine; not all ROOT classes are available.<br />
Within <strong>Athena</strong> the ROOT I/O engine is implemented in the GaudiRootDb package.<br />
15.7 The Conversion from Transient Objects to ROOT Objects<br />
As for any conversion of data from one representation to another within the <strong>Athena</strong> framework,<br />
conversion to/from ROOT objects is based on Converters. A “generic” Converter is supported which<br />
accesses pre-defined entry points in each object: the transient object converts itself to an abstract byte<br />
stream.<br />
However, for specialized objects specific converters can be built by virtual overrides of the base class.<br />
Whenever objects must change their representation within <strong>Athena</strong>, data converters are involved. For the<br />
ROOT case the converters must have some knowledge of ROOT internals and of the service finally used<br />
to migrate ROOT objects (TObject) to a file. In the same way the converter must be able to<br />
translate the functionality of the DataObject component to/from the ROOT storage. Within ROOT<br />
itself the object is stored as a Binary Large Object (BLOB).
The instantiation of the appropriate converter is done by a macro. The macro also instantiates the<br />
converter factory used to create the requested converter. Hence, all other user code is shielded from<br />
the implementation and definitions of the ROOT specific code.<br />
Listing 15.2 Implementing a “generic” converter for the transient class Event.<br />
1: // Include files<br />
2: #include "GaudiKernel/ObjectVector.h"<br />
3: #include "GaudiKernel/ObjectList.h"<br />
4: #include "GaudiDb/DbGenericConverter.h"<br />
5: // Converter implementation for objects of class Event<br />
6: #include "Event.h"<br />
7: _ImplementConverter(Event)<br />
The macro needs a few words of explanation: the instantiated converters are able to create transient<br />
objects of type Event. The corresponding persistent type is of a generic type, the data are stored as a<br />
machine independent byte stream. It is mandatory that the Event class implements a streamer method<br />
“serialize”. An example from the Event class of the RootIO example is shown in Listing 15.3.<br />
The instantiated converter is of the type DbGenericConverter and the instance of the instantiating<br />
factory has the instance name DbEventCnvFactory.<br />
Listing 15.3 Serialisation of the class Event.<br />
1: /// Serialize the object for writing<br />
2: virtual StreamBuffer& serialize( StreamBuffer& s ) const {<br />
3: DataObject::serialize(s);<br />
4: return s<br />
5: << m_run<br />
6: << m_time;<br />
7: }<br />
8:<br />
9: /// Serialize the object for reading<br />
10: virtual StreamBuffer& serialize( StreamBuffer& s ) {<br />
11: DataObject::serialize(s);<br />
12: return s<br />
13: >> m_run<br />
14: >> m_time;<br />
15: }<br />
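The streaming idiom of Listing 15.3 can be sketched with a tiny stand-in for StreamBuffer. MiniStream below is hypothetical (the real StreamBuffer also handles references and machine-independent byte ordering); the point shown is that the same member list is written with operator<< and read back with operator>>:

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical stand-in for StreamBuffer: a growable byte buffer with a
// separate read position.
struct MiniStream {
    std::vector<unsigned char> buf;
    std::size_t rpos;
    MiniStream() : rpos(0) {}

    MiniStream& operator<<(int v) {          // write: append raw bytes
        const unsigned char* p = reinterpret_cast<const unsigned char*>(&v);
        buf.insert(buf.end(), p, p + sizeof(int));
        return *this;
    }
    MiniStream& operator>>(int& v) {         // read: consume raw bytes
        std::memcpy(&v, &buf[rpos], sizeof(int));
        rpos += sizeof(int);
        return *this;
    }
};

// Mirrors the listing: a const serialize for writing and a non-const
// serialize for reading, streaming the same members in the same order.
struct Event {
    int m_run;
    int m_event;
    MiniStream& serialize(MiniStream& s) const { return s << m_run << m_event; }
    MiniStream& serialize(MiniStream& s)       { return s >> m_run >> m_event; }
};
```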
15.7.1 Non Identifiable Objects<br />
Non-identifiable objects cannot be retrieved from or stored in the data store directly. Usually they are small<br />
and in any case they are contained by a container object. Examples are particles, hits or vertices. These<br />
classes can be converted using a generic container converter. Container converters exist currently for<br />
lists and vectors. The containers rely on the serialize methods of the contained objects. The serialisation<br />
is able to understand smart references to other objects within the same data store. Listing 15.4 shows an<br />
example of the serialize methods of the MyTrack class of the RootIO example.<br />
<strong>Athena</strong> Chapter 15 Converters Version/Issue: 2.0.0<br />
Listing 15.4 Serialisation of the class MyTrack.

#include "GaudiDb/DbContainerConverter.h"
_ImplementContainerConverters(MyTrack)

/// Serialize the object for writing
inline StreamBuffer& MyTrack::serialize( StreamBuffer& s ) const {
  ContainedObject::serialize(s);
  return s << m_px
           << m_py
           << m_pz
           << m_event(this); // Stream a reference to another object
}

/// Serialize the object for reading
inline StreamBuffer& MyTrack::serialize( StreamBuffer& s ) {
  ContainedObject::serialize(s);
  s >> m_px
    >> m_py
    >> m_pz
    >> m_event(this); // Stream a reference to another object
  return s;
}
Please refer to the RootIO Gaudi example for further details on how to store objects in ROOT files.
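The symmetric write/read pattern of these serialize methods can be illustrated with a small self-contained sketch. The ToyBuffer and ToyTrack classes below are hypothetical stand-ins for StreamBuffer and MyTrack (smart references to other objects are omitted); the const overload writes with operator<<, the non-const overload reads back with operator>>.

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Toy byte buffer with symmetric operator<< / operator>> (a hypothetical
// stand-in for GaudiKernel's StreamBuffer, for illustration only).
class ToyBuffer {
public:
  ToyBuffer& operator<<(double v) { write(&v, sizeof v); return *this; }
  ToyBuffer& operator>>(double& v) { read(&v, sizeof v); return *this; }
private:
  void write(const void* p, std::size_t n) {
    const char* c = static_cast<const char*>(p);
    m_data.insert(m_data.end(), c, c + n);   // append raw bytes
  }
  void read(void* p, std::size_t n) {
    std::memcpy(p, m_data.data() + m_pos, n); // consume raw bytes in order
    m_pos += n;
  }
  std::vector<char> m_data;
  std::size_t m_pos = 0;
};

// A track-like class following the same serialize pattern as MyTrack.
struct ToyTrack {
  double px, py, pz;
  // Writing: the const overload streams members out.
  ToyBuffer& serialize(ToyBuffer& s) const { return s << px << py << pz; }
  // Reading: the non-const overload streams members back in, same order.
  ToyBuffer& serialize(ToyBuffer& s)       { return s >> px >> py >> pz; }
};
```

Because the const overload is selected for const objects, writing and reading use the same method name, exactly as in the listings above.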
15.8 Storing Data using other I/O Engines<br />
Once objects are stored as BLOBs, it is possible to adopt any storage technology supporting this<br />
datatype. This is the case not only for ROOT, but also for<br />
• Objectivity/DB<br />
• most relational databases, which support an ODBC interface like<br />
• Microsoft Access,<br />
• Microsoft SQL Server,<br />
• MySQL,<br />
• ORACLE and others.<br />
Note that although storing objects using these technologies is possible, there is currently no<br />
implementation available in the <strong>Athena</strong> release. If you desperately want to use Objectivity or one of the<br />
ODBC databases, please contact Markus Frank (Markus.Frank@cern.ch).<br />
Chapter 16<br />
Visualization<br />
16.1 Overview<br />
This chapter is a placeholder for documenting the experiment-specific visualization information.
Chapter 17<br />
Physical design issues<br />
17.1 Overview<br />
This chapter discusses several physical design issues. These include how to access the kernel GAUDI<br />
framework from within the ATLAS SRT software environment, and how to deal with the different<br />
types of libraries that are relevant.<br />
17.2 Accessing the kernel GAUDI framework<br />
<strong>Athena</strong> is based upon the kernel GAUDI framework, and therefore ATLAS packages that create<br />
components such as Algorithms or Services need access to components, header files and helper classes<br />
contained within that framework. The GAUDI framework is treated as an external package by the<br />
ATLAS computing environment. The required access is provided by the GaudiInterface ATLAS<br />
package.<br />
17.2.1 The GaudiInterface Package<br />
ATLAS packages that create Algorithms or Services should include the line shown in Listing 17.1 in<br />
their PACKAGE file:<br />
Listing 17.1 Entry in PACKAGE file for a SRT package<br />
use GaudiInterface 0 External
Notes:<br />
1. The GaudiInterface package resides in the offline/External SRT hierarchy.<br />
While this should generally be sufficient for most packages, the GaudiInterface package also defines<br />
several linksets that can be used to access other aspects of the GAUDI installation itself. These linksets<br />
are the following:<br />
GaudiInterface[DbCnv]: This linkset is obsolete, having been replaced by the GaudiDb linkset. It is retained for backwards compatibility and will be removed in a future release.

GaudiInterface[GaudiAlg]: This allows developers to inherit from the Sequencer class and override its default behaviour.

GaudiInterface[GaudiDb]: This allows access to the StreamBuffer I/O support for DataObjects and ContainedObjects. It is a replacement for the DbCnv linkset, which is deprecated.
These auxiliary linksets should be used in conjunction with the primary one, as illustrated in Listing 17.2.
Listing 17.2 Linkset entry in PACKAGE file for a Package that depends upon GAUDI<br />
use GaudiInterface 0 External
use GaudiInterface[GaudiDb] 0 External
17.3 Framework libraries<br />
Three different sorts of library are relevant to the framework: component libraries, linker libraries, and dual-purpose libraries. These libraries serve different purposes and are built in different ways.
17.3.1 Component libraries<br />
Component libraries are shared libraries that contain standard framework components which implement<br />
abstract interfaces. Such components are Algorithms, Auditors, Services, Tools or Converters.<br />
Component libraries are treated in a special manner by <strong>Athena</strong>, and should not be linked against. They<br />
do not export their symbols except for a special one which is used by the framework to discover what<br />
components are contained by the library. Thus component libraries are used purely at run-time, being<br />
loaded dynamically upon request, the configuration being specified by the job options file. Changes in<br />
the implementation of a component library do not require the application to be relinked.<br />
Component libraries contain factories for their components, and it is important that the factory entries<br />
are declared and loaded correctly. The following sections describe how this is done.
When a component library is loaded, the framework attempts to locate a single entrypoint, called<br />
getFactoryEntries(). This is expected to declare and load the component factories from the<br />
library. Several macros are available to simplify the declaration and loading of the components via this<br />
mechanism.<br />
Consider a simple package MyComponents, that declares and defines the MyAlgorithm class,<br />
being a subclass of Algorithm, and the MyService class, being a subclass of Service. Thus the<br />
package will contain the header and implementation files for these classes (MyAlgorithm.h,<br />
MyAlgorithm.cxx, MyService.h and MyService.cxx) in addition to whatever other files<br />
are necessary for the correct functioning of these components.<br />
In order to satisfy the requirements of a component library, two additional files must also be present in<br />
the package. One is used to declare the components, the other to load them. Because of the technical<br />
limitations inherent in the use of shared libraries, it is important that these two files remain separate,<br />
and that no attempt is made to combine their contents into a single file.<br />
The names of these files and their contents are described in the following sections.<br />
17.3.1.1 Declaring Components<br />
Components within the component library are declared in a file MyComponents_entries.cxx.<br />
By convention, the name of this file is the package name concatenated with _entries. The contents<br />
of this file are shown in Listing 17.3:<br />
Listing 17.3 The MyComponents_entries.cxx file<br />
#include "Gaudi/Kernel/DeclareFactoryEntries.h"<br />
DECLARE_FACTORY_ENTRIES( MyComponents ) { [1]<br />
DECLARE_ALGORITHM( MyAlgorithm ); [2]<br />
DECLARE_SERVICE ( MyService );<br />
}<br />
Notes:<br />
1. The argument to the DECLARE_FACTORY_ENTRIES statement is the name of the<br />
component library.<br />
2. Each component within the library should be declared using one of the DECLARE_XXX<br />
statements discussed in detail in the next Section.<br />
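Conceptually, each DECLARE_XXX statement registers a factory for a component under its class name, so that the framework can later instantiate components by name at run-time. The following self-contained sketch illustrates the idea with hypothetical names (TOY_DECLARE_ALGORITHM, registry); it is not the actual expansion of the Gaudi macros.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Minimal component interface standing in for the real Algorithm base class.
struct IAlgorithm {
  virtual ~IAlgorithm() {}
  virtual std::string name() const = 0;
};

// Global factory registry keyed by component class name. A function-local
// static avoids static-initialisation-order problems.
std::map<std::string, std::function<IAlgorithm*()>>& registry() {
  static std::map<std::string, std::function<IAlgorithm*()>> r;
  return r;
}

// Toy stand-in for DECLARE_ALGORITHM: registers a factory at static
// initialisation time, which is the effect the real macros achieve.
#define TOY_DECLARE_ALGORITHM(X) \
  static const bool X##_registered = \
      (registry()[#X] = []() -> IAlgorithm* { return new X; }, true)

struct MyAlgorithm : IAlgorithm {
  std::string name() const override { return "MyAlgorithm"; }
};
TOY_DECLARE_ALGORITHM(MyAlgorithm);
```

With such a registry in place, the framework needs only the class name from the job options to create the component, which is why no client ever links against a component library.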
17.3.1.1.1 Component declaration statements<br />
The complete set of statements that are available for declaring components is shown in Listing 17.4. They include statements that support C++ classes in explicit namespaces, as well as statements for DataObjects or ContainedObjects using the generic converters.
Listing 17.4 The available component declaration statements<br />
DECLARE_ALGORITHM(X)<br />
DECLARE_ALGTOOL(X)<br />
DECLARE_AUDITOR(X)<br />
DECLARE_CONVERTER(X)<br />
DECLARE_GENERIC_CONVERTER(X) [1]<br />
DECLARE_OBJECT(X)<br />
DECLARE_SERVICE(X)<br />
DECLARE_TOOL(X) [2]<br />
DECLARE_NAMESPACE_ALGORITHM(N,X) [3]<br />
DECLARE_NAMESPACE_ALGTOOL(N,X)<br />
DECLARE_NAMESPACE_AUDITOR(N,X)<br />
DECLARE_NAMESPACE_CONVERTER(N,X)<br />
DECLARE_NAMESPACE_GENERIC_CONVERTER(N,X)<br />
DECLARE_NAMESPACE_OBJECT(N,X)<br />
DECLARE_NAMESPACE_SERVICE(N,X)<br />
DECLARE_NAMESPACE_TOOL(N,X)<br />
Notes:<br />
1. Declarations of the form DECLARE_GENERIC_CONVERTER(X) are used to declare the<br />
generic converters for DataObject and ContainedObject classes. For DataObject<br />
classes, the argument should be the class name itself (e.g. EventHeader), whereas for<br />
ContainedObject classes, the argument should be the class name concatenated with<br />
either List or Vector (e.g. CellVector) depending on whether the objects are<br />
associated with an ObjectList or ObjectVector.<br />
2. The DECLARE_ALGTOOL(X) and DECLARE_TOOL(X) declarations are synonyms of each<br />
other.<br />
3. Declarations of this form are used to declare components from explicit C++ namespaces. The<br />
first argument is the namespace (e.g. Atlfast), and the second is the class name (e.g.<br />
CellMaker).<br />
17.3.1.2 Loading Components<br />
Components within the component library are loaded in a file MyComponents_load.cxx. By<br />
convention, the name of this file is the package name concatenated with _load. The contents of this<br />
file are shown in Listing 17.5:<br />
Listing 17.5 The MyComponents_load.cxx file<br />
#include "Gaudi/Kernel/LoadFactoryEntries.h"<br />
LOAD_FACTORY_ENTRIES( MyComponents ) [1]<br />
Notes:<br />
1. The argument to the LOAD_FACTORY_ENTRIES statement is the name of the component<br />
library.<br />
17.3.1.3 Specifying component libraries at run-time<br />
The job options file and Python script fragments that specify the component library at run-time are shown in Listing 17.6a and Listing 17.6b.
Listing 17.6a Job options file fragment for specifying a component library at run-time<br />
ApplicationMgr.Dlls += { "MyComponents" }; [1]<br />
Listing 17.6b Python script fragment for specifying a component library at run-time<br />
theApp.Dlls = [ "MyComponents" ] [1]<br />
Notes:<br />
1. This is a list property, allowing multiple such libraries to be specified in a single line,<br />
separated by commas.<br />
2. It is important to use the “+=” syntax in a job options file in order to append the new<br />
component library to any that might already have been configured. This is not necessary for a<br />
Python script.<br />
17.3.2 Linker Libraries<br />
These are libraries containing implementation classes: for example, libraries containing the code of base classes, or of concrete classes without abstract interfaces. In contrast to component libraries, these libraries export all their symbols and are needed during the linking phase of application building. They can be linked to the application either statically or dynamically, through the use of the corresponding type of library. In the first case the code is added physically to the executable file.
In this case, changes in these libraries require the application to be re-linked. In the second case, the<br />
linker only adds to the executable the minimal information required for loading the library and<br />
resolving the symbols at run time. Locating and loading the proper shared library at run-time is done<br />
using the LD_LIBRARY_PATH environment variable. Changes to the linker library will only require<br />
the application to be relinked if there is an interface change.<br />
17.3.3 Dual purpose libraries and library strategy<br />
It is also possible to have dual-purpose libraries: ones which are simultaneously component and linker libraries. In general such libraries contain DataObjects and ContainedObjects, together with their converters and associated factories. They are linker libraries, since clients of these classes need access to the header files and code, but they are also component libraries, since they have factories associated with the converters.
It is recommended that such dual purpose libraries be separated from single purpose component or<br />
linker libraries. Consider the case where several Algorithms share the use of several DataObjects (e.g.<br />
where one Algorithm creates and registers them with the transient event store, and another downstream<br />
Algorithm locates them), and also share the use of some helper classes in order to decode and<br />
manipulate the contents of the DataObjects. It is recommended that three different packages be used for<br />
this; one pure component package for the Algorithms, one dual-purpose for the DataObjects, and one<br />
pure linker package for the helper classes. Obviously the package handling the Algorithms will in<br />
general depend on the other packages. However, no other package should in general depend on the<br />
package handling the component library.<br />
17.3.4 Building the different types of libraries<br />
Using ATLAS SRT, component and linker libraries are differentiated by different options that are<br />
passed through to the linker. Specifically, component and dual-purpose libraries require lines to be<br />
added to the package GNUmakefile.in file, as illustrated in Listing 17.7.<br />
Listing 17.7 Linker options for component library in GNUmakefile.in<br />
libMyComponents.so_LDFLAGS = -Wl,-Bsymbolic
Notes:<br />
1. This is Linux-specific. The equivalent instructions for other platforms still need to be defined.
17.3.5 Linking FORTRAN code<br />
Any library containing FORTRAN code (more specifically, code that references COMMON blocks)<br />
must be linked statically. This is because COMMON blocks are, by definition, static entities. When<br />
mixing C++ code with FORTRAN, it is recommended that separate libraries for the C++ and
FORTRAN are built, and the code is written in such a way that communication between the C++ and<br />
FORTRAN worlds is done exclusively via wrappers. In this way it is possible to build shareable<br />
libraries for the C++ code, even if it calls FORTRAN code internally.<br />
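The wrapper approach can be sketched as follows. The routine AVERAGE and all names here are hypothetical; on Linux, FORTRAN compilers typically export a routine as a lower-case symbol with a trailing underscore, taking its arguments by reference. A stand-in C++ definition of the "FORTRAN" symbol is included only to make the sketch self-contained; in a real package it would come from the statically linked FORTRAN library.

```cpp
#include <cassert>

// Declaration of the (hypothetical) FORTRAN routine AVERAGE as seen from
// C++: lower-case name with trailing underscore, arguments by reference.
extern "C" double average_(const double* a, const double* b);

// C++ wrapper: clients call this and never see the FORTRAN calling
// convention, so the C++ side can live in a shareable library.
double average(double a, double b) { return average_(&a, &b); }

// Stand-in definition so this sketch compiles on its own; in reality this
// symbol is provided by the statically linked FORTRAN library.
extern "C" double average_(const double* a, const double* b) {
  return 0.5 * (*a + *b);
}
```

Keeping all such extern "C" declarations in one wrapper layer confines the static-linking constraint to the FORTRAN library alone.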
Chapter 18<br />
Framework packages, interfaces and<br />
libraries<br />
18.1 Overview<br />
It is clearly important to decompose large software systems into hierarchies of smaller and more<br />
manageable entities. This decomposition can have important consequences for implementation related<br />
issues, such as compile-time and link dependencies, configuration management, etc. A package is the<br />
grouping of related components into a cohesive physical entity. A package is also the minimal unit of<br />
software release.<br />
In this chapter we describe the Gaudi package structure, and how these packages are implemented in<br />
libraries.<br />
18.2 Gaudi Package Structure<br />
The Gaudi software is decomposed into the packages shown in Figure 18.1.
At the lowest level we find GaudiKernel, which is the framework itself, and whose only dependency
is on the GaudiPolicy package, which contains the various flags defining the CMT [6]<br />
configuration management environment needed to build the Gaudi software. At the next level are the<br />
packages containing standard framework components (GaudiSvc, GaudiDb, GaudiTools,<br />
GaudiAlg, GaudiAud), which depend on the framework and on widely available foundation<br />
libraries such as CLHEP and HTL. These external libraries are accessed via CMT interface packages, which use environment variables defined in the ExternalLibs package; this package should be tailored to the software installation at a given site. All the above packages are grouped into the GaudiSys set of packages, which is the minimal set required for a complete Gaudi installation.

Figure 18.1 Package structure of the Gaudi software. [Diagram: GaudiKernel (foundations) depends only on GaudiPolicy (configuration). The standard component packages GaudiSvc (services), GaudiDb (persistency), GaudiTools (tools), GaudiAlg (algorithms) and GaudiAud (monitoring) depend on GaudiKernel and on the external packages CLHEP and HTL; together these form GaudiSys. The optional packages HbookCnv (converters), RootHistCnv (converters), GaudiRootDb (persistency) and SIPython (scripting) depend on the external packages CernLib, ROOT and Python via ExternalLibs (configuration). GaudiExamples (applications) sits at the top.]
The remaining packages are optional packages which can be used according to the specific technology<br />
choices for a given application. In this distribution, there are two specific implementations of the<br />
histogram persistency service, based on HBOOK (HbookCnv) and ROOT (RootHistCnv) and one<br />
implementation of the event data persistency service (GaudiRootDb) which understands ROOT<br />
databases. There is also a prototype scripting service (SIPython) depending on the Python scripting<br />
language. Finally, at the top level we find the applications (GaudiExamples) which depend on<br />
GaudiSys and the scripting and persistency services.<br />
18.2.1 Gaudi Package Layout<br />
Figure 18.2 shows the layout for Gaudi packages. Note that the binary directories are not in CVS; they are created by CMT when building a package.

Figure 18.2 Layout of Gaudi software packages. [Diagram: a package packA, rooted at $PACKAROOT, contains one directory per version number (v1, v1r1, v2). Each version directory contains cmt, src (internal include files and source code), packA (external include files, referenced as #include “packA/xxx.h”), doc, and the binary directories (e.g. i386-linux22, Win32Debug). Deep directory hierarchies should be avoided for both internal and external include files.]
18.2.2 Packaging <strong>Guide</strong>lines<br />
Packaging is an important architectural issue for the Gaudi framework, and also for the experiment-specific software packages based on Gaudi. Typically, experiment packages consist of:
• Specific event model<br />
• Specific detector description<br />
• Sets of algorithms (digitisation, reconstruction, etc.)<br />
The packaging should be such as to minimise the dependencies between packages, and must absolutely<br />
avoid cyclic dependencies. The granularity should not be too small or too big. Care should be taken to<br />
identify the external interfaces of packages: if the same interfaces are shared by many packages, they<br />
should be promoted to a more basic package that the others would then depend on. It is a good idea to<br />
discuss your packaging with the librarian and/or architect.<br />
18.3 Interfaces in Gaudi<br />
One of the main design choices at the architecture level in Gaudi was to favour abstract interfaces when<br />
building collaborations of various classes. This is the way we best decouple the client of a class from its<br />
real implementation.<br />
An abstract interface in C++ is a class where all the methods are pure virtual. We have defined some<br />
practical guidelines for defining interfaces. An example is shown in Listing 18.1:<br />
Listing 18.1 Example of an abstract interface (IService)<br />
1: // $Header: $<br />
2: #ifndef GAUDIKERNEL_ISERVICE_H<br />
3: #define GAUDIKERNEL_ISERVICE_H<br />
4:<br />
5: // Include files<br />
6: #include "GaudiKernel/IInterface.h"<br />
7: #include <string><br />
8:<br />
9: // Declaration of the interface ID. (id, major, minor)<br />
10: static const InterfaceID IID_IService(2, 1, 0);<br />
11:<br />
12: /** @class IService IService.h GaudiKernel/IService.h<br />
13:<br />
14: General service interface definition<br />
15:<br />
16: @author Pere Mato<br />
17: */<br />
18: class IService : virtual public IInterface {<br />
19: public:<br />
20: /// Retrieve name of the service<br />
21: virtual const std::string& name() const = 0;<br />
22: /// Retrieve ID of the Service. Not really used.<br />
23: virtual const IID& type() const = 0;<br />
24: /// Initialize Service<br />
25: virtual StatusCode initialize() = 0;<br />
26: /// Finalize Service<br />
27: virtual StatusCode finalize() = 0;<br />
28: /// Retrieve interface ID<br />
29: static const InterfaceID& interfaceID() { return IID_IService; }<br />
30: };<br />
31:<br />
32: #endif // GAUDIKERNEL_ISERVICE_H<br />
33:<br />
From this example we can make the following observations:<br />
• Interface Naming. The name of the class has to start with capital “I” to denote that it is an<br />
interface.
• Derived from IInterface. We follow the convention that all interfaces should be derived from a basic interface, IInterface. This interface defines three methods: addRef(), release() and queryInterface(). These methods allow the framework to manage the reference counting of framework components, and make it possible to obtain a different interface of a component starting from any of its interfaces (see later).
• Pure Abstract Methods. All the methods should be pure abstract (virtual ReturnType method(...) = 0;), with the exception of the static method interfaceID() (see later) and some inline templated methods that facilitate the use of the interface by the end-user.
• Interface ID. Each interface should have a unique identification (see later) used by the query<br />
interface mechanism.<br />
18.3.1 Interface ID<br />
We needed to introduce an interface ID to identify interfaces for the queryInterface functionality. The interface ID is made of a numerical identifier, which needs to be allocated off-line, together with major and minor version numbers. The version numbers are used to decide whether the interface that the service provider returns is compatible with the interface the client expects. The rules for deciding whether an interface request is compatible are:
• The interface identifier is the same.
• The major version is the same.
• The minor version of the client is less than or equal to that of the service provider. This allows the service provider to add functionality (incrementing the minor version number) while keeping old clients compatible.
The interface ID is defined in the same header file as the rest of the interface. Care should be taken to allocate the interface identifier globally, and to modify the version whenever the interface changes, according to the rules above. Of course, changes to interfaces should be minimized.
static const InterfaceID IID_Ixxx(2 /*id*/, 1 /*major*/, 0 /*minor*/);<br />
class Ixxx : public IInterface {<br />
. . .<br />
static const InterfaceID& interfaceID() { return IID_Ixxx; }<br />
};<br />
The static method Ixxx::interfaceID() is useful for the implementation of templated methods<br />
and classes using an interface as template parameter. The construct T::interfaceID() returns the<br />
interface ID of interface T.<br />
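The compatibility rules above can be expressed compactly in code. The ToyInterfaceID class below is a hypothetical sketch of the version-matching logic only; it is not the actual InterfaceID implementation in GaudiKernel.

```cpp
#include <cassert>

// Toy interface ID implementing the compatibility rules described above
// (illustrative names; the real class lives in GaudiKernel).
class ToyInterfaceID {
public:
  ToyInterfaceID(unsigned long id, unsigned long major, unsigned long minor)
    : m_id(id), m_major(major), m_minor(minor) {}
  unsigned long id() const { return m_id; }
  unsigned long majorVersion() const { return m_major; }
  unsigned long minorVersion() const { return m_minor; }
  // 'this' is the service provider's ID; 'req' is the client's request.
  bool versionMatch(const ToyInterfaceID& req) const {
    return req.id() == m_id                  // same interface identifier
        && req.majorVersion() == m_major     // same major version
        && req.minorVersion() <= m_minor;    // provider may be newer in minor
  }
private:
  unsigned long m_id, m_major, m_minor;
};
```

A provider at version 1.1 of interface 2 therefore accepts clients requesting 1.0 or 1.1, but rejects a client requesting 1.2, a different major version, or a different identifier.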
18.3.2 Query Interface<br />
The method queryInterface() is used to request a reference to an interface implemented by a component<br />
within the Gaudi framework. This method is implemented by each component class of the framework.<br />
Typically, this is not very visible since it is done in the base class from which you inherit. A typical<br />
implementation looks like this:<br />
Listing 18.2 Example implementation of queryInterface()<br />
1: StatusCode DataSvc::queryInterface(const InterfaceID& riid,<br />
2: void** ppvInterface) {<br />
3: if ( IID_IDataProviderSvc.versionMatch(riid) ) {<br />
4: *ppvInterface = (IDataProviderSvc*)this;<br />
5: }<br />
6: else if ( IID_IDataManagerSvc.versionMatch(riid) ) {<br />
7: *ppvInterface = (IDataManagerSvc*)this;<br />
8: }<br />
9: else {<br />
10: return Service::queryInterface(riid, ppvInterface);<br />
11: }<br />
12: addRef();<br />
13: return SUCCESS;<br />
14: }<br />
The implementation returns the corresponding interface pointer if there is a match between the received<br />
InterfaceID and the implemented one. The method versionMatch() takes into account the rules<br />
mentioned in Section 18.3.1.<br />
If the requested interface is not recognized at this level (line 9), the call can be forwarded to the<br />
inherited base class or possible sub-components of this component.<br />
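From the client side, queryInterface() is used to navigate from one interface of a component to another. The following self-contained mock, with hypothetical interface IDs and without the reference counting and version matching of the real framework, illustrates the complete round trip.

```cpp
#include <cassert>
#include <string>

// Hypothetical interface IDs; the real ones are InterfaceID objects
// allocated globally, as described in Section 18.3.1.
typedef unsigned InterfaceID_t;
const InterfaceID_t IID_IInterface       = 1;
const InterfaceID_t IID_IDataProviderSvc = 2;
const InterfaceID_t IID_IHistogramSvc    = 3;  // not implemented below

struct IInterface {
  virtual ~IInterface() {}
  virtual bool queryInterface(InterfaceID_t riid, void** ppv) = 0;
};

struct IDataProviderSvc : virtual public IInterface {
  virtual std::string name() const = 0;
};

class DataSvc : public IDataProviderSvc {
public:
  bool queryInterface(InterfaceID_t riid, void** ppv) override {
    if (riid == IID_IDataProviderSvc) *ppv = static_cast<IDataProviderSvc*>(this);
    else if (riid == IID_IInterface)  *ppv = static_cast<IInterface*>(this);
    else { *ppv = nullptr; return false; }
    return true;  // a real implementation would also call addRef() here
  }
  std::string name() const override { return "DataSvc"; }
};
```

Given only an IInterface pointer, a client can thus obtain the IDataProviderSvc view of the same component, or discover that an interface is not implemented.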
18.4 Libraries in Gaudi<br />
Two different sorts of library can be identified that are relevant to the framework. These are component<br />
libraries, and linker libraries. These libraries are used for different purposes and are built in different<br />
ways.<br />
18.4.1 Component libraries<br />
Component libraries are shared libraries that contain standard framework components which implement<br />
abstract interfaces. Such components are Algorithms, Auditors, Services, Tools or Converters. These<br />
libraries do not export their symbols apart from one which is used by the framework to discover what<br />
components are contained by the library. Thus component libraries should not be linked against, they<br />
are used purely at run-time, being loaded dynamically upon request, the configuration being specified
<strong>Athena</strong><br />
Chapter 18 Framework packages, interfaces and libraries Version/Issue: 2.0.0<br />
by the job options file. Changes in the implementation of a component library do not require the<br />
application to be relinked.<br />
Component libraries contain factories for their components, and it is important that the factory entries<br />
are declared and loaded correctly. The following sections describe how this is done.<br />
When a component library is loaded, the framework attempts to locate a single entrypoint, called<br />
getFactoryEntries(). This is expected to declare and load the component factories from the<br />
library. Several macros are available to simplify the declaration and loading of the components via this<br />
function.<br />
Consider a simple package MyComponents, that declares and defines the MyAlgorithm class,<br />
being a subclass of Algorithm, and the MyService class, being a subclass of Service. Thus the<br />
package will contain the header and implementation files for these classes (MyAlgorithm.h,<br />
MyAlgorithm.cpp, MyService.h and MyService.cpp) in addition to whatever other files<br />
are necessary for the correct functioning of these components.<br />
In order to satisfy the requirements of a component library, two additional files must also be present in<br />
the package. One is used to declare the components, the other to load them. Because of the technical<br />
limitations inherent in the use of shared libraries, it is important that these two files remain separate,<br />
and that no attempt is made to combine their contents into a single file.<br />
The names of these files and their contents are described in the following sections.<br />
18.4.1.1 Declaring Components 1<br />
Components within the component library are declared in a file MyComponents_entries.cpp.<br />
By convention, the name of this file is the package name concatenated with _entries. The contents<br />
of this file are shown below:<br />
Listing 18.3 The MyComponents_entries.cpp file<br />
#include "GaudiKernel/DeclareFactoryEntries.h"<br />
DECLARE_FACTORY_ENTRIES( MyComponents ) { [1]<br />
DECLARE_ALGORITHM( MyAlgorithm ); [2]<br />
DECLARE_SERVICE ( MyService );<br />
}<br />
Notes:
1. The argument to the DECLARE_FACTORY_ENTRIES statement is the name of the component library.
2. Each component within the library should be declared using one of the DECLARE_XXX statements discussed in detail in the next Section.

[Footnote 1] The macros described in this section are currently only implemented in the Atlas extensions to Gaudi. They are documented here because they are expected to be included in a future release of Gaudi; the current release of Gaudi uses a similar mechanism, but with slightly different naming conventions.
18.4.1.2 Component declaration statements<br />
The complete set of statements that are available for declaring components is given below. They include statements that support C++ classes in explicit namespaces, as well as statements for DataObjects or ContainedObjects using the generic converters.
Listing 18.4 The available component declaration statements<br />
DECLARE_ALGORITHM(X)<br />
DECLARE_AUDITOR(X)<br />
DECLARE_CONVERTER(X)<br />
DECLARE_GENERIC_CONVERTER(X) [1]<br />
DECLARE_OBJECT(X)<br />
DECLARE_SERVICE(X)<br />
DECLARE_NAMESPACE_ALGORITHM(N,X) [2]<br />
DECLARE_NAMESPACE_AUDITOR(N,X)<br />
DECLARE_NAMESPACE_CONVERTER(N,X)<br />
DECLARE_NAMESPACE_GENERIC_CONVERTER(N,X)<br />
DECLARE_NAMESPACE_OBJECT(N,X)<br />
DECLARE_NAMESPACE_SERVICE(N,X)<br />
Notes:<br />
1. Declarations of the form DECLARE_GENERIC_CONVERTER(X) are used to declare the<br />
generic converters for DataObject and ContainedObject classes. For DataObject<br />
classes, the argument should be the class name itself (e.g. EventHeader), whereas for<br />
ContainedObject classes, the argument should be the class name concatenated with<br />
either List or Vector (e.g. CellVector) depending on whether the objects are<br />
associated with an ObjectList or ObjectVector.<br />
2. Declarations of this form are used to declare components from explicit C++ namespaces. The<br />
first argument is the namespace (e.g. Atlfast), the second is the class name (e.g.<br />
CellMaker).<br />
18.4.1.3 Loading Components<br />
Components within the component library are loaded in a file MyComponents_load.cpp. By<br />
convention, the name of this file is the package name concatenated with _load. The contents of this<br />
file are shown below:<br />
Listing 18.5 The MyComponents_load.cpp file<br />
#include "GaudiKernel/LoadFactoryEntries.h"<br />
LOAD_FACTORY_ENTRIES( MyComponents ) [1]<br />
Notes:<br />
1. The argument of LOAD_FACTORY_ENTRIES is the name of the component library.<br />
18.4.1.4 Specifying component libraries at run-time<br />
The fragment of the job options file that specifies the component library at run-time is shown below.
Listing 18.6 Job options file fragment for specifying a component library at run-time
ApplicationMgr.Dlls += { "MyComponents" }; [1]
Notes:<br />
1. This is a list property, allowing multiple such libraries to be specified in a single line.<br />
2. It is important to use the “+=” syntax to append the new component library or libraries to any<br />
that might already have been configured.<br />
The convention in Gaudi is that component libraries have the same name as the package they belong to<br />
(prefixed by "lib" on Linux). When trying to load a component library, the framework will look for it<br />
in various places following this sequence:<br />
— Look for an environment variable with the name of the package, suffixed by "Shr" (e.g.<br />
${MyComponentsShr}). If it exists, it should translate to the full name of the library,<br />
without the file type suffix (e.g. ${MyComponentsShr}<br />
="$MYSOFT/MyComponents/v1/i386_linux22/libMyComponents" ).<br />
— Try to locate the file libMyComponents.so using the LD_LIBRARY_PATH (on Linux),<br />
or MyComponents.dll using the PATH (on Windows).<br />
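The name-resolution part of this search sequence can be sketched in a few lines of C++. This is an illustrative reconstruction of the rules described above, not the framework's actual loader code, and the environment-variable and file names are only examples:<br />

```cpp
#include <cstdlib>
#include <string>

// Resolve the file name of a component library following the search
// sequence described above (simplified sketch):
// 1. If an environment variable "<Package>Shr" exists, its value is the
//    full library name without the file type suffix; append it.
// 2. Otherwise fall back to the conventional "lib<Package>.so" name,
//    which the dynamic loader locates via LD_LIBRARY_PATH on Linux.
std::string resolveComponentLibrary(const std::string& package) {
    const std::string envName = package + "Shr";
    if (const char* v = std::getenv(envName.c_str()))
        return std::string(v) + ".so";   // env var gives name without suffix
    return "lib" + package + ".so";      // Linux naming convention
}
```

On Windows the fallback would instead be "MyComponents.dll" searched via PATH; the environment-variable step is what the `apply_pattern packageShr` CMT pattern mentioned later is designed to support.<br />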
18.4.2 Linker libraries<br />
These are libraries containing implementation classes: for example, the code of a number of base<br />
classes, or of specific classes without abstract interfaces. These libraries, contrary to<br />
the component libraries, export all the symbols and are needed during the linking phase in the<br />
application building. These libraries can be linked to the application "statically" or "dynamically",<br />
requiring a different file format. In the first case the code is added physically to the executable file. In<br />
this case, changes in these libraries require the application to be re-linked, even if these changes do not<br />
affect the interfaces. In the second case, the linker only adds into the executable minimal information<br />
required for loading the library and resolving the symbols at run time. Locating and loading the proper<br />
shareable library at run time is done exclusively using the LD_LIBRARY_PATH for Linux and PATH<br />
for Windows. The convention in Gaudi is that linker libraries have the same name as the package,<br />
suffixed by "Lib" (and prefixed by "lib" on Linux, e.g. libMyComponentsLib.so).<br />
18.4.3 Library strategy and dual purpose libraries<br />
Because component libraries are not designed to be linked against, it is important to separate the<br />
functionalities of these libraries from linker libraries. For example, consider the case of a DataProvider<br />
service that provides DataObjects for clients. It is important that the declarations and definitions of the<br />
DataObjects be handled by a different shared library than that handling the service itself. This implies<br />
the presence of two different packages - one for the component library, the other for the DataObjects.<br />
Clients should only depend on the second of these packages. Obviously the package handling the<br />
component library will in general also depend on the second package.<br />
It is possible to have dual purpose libraries - ones which are simultaneously component and linker<br />
libraries. In general such libraries will contain DataObjects and ContainedObjects, together with their<br />
converters and associated factories. It is recommended that such dual purpose libraries be separated<br />
from single purpose component or linker libraries. Consider the case where several Algorithms share<br />
the use of several DataObjects (e.g. where one Algorithm creates them and registers them with the<br />
transient event store, and another Algorithm locates them), and also share the use of some helper<br />
classes in order to decode and manipulate the contents of the DataObjects. It is recommended that three<br />
different packages be used for this - one pure component package for the Algorithms, one dual-purpose<br />
for the DataObjects, and one pure linker package for the helper classes.<br />
18.4.4 Building and linking with the libraries<br />
Gaudi libraries and applications are built using CMT, but may be used also by experiments using other<br />
configuration management tools, such as ATLAS SRT.<br />
18.4.4.1 Building and linking to the Gaudi libraries with CMT<br />
Gaudi libraries and applications are built using CMT taking advantage of the CMT macros defined in<br />
the GaudiPolicy package. As an example, the CMT requirements file of the GaudiTools<br />
package is shown in Listing 18.7. The linker and component libraries are defined on lines 23 and 26<br />
respectively - the linker library is defined first because it must be built ahead of the component library.<br />
Lines 28 and 34 set up the generic linker options and flags for the linker library, which are suffixed by<br />
the package specific flags set up by line 35. Line 31 tells CMT to generate the symbols needed for the
component library, while line 33 sets up the corresponding linker flags for the component library.<br />
Finally, line 30 updates LD_LIBRARY_PATH (or PATH on Windows) for this package. In packages<br />
with only a component library and no linker library, line 30 could be replaced by "apply_pattern<br />
packageShr", which would create the logical name required to access the component library by the<br />
first of the two methods described in Section 18.4.1.4.<br />
Listing 18.7 CMT requirements file for the GaudiTools package<br />
15: package GaudiTools<br />
16: version v1<br />
17:<br />
18: branches GaudiTools cmt doc src<br />
19: use GaudiKernel v8*<br />
20: include_dirs "$(GAUDITOOLSROOT)"<br />
21:<br />
22: #linker library<br />
23: library GaudiToolsLib ../src/Associator.cpp ../src/IInterface.cpp<br />
24:<br />
25: #component library<br />
26: library GaudiTools ../src/GaudiTools_load.cpp ../src/GaudiTools_dll.cpp<br />
27:<br />
28: apply_pattern package_Llinkopts<br />
29:<br />
30: apply_pattern ld_library_path<br />
31: macro_append GaudiTools_stamps "$(GaudiToolsDir)/GaudiToolsLib.stamp"<br />
32:<br />
33: apply_pattern package_Cshlibflags<br />
34: apply_pattern package_Lshlibflags<br />
35: macro_append GaudiToolsLib_shlibflags $(GaudiKernel_linkopts)<br />
18.4.4.2 Linking to the Gaudi libraries with Atlas SRT<br />
Using ATLAS SRT, component and linker libraries are differentiated by different options that are<br />
passed through to the linker. Specifically, component and dual-purpose libraries require lines to be<br />
added to the package GNUmakefile.in file, as illustrated in Listing 18.8.<br />
Listing 18.8 Linker options for component library in GNUmakefile.in<br />
libMyComponents.so_LDFLAGS = -Wl,Bsymbolic<br />
libMyComponents.so_LIBS = -lGaudiBase [1]<br />
Notes:<br />
1. This line is only strictly necessary when other dual-purpose libraries are linked to as a result<br />
of the package dependencies. However, since this dependency can come from other packages<br />
than those that the current package directly depends on, it is recommended that this line<br />
always be present. It is not harmful if used when it isn’t necessary.<br />
18.4.5 Linking FORTRAN code<br />
Any library containing FORTRAN code (more specifically, code that references COMMON blocks)<br />
must be linked statically. This is because COMMON blocks are, by definition, static entities. When<br />
mixing C++ code with FORTRAN, it is recommended to build separate libraries for the C++ and<br />
FORTRAN, and to write the code in such a way that communication between the C++ and FORTRAN<br />
worlds is done exclusively via wrappers. This makes it possible to build shareable libraries for the C++<br />
code, even if it calls FORTRAN code internally.<br />
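The recommended wrapper pattern can be sketched as follows. To keep the example self-contained, a C stub with a global struct stands in for the FORTRAN SUBROUTINE and its COMMON block; in real code these symbols would come from the FORTRAN library (with compiler-dependent name decoration), and all names here are hypothetical.<br />

```cpp
// Stand-ins for FORTRAN entities. A COMMON block appears to C++ as an
// extern "C" global struct, and a SUBROUTINE as an extern "C" function,
// typically with a trailing underscore in its name.
extern "C" {
    struct { double pt; int nhits; } trkcom_;   // COMMON /TRKCOM/ PT, NHITS
    void trkfit_(const int* n) {                // SUBROUTINE TRKFIT(N)
        // Stub standing in for the FORTRAN implementation.
        trkcom_.nhits = *n;
        trkcom_.pt = 2.0 * (*n);
    }
}

// C++ wrapper: the only place that touches the FORTRAN world.
// Clients use this class and never see the COMMON block directly,
// so the C++ side can live in a shareable library.
class TrackFitter {
public:
    void fit(int nHits) { trkfit_(&nHits); }
    double pt() const { return trkcom_.pt; }
    int nHits() const { return trkcom_.nhits; }
};
```

Keeping all COMMON-block access behind such a wrapper class confines the statically-linked code to one library and leaves the C++ clients free to be built as shared libraries.<br />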
Chapter 19<br />
Analysis utilities<br />
19.1 Overview<br />
In this chapter we give pointers to some of the third party software libraries that we use within <strong>Athena</strong><br />
or recommend for use by algorithms implemented in <strong>Athena</strong>.<br />
19.2 CLHEP<br />
CLHEP (“Class Library for High Energy Physics”) is a set of HEP-specific foundation and utility<br />
classes such as random generators, physics vectors, geometry and linear algebra. It is structured in a set<br />
of packages independent of any external package. The documentation for CLHEP can be found on<br />
WWW at http://wwwinfo.cern.ch/asd/lhc++/clhep/index.html<br />
CLHEP is used extensively inside <strong>Athena</strong>, in the GaudiSvc and GaudiDbHCbEvent packages.<br />
19.3 HTL<br />
HTL ("Histogram Template Library") is used internally in <strong>Athena</strong> (GaudiSvc package) to provide<br />
histogramming functionality. It is accessed through its abstract AIDA compliant interfaces. <strong>Athena</strong> uses<br />
only the transient part of HTL. Histogram persistency is available with ROOT or HBOOK.<br />
The documentation on HTL is available at http://wwwinfo.cern.ch/asd/lhc++/HTL/index.html.<br />
Documentation on AIDA can be found at http://wwwinfo.cern.ch/asd/lhc++/AIDA/index.html.<br />
19.4 NAG C<br />
The NAG C library is a commercial mathematical library providing similar functionality to the<br />
FORTRAN mathlib (part of CERNLIB). It is organised into chapters, each chapter devoted to a branch<br />
of numerical or statistical computation. A full list of the functions is available at<br />
http://wwwinfo.cern.ch/asd/lhc++/Nag_C/html/doc.html<br />
NAG C is not explicitly used in the <strong>Athena</strong> framework, but developers are encouraged to use it for<br />
mathematical computations. Instructions for linking NAG C with <strong>Athena</strong> can be found at<br />
http://cern.ch/lhcb-comp/Components/html/nagC.html<br />
Some NAG C functions print error messages to stdout by default, without any information about the<br />
calling algorithm and without filtering on severity level. A facility is provided by <strong>Athena</strong> to redirect<br />
these messages to the <strong>Athena</strong> MessageSvc. This is documented at<br />
http://cern.ch/lhcb-comp/Components/html/GaudiNagC.html<br />
19.5 ROOT<br />
ROOT is used by <strong>Athena</strong> for I/O and as a persistency solution for event data, histograms and n-tuples.<br />
In addition, it can be used for interactive analysis, as discussed in Chapter 12. Information about ROOT<br />
can be found at http://root.cern.ch/<br />
Appendix A<br />
References<br />
[1] GAUDI User <strong>Guide</strong><br />
http://lhcb-comp.web.cern.ch/lhcb-comp/Components/Gaudi_v6/gug.pdf<br />
[2] GAUDI - Architecture Design Report [LHCb 98-064 COMP]<br />
[3] HepMC Reference<br />
[4] Python Reference<br />
[5] StoreGate Design Document<br />
Appendix B<br />
Options for standard components<br />
The following is a list of options that may be set for the standard components: e.g. data files for input,<br />
print-out level for the message service, etc. The options are listed in tabular form for each component<br />
along with the default value and a short explanation. The component name is given in the table caption<br />
thus: [ComponentName].<br />
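To show how options of this kind combine in practice, here is a hypothetical job options fragment; the library and algorithm names are purely illustrative, not taken from a real configuration:<br />

```
// jobOptions.txt (illustrative)
ApplicationMgr.DLLs   += { "MyComponents" };
ApplicationMgr.TopAlg  = { "MyAlgorithm/MyAlg1" };
ApplicationMgr.EvtSel  = "NONE";
ApplicationMgr.EvtMax  = 10;
MessageSvc.OutputLevel = 3;   // INFO
MyAlg1.OutputLevel     = 2;   // DEBUG for this algorithm only
```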
Table B.1 Standard Options for the Application manager [ApplicationMgr]<br />
Option name Default value Meaning<br />
EvtSel a "" Set to "NONE" if there is no event input b<br />
EvtMax -1 Maximum number of events to process. The default is -1 (infinite)<br />
unless EvtSel = "NONE"; in which case it is 10.<br />
TopAlg {} List of top level algorithms. Format:<br />
{"Type/Name"[, "Type/Name", ...]};<br />
ExtSvc {} List of external services names (not known to the ApplicationMgr,<br />
see section 13.2). Format:<br />
{"Type/Name"[, "Type/Name", ...]};<br />
OutStream {} Declares an output stream object for writing data to a persistent<br />
store, e.g. {“DstWriter”};<br />
See also Table B.10<br />
DLLs {} Search list of libraries for dynamic loading. Format:<br />
{"LibraryName"[, "LibraryName", ...]};<br />
HistogramPersistency "NONE" Histogram persistency mechanism. Available options are<br />
"HBOOK", "ROOT", "NONE"<br />
Runable "AppMgrRunable" Type of runable object to be created by Application manager<br />
EventLoop "GaudiEventLoopMgr" Type of event loop:<br />
"GaudiEventLoopMgr" is standard event loop<br />
"MinimalEventLoop" executes algorithms but does not read events<br />
The last two options define the source of the job options file and so they cannot be defined in the job options file<br />
itself. There are two ways to set these options: either using an environment variable called JOBOPTPATH, or<br />
setting the option on the application manager directly from the main program c . The coded option takes<br />
precedence.<br />
JobOptionsType “FILE” Type of file (FILE implies ascii)<br />
JobOptionsPath “jobOptions.txt” Path for job options source<br />
a. The "EvtSel" property of ApplicationMgr is replaced by the property "Input" of EventSelector. Only the value<br />
"NONE" is still valid.<br />
b. A basic DataObject object is created as event root ("/Event")<br />
c. The setting of properties from the main program is discussed in Chapter 4.<br />
Table B.2 Standard Options for the message service [MessageSvc]<br />
Option name Default value Meaning<br />
OutputLevel 0 Verboseness threshold level:<br />
0=NIL,1=VERBOSE, 2=DEBUG, 3=INFO,<br />
4=WARNING, 5=ERROR, 6=FATAL<br />
Format “% F%18W%S%7W%R%T %0W%M” Format string.<br />
Table B.3 Standard Options for all algorithms []<br />
Any algorithm derived from the Algorithm base class can override the global Algorithm options thus:<br />
Option name<br />
Default<br />
value<br />
Meaning<br />
OutputLevel 0 Message Service Verboseness threshold level:<br />
0=NIL,1=VERBOSE, 2=DEBUG, 3=INFO,4=WARNING, 5=ERROR, 6=FATAL<br />
Enable true If false, application manager skips execution of this algorithm<br />
ErrorMax 1 Job stops when this number of errors is reached<br />
ErrorCount 0 Current error count<br />
AuditInitialize false Enable/Disable auditing of Algorithm initialisation<br />
AuditExecute true Enable/Disable auditing of Algorithm execution<br />
AuditFinalize false Enable/Disable auditing of Algorithm finalisation<br />
Table B.4 Standard Options for all services []<br />
Any service derived from the Service base class can override the global MessageSvc.OutputLevel thus:<br />
Option<br />
name<br />
Default<br />
value<br />
Meaning<br />
OutputLevel 0 Message Service Verboseness threshold level:<br />
0=NIL,1=VERBOSE, 2=DEBUG, 3=INFO,4=WARNING, 5=ERROR, 6=FATAL<br />
Table B.5 Standard Options for all Tools []<br />
Any tool derived from the AlgTool base class can override the global MessageSvc.OutputLevel thus:<br />
Option<br />
name<br />
Default<br />
value<br />
Meaning<br />
OutputLevel 0 Message Service Verboseness threshold level:<br />
0=NIL,1=VERBOSE, 2=DEBUG, 3=INFO,4=WARNING, 5=ERROR, 6=FATAL<br />
Table B.6 Standard Options for all Associators []<br />
Option name Default value Meaning<br />
FollowLinks true Instruct the associator to follow the links instead of using cached information<br />
DataLocation "" Location where to get association information in the data store<br />
Table B.7 Standard Options for Auditor service [AuditorSvc]<br />
Option name<br />
Default<br />
value<br />
Meaning<br />
Auditors {}; List of Auditors to be loaded and to be used.<br />
See section 13.7 for list of possible auditors<br />
Table B.8 Standard Options for all Auditors []<br />
Any Auditor derived from the Auditor base class can override the global Auditor options thus:<br />
Option name<br />
Default<br />
value<br />
Meaning<br />
OutputLevel 0 Message Service Verboseness threshold level:<br />
0=NIL,1=VERBOSE, 2=DEBUG, 3=INFO,4=WARNING, 5=ERROR, 6=FATAL<br />
Enable true If false, application manager skips execution of the auditor<br />
Table B.9 Options of Algorithms in GaudiAlg package<br />
Algorithm name Option Name Default value Meaning<br />
EventCounter Frequency 1; Frequency with which number of events<br />
should be reported<br />
Prescaler PercentPass 100.0; Percentage of events that should be passed<br />
Sequencer Members Names of members of the sequence<br />
Sequencer StopOverride false; If true, do not stop sequence if a filter fails<br />
Table B.10 Options available for output streams (e.g. DstWriter)<br />
Output stream objects are used for writing user created data into data files or databases. They are created and<br />
named by setting the option ApplicationMgr.OutStream. For each output stream the following options<br />
are available<br />
Option name Default value Meaning<br />
ItemList {} The list of data objects to be written to this stream, e.g.<br />
{“/Event#1”, “/Event/MyTracks#1”};<br />
EvtDataSvc “EventDataSvc” The service from which to retrieve objects.<br />
Output {} Output data stream specification. Format:<br />
{“DATAFILE='mydst.root' TYP='ROOT'”};<br />
AcceptAlgs {} If any of these algorithms sets filterflag=true; the event is accepted<br />
RequireAlgs {} If any of these algorithms is not executed, the event is rejected<br />
VetoAlgs {} If any of these algorithms does not set filterflag = true; the event is rejected<br />
Table B.11 Standard Options for persistency services (e.g. EventPersistencySvc)<br />
Option name Default value Meaning<br />
CnvServices {} Conversion services to be used by the service to load or store<br />
persistent data (e.g. "RootEvtCnvSvc")<br />
Table B.12 Standard Options for conversion services (e.g. RootEvtCnvSvc)<br />
Option name Default value Meaning<br />
DbType "" Persistency technology (e.g. "ROOT")<br />
Table B.13 Standard Options for the histogram service [HistogramPersistencySvc]<br />
Option name Default value Meaning<br />
OutputFile "" Output file for histograms. No output if not defined<br />
Table B.14 Standard Options for the N-tuple service [NTupleSvc] (see section 12.2.3.1)<br />
Option name Default value Meaning<br />
Input {} Input file(s) for n-tuples. Format:<br />
{“FILE1 DATAFILE='tuple.hbook' OPT='OLD' TYP='HBOOK'”,<br />
[“FILE2 DATAFILE='tuple.root' OPT='OLD' TYP='ROOT'”,...]}<br />
Output {} Output file(s) for n-tuples. Format:<br />
{“FILE1 DATAFILE='tuple.hbook' OPT='NEW' TYP='HBOOK'”,<br />
[“FILE2 DATAFILE='tuple.root' OPT='NEW' TYP='ROOT'”,...]}<br />
StoreName "/NTUPLES" Name of top level entry<br />
Table B.15 Standard Options for the standard event selector [EventSelector]<br />
Option name Default value Meaning<br />
Input {} Input data stream specification.<br />
Format: "tag = ’value’ [tag = ’value’ ...]"<br />
Possible tags:<br />
DATAFILE = ’filename’, TYP = ’technology type’ OPT =<br />
’new’|’update’|’old’, SVC = ’CnvSvcName’, AUTH = ’login’<br />
FirstEvent 1 First event to process (allows skipping of preceding events)<br />
EvtMax All events on the ApplicationMgr.EvtSel input stream Maximum number of events to process<br />
PrintFreq 10 Frequency with which event number is reported<br />
JobInput "" String of input files (same format as ApplicationMgr.EvtSel), used<br />
only for pileup event selector(s)<br />
Table B.16 Event Tag Collection Selector [EventCollectionSelector]<br />
Option name Default value Meaning<br />
CnvService “EvtTupleSvc” Conversion service to be used<br />
Authentication "" Authentication to be used<br />
Container "B2PiPi" Container name<br />
Item "Address" Item name<br />
Criteria "" Selection criteria<br />
DB "" Database name<br />
DbType "" Database type<br />
Function "NTuple::Selector" Selection function<br />
Table B.17 Standard Options for Particle Property Service [ParticlePropertySvc]<br />
Option name Default value Meaning<br />
ParticlePropertiesFile “($LHCBDBASE)/cdf/particle.cdf” Particle properties database location<br />
Table B.18 Standard Options for Random Numbers Generator Service [RndmGenSvc]<br />
Option name Default value Meaning<br />
Engine “HepRndm::Engine” Random number generator engine<br />
Seeds<br />
Table of generator seeds<br />
Column 0 Number of columns in seed table -1<br />
Row 1 Number of rows in seed table -1<br />
Luxury 3 Luxury value for the generator<br />
UseTable false Switch to use seeds table<br />
Table B.19 Standard Options for Chrono and Stat Service [ChronoStatSvc]<br />
Option name Default value Meaning<br />
ChronoPrintOutTable true Global switch for profiling printout<br />
PrintUserTime true Switch to print User Time<br />
PrintSystemTime false Switch to print System Time<br />
PrintEllapsedTime false Switch to print Elapsed time (Note typo in option name!)<br />
ChronoDestinationCout false If true, printout goes to cout rather than MessageSvc<br />
ChronoPrintLevel 3 Print level for profiling (values as for MessageSvc)<br />
ChronoTableToBeOrdered true Switch to order printed table<br />
StatPrintOutTable true Global switch for statistics printout<br />
StatDestinationCout false If true, printout goes to cout rather than MessageSvc<br />
StatPrintLevel 3 Print level for profiling (values as for MessageSvc)<br />
StatTableToBeOrdered true Switch to order printed table<br />
Appendix C<br />
Design considerations<br />
C.1 Generalities<br />
In this appendix we look at how you might actually go about designing and implementing a real physics<br />
algorithm. It covers various aspects of the software development process, in<br />
particular:<br />
• The need for more “thinking before coding” when using an OO language like C++.<br />
• Emphasis on the specification and analysis of an algorithm in mathematical and natural<br />
language, rather than trying to force it into (unnatural?) object-oriented thinking.<br />
• The use of OO in the design phase, i.e. how to map the concepts identified in the analysis<br />
phase into data objects and algorithm objects.<br />
• The identification of classes which are of general use. These could be implemented by the<br />
computing group, thus saving you work!<br />
• The structuring of your code by defining private utility methods within concrete classes.<br />
When designing and implementing your code we suggest that your priorities should be as follows: (1)<br />
Correctness, (2) Clarity, (3) Efficiency and, very low on the scale, OO-ness.<br />
Tips about specific use of the C++ language can be found in the coding rules document [5] or<br />
specialized literature.<br />
C.2 Designing within the Framework<br />
A physicist designing a real physics algorithm does not start with a white sheet of paper. The fact that<br />
he or she is using a framework imposes some constraints on the possible or allowed designs. The<br />
framework defines some of the basic components of an application and their interfaces and therefore it<br />
also specifies the places where concrete physics algorithms and concrete data types will fit in with the<br />
rest of the program. The consequences of this are: on one hand, that the physicists designing the<br />
algorithms do not have complete freedom in the way algorithms may be implemented; but on the other<br />
hand, neither do they need to worry about some of the basic functionalities, such as getting end-user<br />
options, reporting messages, accessing event and detector data independently of the underlying storage<br />
technology, etc. In other words, the framework imposes some constraints in terms of interfaces to basic<br />
services, and the interfaces the algorithm itself is implementing towards the rest of the application. The<br />
definition of these interfaces establishes the so called “master walls” of the data processing application<br />
in which the concrete physics code will be deployed. Besides providing general services, this<br />
approach also guarantees that the later integration of many small<br />
algorithms into a much larger program, for example a reconstruction program, will be possible. In any case, there is still<br />
a lot of room for design creativity when developing physics code within the framework and this is what<br />
we want to illustrate in the next sections.<br />
To design a physics algorithm within the framework you need to know very clearly what it should do<br />
(the requirements). In particular you need to know the following:<br />
• What is the input data to the algorithm? What is the relationship of these data to other data<br />
(e.g. event or detector data)?<br />
• What new data is going to be produced by the algorithm?<br />
• What’s the purpose of the algorithm and how is it going to function? Document this in terms of<br />
mathematical expressions and plain English. 1<br />
• What does the algorithm need in terms of configuration parameters?<br />
• How can the algorithm be partitioned (structured) into smaller “algorithm chunks” that make<br />
it easier to develop (design, code, test) and maintain?<br />
• What data is passed between the different chunks? How do they communicate?<br />
• How do these chunks collaborate together to produce the desired final behaviour? Is there a<br />
controlling object? Are they self-organizing? Are they triggered by the existence of some<br />
data?<br />
• How is the execution of the algorithm and its performance monitored (messages, histograms,<br />
etc.)?<br />
• Who takes responsibility for bootstrapping the various algorithm chunks?<br />
For didactic purposes we would like to illustrate some of these design considerations using a<br />
hypothetical example. Imagine that we would like to design a tracking algorithm based on a<br />
Kalman-filter algorithm.<br />
1. Catalan is also acceptable.<br />
C.3 Analysis Phase<br />
As mentioned before we need to understand in detail what the algorithm is supposed to do before we<br />
start designing it, and of course before we start producing lines of C++ code. One long-established technique<br />
is to think in terms of data flow diagrams, as illustrated in Figure A.1, where we have tried to<br />
decompose the tracking algorithm into various processes or steps.<br />
[Figure A.1: Hypothetical decomposition of a tracking algorithm based on a Kalman filter, drawn as a data flow<br />
diagram. The processes shown are: find seeds; form/refine track segment; extrapolate to next station;<br />
select/discard proto-track; and produce tracks, exchanging seeds, proto-tracks, pad and station hits, and<br />
geometry data with the Event Data store and the Geometry store.]<br />
In the analysis phase we identify the data which is needed as input (event data, geometry data,<br />
configuration parameters, etc.) and the data which is produced as output. We also need to think about<br />
the intermediate data. Perhaps this data may need to be saved in the persistency store, to allow us to run<br />
part of the algorithm without always starting from the beginning.<br />
We need to understand precisely what each of the steps of the algorithm is supposed to do. In case a<br />
step becomes too complex we need to subdivide it into several smaller ones. Writing in plain English and using<br />
mathematics whenever possible is extremely useful. The more we understand about what the algorithm<br />
has to do the better we are prepared to implement it.<br />
C.4 Design Phase<br />
We now need to decompose our physics algorithm into one or more Algorithms (as framework<br />
components) and define the way in which they will collaborate. After that we need to specify the data<br />
types which will be needed by the various Algorithms and their relationships. Then, we need to<br />
understand if these new data types will be required to be stored in the persistency store and how they<br />
will map to the existing possibilities given by the object persistency technology. This is done by<br />
designing the appropriate set of Converters. Finally, we need to identify utility classes which will help<br />
to implement the various algorithm chunks.<br />
C.4.1 Defining Algorithms<br />
Most of the steps of the algorithm have been identified in the analysis phase. We now need to<br />
see whether those steps can be realized as framework Algorithms. Remember that, from the viewpoint of the<br />
framework, an Algorithm is basically a quite simple interface (initialize, execute, finalize) with a few<br />
facilities to access the basic services. In the case of our hypothetical algorithm we could decide to have<br />
a “master” Algorithm which will orchestrate the work of a number of sub-Algorithms. This master<br />
Algorithm will also be in charge of bootstrapping them. Then, we could have an Algorithm in charge<br />
of finding the tracking seeds, plus a set of others, each associated with a different tracking station and in<br />
charge of propagating a proto-track to the next station and deciding whether the proto-track needs to be<br />
kept or not. Finally, we could introduce another Algorithm in charge of producing the final tracks from<br />
the surviving proto-tracks.<br />
In this type of algorithm it is perhaps interesting to delegate parts of the calculations (extrapolations,<br />
etc.) to more sophisticated “hits” than the unintelligent original ones. This could be done by<br />
instantiating, for each event, new data types (clever hits) holding references to the original hits. This<br />
would require another Algorithm whose role is to prepare these new data objects; see<br />
Figure A.2.<br />
The master Algorithm (TrackingAlg) is in charge of setting up the other algorithms and scheduling their<br />
execution. It is the only one that has a global view but it does not need to know the details of how the<br />
different parts of the algorithm have been implemented. The application manager of the framework<br />
only interacts with the master algorithm and does not need to know that in fact the tracking algorithm is<br />
implemented by a collaboration of Algorithms.<br />
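The master/sub-Algorithm collaboration just described can be sketched in a few dozen lines. This is a deliberately simplified stand-in for the framework's Algorithm interface, not the real Athena base class, and the seed and track counts are invented for illustration:<br />

```cpp
// Greatly simplified stand-in for the framework Algorithm interface
// (initialize/execute/finalize); all class names are hypothetical.
struct Algorithm {
    virtual ~Algorithm() = default;
    virtual bool initialize() { return true; }
    virtual bool execute() = 0;
    virtual bool finalize() { return true; }
};

// Sub-Algorithm: finds the tracking seeds.
struct SeedFinder : Algorithm {
    int seedsFound = 0;
    bool execute() override { seedsFound = 3; return true; }  // pretend result
};

// Sub-Algorithm: turns surviving proto-tracks into final tracks.
struct Tracker : Algorithm {
    const SeedFinder& seeds;
    int tracksMade = 0;
    explicit Tracker(const SeedFinder& s) : seeds(s) {}
    bool execute() override { tracksMade = seeds.seedsFound; return true; }
};

// Master Algorithm: bootstraps the sub-Algorithms and schedules their
// execution. The application manager interacts only with this one and
// never sees the internal collaboration.
struct TrackingAlg : Algorithm {
    SeedFinder finder;
    Tracker tracker{finder};
    bool initialize() override {
        return finder.initialize() && tracker.initialize();
    }
    bool execute() override {
        return finder.execute() && tracker.execute();
    }
    bool finalize() override {
        return finder.finalize() && tracker.finalize();
    }
};
```

The key design point is that only TrackingAlg has the global view: the sub-Algorithms can be developed and tested independently, and the decomposition can change without the application manager noticing.<br />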
C.4.2 Defining Data Objects<br />
The input, output and intermediate data objects need to be specified. Typically, the input and output are<br />
specified in a more general, algorithm-independent way and are basically pure data objects. This is
Athena<br />
Appendix C Design considerations Version/Issue: 2.0.0<br />
[Figure A.2 diagram: the master TrackingAlg orchestrates the HitPreprocessor, SeedFinder, StationProcessor and Tracker Algorithms, which exchange HitSet, nHitSet, ProtoTrackSet and TrackSet objects (containing Hit, nHit, PTrack and Track objects) through the Event Data Store; in the class diagram the Algorithms derive from Algorithm/IAlgorithm and the data types from DataObject.]<br />
Figure A.2 Object diagram (a) and class diagram (b) showing how the complete example tracking algorithm could be<br />
decomposed into a set of specific algorithms that collaborate to perform the complete task.
because they can be used by a range of different algorithms. We could have various types of tracking<br />
algorithm, all using the same data as input and producing similar data as output. By contrast, the<br />
intermediate data types can be designed to be very algorithm dependent.<br />
The way we have chosen to communicate between the different Algorithms which constitute our<br />
physics algorithm is by using the transient event data store. This allows us to have low coupling<br />
between them, but other ways could be envisaged. For instance, we could implement specific methods<br />
in the algorithms and allow other “friend” algorithms to use them directly.<br />
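A toy stand-in for the transient store illustrates the decoupling: the producing and consuming Algorithms agree only on a key and a type, never on each other. This EventStore class and its record/retrieve methods are hypothetical, not the framework's IDataProviderSvc interface:<br />

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Minimal sketch of a transient event data store: type-erased values
// registered and looked up by string key.
class EventStore {
public:
    template <class T>
    void record(const std::string& key, T obj) {
        m_store[key] = std::make_shared<Holder<T>>(std::move(obj));
    }
    // Returns nullptr if the key is absent or registered with another type.
    template <class T>
    const T* retrieve(const std::string& key) const {
        auto it = m_store.find(key);
        if (it == m_store.end()) return nullptr;
        auto* h = dynamic_cast<Holder<T>*>(it->second.get());
        return h ? &h->value : nullptr;
    }
private:
    struct HolderBase { virtual ~HolderBase() = default; };
    template <class T>
    struct Holder : HolderBase {
        explicit Holder(T v) : value(std::move(v)) {}
        T value;
    };
    std::map<std::string, std::shared_ptr<HolderBase>> m_store;
};
```

One Algorithm records under an agreed path, another retrieves; neither holds a pointer to the other, which is the low coupling the text describes.<br />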
Concerning the relationships between data objects, it is strongly discouraged to have links from the<br />
input data objects to the newly produced ones (i.e. links from hits to tracks). In the other direction this<br />
should not be a problem (i.e. from tracks to their constituent hits).<br />
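The recommended link direction can be expressed directly in the data types; a sketch with hypothetical Hit and Track classes:<br />

```cpp
#include <cassert>
#include <vector>

// Hypothetical input object: deliberately carries no link to any Track.
struct Hit { double x, y; };

// The produced object may point back at its constituents...
struct Track {
    std::vector<const Hit*> hits;  // track -> hits: allowed
    // ...but Hit has no Track member: hits -> tracks is discouraged.
};

Track buildTrack(const std::vector<Hit>& hits) {
    Track t;
    for (const Hit& h : hits) t.hits.push_back(&h);
    return t;
}
```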
For data types that we would like to save permanently we need to implement a specific Converter. One<br />
converter is required for each type of data and each kind of persistency technology that we wish to use.<br />
This is not the case for the data types that are used as intermediate data, since these data are completely<br />
transient.<br />
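The one-converter-per-(type, technology) rule amounts to a lookup table keyed by that pair. The sketch below is hypothetical; the real machinery uses class identifiers (CLIDs) and converter factories rather than this toy registry:<br />

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <utility>

// Hypothetical stand-ins for the framework's class IDs and converters.
using ClassID     = int;
using ConverterFn = std::function<std::string(const void*)>;  // object -> persistent form

class ConverterRegistry {
public:
    // One converter registered per (data type, persistency technology) pair.
    void add(ClassID clid, const std::string& technology, ConverterFn fn) {
        m_table[{clid, technology}] = std::move(fn);
    }
    const ConverterFn* find(ClassID clid, const std::string& technology) const {
        auto it = m_table.find({clid, technology});
        return it == m_table.end() ? nullptr : &it->second;
    }
private:
    std::map<std::pair<ClassID, std::string>, ConverterFn> m_table;
};
```

Purely transient intermediate types simply never appear in the table: no converter, no persistency.<br />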
C.4.3 Mathematics and other utilities<br />
It is clear that to implement any algorithm we will need the help of a series of utility classes. Some of<br />
these classes are very generic and can be found in common class libraries, for example the<br />
Standard Template Library. Other utilities will be more high-energy-physics specific, especially in areas<br />
such as fitting, error treatment, etc. We envisage making as much use of these kinds of utility classes as<br />
possible.<br />
Some algorithms, or parts of algorithms, could be designed in a way that allows them to be reused in other<br />
similar physics algorithms. For example, fitting or clustering algorithms could perhaps be designed in a<br />
generic way such that they can be used in various concrete algorithms. Design time is the moment to<br />
identify this kind of reusable component, or to identify existing ones that could be used instead, and to<br />
adapt the design to make their use possible.<br />
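As an illustration of such a generic, reusable piece, a least-squares straight-line fit can be written over any point type via accessor functions. This fitLine template is a hypothetical sketch, not an existing Athena utility:<br />

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Generic least-squares line fit y = slope*x + intercept. Works for any point
// type: the caller supplies accessors, so each concrete tracking algorithm can
// reuse it on its own hit or cluster classes. Requires at least two points
// with distinct x values.
template <class Point, class GetX, class GetY>
std::pair<double, double> fitLine(const std::vector<Point>& pts, GetX getx, GetY gety) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const double n = static_cast<double>(pts.size());
    for (const Point& p : pts) {
        const double x = getx(p), y = gety(p);
        sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    const double slope     = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    const double intercept = (sy - slope * sx) / n;
    return {slope, intercept};
}
```

The fit itself knows nothing about hits, stations or tracks; only the accessors do, which is what makes the component reusable across concrete algorithms.<br />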
Appendix D<br />
Installation guide<br />
19.6 Overview<br />
Installation of Athena at a remote site involves the following major components.<br />
1. Installation of external packages. These should generally be very stable, so that this<br />
procedure need not be repeated very frequently.<br />
2. Installation of the GAUDI package. This is currently modified frequently, and typically<br />
a new version is necessary for each Athena release. It is hoped that GAUDI will eventually<br />
reach a level of functionality where installation of a new version is not necessary for<br />
every Athena release.<br />
3. Installation of the ATLAS release.<br />
19.6.1 Acknowledgements<br />
The following abbreviated guide is a summary of more detailed information that is available from the<br />
following sources:<br />
1. Kristo Karr (Affiliation?) has put together a set of web pages that details installation<br />
instructions for various ATLAS releases accessible from the Trigger Web page at:<br />
URL<br />
The most recent set of such instructions (for ATLAS release 1.2.2) are located at:<br />
http://atddoc.cern.ch/Atlas/EventFilter/documents/importing_athena_122.html<br />
2. Iwona Sajda (LBNL) has compiled a web page based on her experience installing<br />
ATLAS software at LBNL. The URL is:<br />
http://www-rnc.lbl.gov/~sakrejda/ATLASREADME.txt<br />
19.6.2 External Packages<br />
Athena 1.3.2 depends upon the external packages and versions shown in Listing 19.1:<br />
Listing 19.1 External packages and versions<br />
CLHEP 1.6.0.0 [1]<br />
HTL 1.3.0.1<br />
CERNLIB 2000<br />
ROOT 2.25 [2]<br />
Qt 2.2.1 [3]<br />
CMT v1r7 [4]<br />
GAUDI 0.7.0 [5]<br />
Notes:<br />
1. CLHEP and HTL are both part of the LHC++ set of packages.<br />
2. The dependencies on ROOT come from the ROOT generic converters as described in Chapter<br />
6, and from the ROOT histogram and ntuple persistency service available from GAUDI.<br />
3. The Qt package is required by the ATLAS graphics environment.<br />
4. CMT notes here.<br />
5. This Gaudi version convention has been adopted by ATLAS. This version corresponds to the<br />
official GAUDI release v7, with some patches. Note that the other packages must be installed<br />
before attempting to build the GAUDI package.<br />
19.6.3 Installing CLHEP<br />
This section is in preparation.<br />
19.6.4 Installing HTL<br />
This section is in preparation.<br />
19.6.5 Installing CERNLIB<br />
This section is in preparation.
19.6.6 Installing ROOT<br />
This section is in preparation.<br />
19.6.7 Installing Qt<br />
This section is in preparation.<br />
19.6.8 Installing CMT<br />
This section is in preparation.<br />
19.6.9 Installing GAUDI<br />
The GAUDI 0.7.0 release is located at:<br />
/afs/cern.ch/atlas/project/Gaudi/install/0.7.0.tar.gz<br />
This should be transferred to the desired location on the remote machine.<br />
19.6.10 Installing the ATLAS release<br />
This section is in preparation.<br />
A<br />
AIDA 169<br />
see Interfaces<br />
Algorithm 14<br />
Base class 15, 23<br />
branches 30<br />
Concrete 23, 26<br />
Constructor 25, 26<br />
Declaring properties 25<br />
Execution 27<br />
Filters 30<br />
Finalisation 29<br />
Initialisation 25, 27, 29<br />
Nested 29<br />
sequences 30<br />
Setting properties 25<br />
Algorithms<br />
EventCounter 31, 98<br />
Prescaler 31<br />
Sequencer 31<br />
Application Manager 16<br />
Architecture 13<br />
Associators 131<br />
Example 134<br />
B<br />
Branches 30<br />
C<br />
Casting<br />
of DataObjects 45<br />
Checklist<br />
for implementing algorithms 29<br />
Class<br />
identifier (CLID) 51<br />
CLHEP 169<br />
CMT<br />
Building libraries with 166<br />
Component 13, 162<br />
libraries 162, 165<br />
Component libraries 149, 150<br />
component libraries 153<br />
components 150<br />
ContainedObject 47<br />
Converters 137<br />
D<br />
Data Store 43<br />
finding objects in 45, 51<br />
Histograms 77<br />
registering objects into 46<br />
DataObject 15, 44, 45, 47<br />
ownership 46<br />
DECLARE_ALGORITHM 151, 164<br />
DECLARE_FACTORY_ENTRIES 151, 164<br />
E<br />
endreq, MsgStream manipulator 106<br />
Event Collections 89<br />
Filling 90<br />
Reading Events with 91<br />
Writing 89<br />
EventCounter algorithm. See Algorithms<br />
Examples<br />
Associator 134<br />
Exception<br />
when casting 45<br />
F<br />
Factory<br />
for a concrete algorithm 25<br />
Filters 30<br />
FORTRAN 14, 154<br />
and shareable libraries 168<br />
G<br />
getFactoryEntries 151, 163<br />
Guidelines<br />
for software packaging 159<br />
H<br />
HBOOK<br />
Constraints on histograms 79<br />
For histogram persistency 80<br />
Limitations on N-tuples 84, 89<br />
Histograms<br />
Persistency service 80<br />
HTL 169<br />
I<br />
Inheritance 23<br />
Interactive Analysis<br />
of N-tuples 92<br />
Interface 13<br />
and multiple inheritance 17<br />
Identifier 17<br />
In C++ 17<br />
Interface ID 161<br />
Interfaces<br />
AIDA 77, 169<br />
IAlgorithm 17, 23, 25, 27<br />
IAlgTool 126<br />
IAssociator 132<br />
IAuditor 114<br />
IAxis 77<br />
IConversionSvc 138<br />
IConverter 139<br />
IDataManager 16<br />
IDataProviderSvc 16, 44, 77, 83<br />
IHistogram 77<br />
IHistogram1D 77<br />
IHistogram2D 77<br />
IHistogramSvc 16, 44, 77<br />
IIncidentListener 118<br />
IMessageSvc 17<br />
in Gaudi 160<br />
INTupleSvc 44, 83<br />
INtupleSvc 16<br />
IOpaqueAddress 139<br />
IParticlePropertySvc 107<br />
IProperty 17, 23<br />
ISvcLocator 25<br />
IToolSvc 130<br />
L<br />
LD_LIBRARY_PATH 154<br />
Libraries<br />
Building 166<br />
Building, with CMT 166<br />
Component 162, 165<br />
containing FORTRAN code 168<br />
Linker 165<br />
Linker Libraries 153<br />
LOAD_FACTORY_ENTRIES 153, 165<br />
M<br />
Message service 104<br />
Monitoring<br />
of algorithm calls, with the Auditor service 113<br />
statistical, using the Chrono&stat service 111<br />
Monte Carlo truth<br />
navigation using Associators 131<br />
N<br />
NAG C 170<br />
N-tuples 83<br />
Booking and declaring tags 85<br />
filling 85<br />
Interactive Analysis of 92<br />
Limitations imposed by HBOOK 84, 89<br />
persistency 87<br />
reading 86<br />
Service 83<br />
O<br />
Object Container 47<br />
and STL 47<br />
ObjectList 47<br />
ObjectVector 47<br />
P<br />
Package 157<br />
Internal layout 158<br />
structure of LHCb software 157<br />
Packages<br />
Dependencies of Gaudi 157<br />
Guidelines 159<br />
PAW<br />
for N-Tuple analysis 92<br />
Persistency<br />
of histograms 80<br />
of N-tuples 87<br />
Persistent store<br />
saving data to 53<br />
Prescaler algorithm. See Algorithms<br />
Profiling<br />
of execution time, using the Chrono&Stat service 110<br />
of execution time, with the Auditor service 113<br />
of memory usage, with the Auditor service 113<br />
R<br />
Random numbers<br />
generating 115<br />
Service 115<br />
Retrieval 130
ROOT 170<br />
for histogram persistency 80<br />
for N-Tuple analysis 93<br />
S<br />
Saving data 53<br />
Sequencer algorithm. See Algorithms<br />
Sequences 30<br />
Services 15<br />
Auditor Service 113<br />
Chrono&Stat service 110<br />
Histogram Persistency Services 80<br />
Incident service 118<br />
Job Options service 97<br />
Message Service 104<br />
N-tuples Service 83<br />
Particle Properties Service 107<br />
Random numbers service 115<br />
requesting and accessing 95<br />
ToolSvc 123, 129<br />
vs. Tools 123<br />
SmartDataLocator 51<br />
SmartDataPtr 51<br />
SmartRef 52<br />
StatusCode 27<br />
T<br />
Tools 123<br />
Associators 131<br />
provided in Gaudi 131<br />
vs. Services 123<br />
ToolSvc, see Services<br />