IBM® Cognos® 8 Planning - Contributor Administration Guide


IBM® Cognos® 8 Planning

CONTRIBUTOR

ADMINISTRATION GUIDE


Product Information

This document applies to IBM® Cognos® 8 Planning Version 8.4 and may also apply to subsequent releases. To check for newer versions of this document, visit the IBM Cognos Resource Center (http://www.ibm.com/software/data/support/cognos_crc.html).

Copyright

Copyright © 2008 Cognos ULC (formerly Cognos Incorporated). Cognos ULC is an IBM Company.

Portions of Cognos ULC software products are protected by one or more of the following U.S. Patents: 6,609,123 B1; 6,611,838 B1; 6,662,188 B1; 6,728,697 B2; 6,741,982 B2; 6,763,520 B1; 6,768,995 B2; 6,782,378 B2; 6,847,973 B2; 6,907,428 B2; 6,853,375 B2; 6,986,135 B2; 6,995,768 B2; 7,062,479 B2; 7,072,822 B2; 7,111,007 B2; 7,130,822 B1; 7,155,398 B2; 7,171,425 B2; 7,185,016 B1; 7,213,199 B2; 7,243,106 B2; 7,257,612 B2; 7,275,211 B2; 7,281,047 B2; 7,293,008 B2; 7,296,040 B2; 7,318,058 B2; 7,325,003 B2; 7,333,995 B2.

Cognos and the Cognos logo are trademarks of Cognos ULC (formerly Cognos Incorporated) in the United States and/or other countries. IBM and the IBM logo are trademarks of International Business Machines Corporation in the United States, other countries, or both. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.

While every attempt has been made to ensure that the information in this document is accurate and complete, some typographical errors or technical inaccuracies may exist. Cognos does not accept responsibility for any kind of loss resulting from the use of information contained in this document.

This document shows the publication date. The information contained in this document is subject to change without notice. Any improvements or changes to the information contained in this document will be documented in subsequent editions.

U.S. Government Restricted Rights. The software and accompanying materials are provided with Restricted Rights. Use, duplication, or disclosure by the Government is subject to the restrictions in subparagraph (C)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.227-7013, or subparagraphs (C)(1) and (2) of the Commercial Computer Software - Restricted Rights at 48CFR52.227, as applicable. The Contractor is Cognos Corporation, 15 Wayside Road, Burlington, MA 01803.

This document contains proprietary information of Cognos. All rights are reserved. No part of this document may be copied, photocopied, reproduced, stored in a retrieval system, transmitted in any form or by any means, or translated into another language without the prior written consent of Cognos.


Table of Contents

Introduction 13
Chapter 1: What's New? 17
New Features in Version 8.4 17
Load A-Tables Using Administration Links 17
New Contributor Web Client 17
Improved Performance During Incremental Publish 17
Additional Support for Built-in Functions 17
IBM Cognos 8 Business Viewpoint Client Integration 17
New Features in Version 8.3 18
Extended Language Support 18
Microsoft Vista Compliance 18
Microsoft Excel 2007 18
Select Folders in IBM Cognos Connection 18
Select a Framework Manager Package for an Administration Link 19
Chapter 2: IBM Cognos 8 Planning - Contributor 21
Extending the Functionality of the Contributor Administration Console and the Classic Contributor Web Client 21
Using Contributor Applications 21
Cubes 21
Dimensions 21
e.Lists 21
Access Tables and Saved Selections 22
D-Links 22
Managing Contributor Applications 22
Multiple Administrators 22
Moving Data 22
System Links 23
Automating Contributor Tasks Using Macros 23
Publishing Data 23
Creating a Contributor Application 23
Developing the Plan in Analyst 24
Designing the e.List 24
Assigning Rights 24
Creating the Application 24
Creating the Production Application 25
Running Jobs 25
Testing the Web Site 25
The Administrator 25
The Planner 26
The Reviewer 26
The Toolbar 27

Chapter 3: Security 29
Cognos Namespace 29
Authentication Providers 29
Deleting or Restoring Unconfigured Namespaces 31
Users, Groups, and Roles 31
Users 32
Groups and Roles 32
Setting up Security for an IBM Cognos 8 Planning Installation 34
Configure IBM Cognos 8 to Use an Authentication Provider 35
Add or Remove Members From Planning Rights Administrators and Planning Contributor Users Roles 36
Enabling Planning Roles in IBM Cognos 8 37
Restricting Access to the Everyone Group 37
Recommendation - Creating Additional Roles or Groups for Contributor 37
Configuring Access to the Contributor Administration Console 38
Granting Access Rights to Administrators 39
Access Rights for Macros 42
Assign Scheduler Credentials 44
Chapter 4: Configuring the Administration Console 47
Creating Planning Tables 47
Add a Datastore Server 48
Datastore Server Information 49
Jobs 49
Types of Jobs 49
Run Order for Jobs 50
Actions That Cause Jobs to Run 51
Securing Jobs 51
Managing Jobs 52
Reconciliation 54
Deleting Jobs 55
Managing Job Servers 55
Manage a Job Server Cluster 56
Add a Job Server and Change its Content Store 56
Add Applications and Other Objects to a Job Server Cluster 57
Add Objects to a Job Server 58
Remove Job Servers 59
Monitor Application Folders 59
Creating, Adding and Upgrading Applications 60
Remove Datastore Definitions and Contributor Applications 60
Adding an Existing Application to a Datastore Server 60
The Monitoring Console 61
Managing Sessions 61
Sending Email 63
Chapter 5: Creating a Contributor Application 65
Creating a Contributor Application 65
Application Folders 68
Model Details 68
Running the Script.sql file (DBA Only) 69
Application Information 70
Configuring the Contributor Application 70
Configure the Web Client 71
Set the Cube Order for an Application 71
Set the Order of Axes 72
Change Grid Options 72
Change Application Options 74
Create Planner-Only Cubes 78
Creating General Messages and Cube Instructions 78
Maintaining the Contributor Application 79
Save Application XML for Support 79
View Application Details 79
Admin Options 79
Select Dimensions for Publish 82
Set Go to Production Options 82
Datastore Options 84
Chapter 6: The Contributor Web Client Application 87
The Contributor Web Site 87
The Tree 87
The Table 87
Set Web Site Language 88
Access Contributor Applications 88
Configure Classic Contributor Web Client Security Settings 89
Linking to Earlier Versions of Contributor Applications 89
Working Offline 89
The Offline Store 90
Independent Web Applications 90
Contributor for Microsoft Excel 91
Chapter 7: Managing User Access to Applications 93
The e.List 93
Multiple Owners of e.List Items 95
Import e.List and Rights 96
Export the e.List and Rights 101
Managing the e.List 101
Rights 107
Actions Allowed for Review e.List Items 108
Actions Allowed for Contribution e.List items 109
Rights File Formats 110
Modify Rights Manually 111
Validating Users, Groups and Roles in the Application Model and Database 113
Chapter 8: Managing User Access to Data 115
Saved Selections 115
Editing Saved Selections 116
Access Tables 119
Access Tables and Cubes 119
Rules for Access Tables 120

Creating Access Tables 123
Large Access Tables 129
Multiple Access Tables 135
Changes to Access Tables That Cause a Reconcile Job to Be Run 136
Access Tables and Import Data 137
Access Levels and Contributor Data Entry 137
Force to Zero 137
Reviewer Access Levels 137
Cut-down Models 138
When Does the Cut-down Models Process Happen? 138
Limitations 138
Cut-down Model Options 139
Cut-down Models and Translation 139
Cut-down Models and Access Tables 140
Restrictions to Cutting Down Dimensions 140
Estimating Model and Data Block Size 141
Cut-down Model Example 142
Chapter 9: Managing Data 143
Understanding Administration, System, and Local Links 144
Using Links to Move Data Between Cubes and Applications 145
Using Links in Model Design 146
Administration Links 147
Create an Administration Link 149
Map Dimensions Manually 154
Use an Allocation Table in Administration Links 155
Validate Administration Links 156
Synchronize Administration Links 156
View Items in a Dimension 157
Remove a Dimension 157
Running Administration Links 157
Exporting and Importing Administration Links 157
Tuning Administration Links 158
System Links 162
Create a System Link 163
Importing Data from IBM Cognos 8 Data Sources 164
Create a Framework Manager Project and Import Metadata 165
Create and Publish the IBM Cognos Package 166
Working with SAP BW Data 167
Recommendation - Query Items 168
Recommendation - Hierarchy 168
Recommendation - Hiding the Dimension Key Field 169
Working with Packages 169
Troubleshooting Detailed Fact Query Subject Memory Usage 169
Deploying the Planning Environment and Viewing the Status of Deployments 170
Export a Model 170
Import a Model 170
View the Status of Existing Deployments 172
Troubleshooting Out of Memory Exception When Exporting During a Deployment 173
Importing Text Files into Cubes 173
Creating the Source File 173
Select the Cube and Text File to Load into the Cube 175
Load the Data into the Datastore 175
Prepare the Import Data Blocks 176
Chapter 10: Synchronizing an Application 179
Changes that Result in Loss of Data 179
Synchronizing an Application 180
Generate Scripts 180
How to Avoid Loss of Data 180
Example Synchronization 181
Advanced - Model Changes 182
Chapter 11: Translating Applications into Different Languages 185
Assigning a Language Version to a User 185
Translate the Application 186
Translate Strings Using the Administration Console 187
Exporting and Importing Files for Translation 189
Export Files for Translation 190
Import Translated Files 190
Search for Strings in the Content Language or Product Language Tab 191
Translating Help 191
System Locale and Code Pages 191
About Fonts 192
Chapter 12: Automating Tasks Using Macros 193
Common Tasks to Automate 193
Creating a Macro 194
Create a New Macro 194
Create a Macro Step 195
Transferring Macros and Macro Steps 198
Job Servers (Macro Steps) 199
Development (Macro Steps) 202
Production (Macro Steps) 211
Administrator Links (Macro Steps) 219
Macros (Macro Steps) 220
Session (Macro Steps) 223
Running a Macro 224
Run a Macro from Administration Console 225
Run a Macro from IBM Cognos Connection 225
Run a Macro from an IBM Cognos 8 Event 226
Run a Macro using Macro Executor 227
Run a Macro using Command Line 228
Run a Macro using Batch File 228
Troubleshooting Macros 229
Unable to Run Contributor Macros Using a Batch File 229
Chapter 13: Data Validations 231
Setting Up Data Validation 232

The Impact of Aggregation on Validation Rules 233
Define a Validation Rule 237
Define or Edit a Rule Set 239
Edit a Validation Rule 240
Associate Rule Sets to e.List Items 240
Chapter 14: The Go to Production Process 243
Planning Packages 244
Reconciliation 245
The Production Application 245
Model Definition 245
Data Block 245
Production Tasks 246
Cut-down Models and Multiple Languages 246
The Development Application 247
Development Model Definition 247
Import Data Blocks 247
Run Go to Production 248
Go to Production Options Window 248
Show Changes Window 249
Model Changes Window 250
Import Data Details Tab 254
Invalid Owners and Editors Tab 254
e.List Items to be Reconciled Tab 256
Cut-down Models Window 256
Finish Window 256
Chapter 15: Publishing Data 259
The Publish Data Store Container 260
Access Rights Needed for Publishing 260
Publish Scripts 260
Selecting e.List Items to Be Published 261
Reporting Directly From Publish Tables 261
Model Changes that Impact the Publish Tables 262
Data Dimensions for Publish 263
Selecting a Dimension for Publish for Reporting 264
The Table-Only Publish Layout 265
Database Object Names 266
Items Tables for the Table-only Layout 267
Hierarchy Tables for the Table-only Layout 267
Export Tables For the Table-only Layout 270
Annotations Tables for the Table-only Layout 271
Attached Document Tables for the Table-only Layout 272
Metadata Tables 274
Common Tables 276
Job Tables 276
The P_OBJECTLOCK Table 277
Create a Table-only Publish Layout 277
Options for Table-only Publish Layout 278
Create an Incremental Publish 279
The View Publish Layout 279
Database Object Names 280
Items Tables for the View Layout 281
Hierarchy Tables for the View Layout 281
Export Tables for the View Layout 282
Annotation Tables for the View Layout 282
Views 283
Create a View Layout 283
Options for View Layout 284
Create a Custom Publish Container 285
Configure the Datastore Connection 286
Remove Unused Publish Containers 288
Chapter 16: Commentary 289
User and Audit Annotations 289
Delete Commentary 289
Delete Commentary - e.List items 289
Deleting Commentary 290
Configuring the Attached Documents Properties 290
Publishing Attached Documents 291
Copy Commentary 291
Breakback Considerations when Moving Commentary 291
Chapter 17: Previewing the Production Workflow 293
Previewing e.List item Properties 293
Preview Properties - General 293
Preview Properties - Owners 294
Preview Properties - Editors 294
Preview Properties - Reviewers 294
Preview Properties - Rights 295
Workflow State Definition 295
Additional Workflow States 296
Workflow State Explained 296
Chapter 18: Using Contributor With Other IBM Cognos Products 299
Client and Admin Extensions 300
Classic Client Extensions 300
Admin Extensions 301
Integrating with IBM Cognos Business Intelligence Products 302
Using IBM Cognos 8 BI with Contributor Unpublished (Real-Time) Data 302
The Generate Framework Manager Model Admin Extension 306
Generate Transformer Model 309
Excel and Contributor 311
Design Considerations When Using Contributor for Excel 311
Print to Excel for Classic Contributor Web Client 312
Export for Excel for Classic Contributor Web Client 312
Financial Planning with IBM Cognos Performance Applications and Planning 312
Managing Contributor Master Dimensions with IBM Cognos 8 Business Viewpoint Client 314
Launching Business Viewpoint Client from Contributor 314

Chapter 19: Example of Using IBM Cognos 8 Planning with Other IBM Cognos Products 315
Download and Deploy the Sample 315
Example of Integration with IBM Cognos 8 Business Intelligence 316
Run a Contributor Macro to Import Data 317
Create and Publish a Framework Manager Package 317
Create a Report 319
Create an Event Studio Agent 327
Chapter 20: Upgrading IBM Cognos 8 Planning - Contributor 329
Upgrade the Planning Administration Domain 329
Upgrade Contributor Applications 332
Upgrade Security 335
Accessing Contributor Applications 336
Chapter 21: Analyst Model Design Considerations 337
Designing an Analyst Model for Contributor 337
Analyst Library Guidelines 337
D-Cube Restrictions 338
D-Links 339
Dimensions 341
Creating Applications with Very Large Cell Counts 345
Break-Back Differences Between Analyst and Contributor 346
Analyst<>Contributor Links 347
Set Up a Link Between Analyst and Contributor and Between Contributor Applications 348
Analyst>Contributor D-Links 348
Contributor>Analyst Links 349
Contributor>Contributor links 349
Copying Analyst<>Contributor Links 350
Links and Memory Usage 351
Update a Link from a Computer That Cannot Access the Original Datastore 351
Multiple D-Links Using the @DLinkExecuteList Macro 352
Run D-Links While Making Model Changes 352
Effects of Fill and Substitute Mode on Untargeted Cells 353
Effect of Access Tables in Contributor 354
Appendix A: DB2 UDB Supplementary Information 355
The Contributor Datastore 355
Requirements for the DB2 UDB Database Environment 355
Background Information For DB2 UDB DBAs 356
Security and Privileges 356
Naming Conventions 357
Metadata 357
Backup 357
Standards 357
Preventing Lock Escalation 358
Large Object Types 358
Job Architecture 358
Importing and Reporting Data 359
Reporting Data: Understanding the Publish Job Process 359
Data Loading 360
Job Failure 360
Appendix B: Troubleshooting the Generate Framework Manager Model Extension 361
Unable To Connect to the Database While Using Oracle 361
Unable to Create Framework Manager Model 361
Unable to Retrieve Session's Namespace 362
Unable to Change Model Design Language 362
Appendix C: Limitations and Troubleshooting when Importing IBM Cognos Packages 363
Limitations for Importing IBM Cognos Packages 363
Troubleshooting Modeled Data Import 366
Viewing Generated Files 366
Using Error Messages to Troubleshoot 369
Techniques to Troubleshoot Problems with an Import 372
Appendix D: Customizing IBM Cognos 8 Planning - Contributor Help 375
Creating Cube Help 375
Detailed Cube Help 375
Using HTML Formatting 375
Using Images, Hypertext Links, and E-Mail Links in Contributor Applications 377
Appendix E: Error Handling 379
Error Logs and History Tracking 379
Application XML issues 380
Timeout Errors 380
History Tracking 380
Calculation Engine (JCE) error logs 382
General Error Logging 383
How Errors are Logged in the Administration Console 383
Using the LogFetcher Utility 385
Appendix F: Illegal Characters 387
Appendix G: Default Options 389
Grid Options 389
Application Options 389
XML Location and Filename 390
Admin Options 390
Go to Production Options 391
Go to Production Wizard Options 391
Publish Options - View Layout 392
Publish Options - Table Only Layout 392
e.List 393
Rights 393
Access Tables 393
Delete Commentary 394
Appendix H: Data Entry Input Limits 395
Limits For Text Formatted Cells 395
Limits for Numerical Cells 395
Glossary 397
Index 405


Introduction

This document is intended for use with the IBM Cognos 8 Planning - Contributor Administration Console. This guide describes how to use the Contributor Administration Console to create and manage Contributor applications.

IBM Cognos 8 Planning provides the ability to plan, budget, and forecast in a collaborative, secure manner. The major components are Analyst and Contributor.

IBM Cognos 8 Planning - Analyst

Analyst is a flexible tool used by financial specialists to define their business models. These models include the drivers and content required for planning, budgeting, and forecasting. The models can then be distributed to managers using the Web-based architecture of IBM Cognos 8 Planning - Contributor.

IBM Cognos 8 Planning - Contributor

Contributor streamlines data collection and workflow management. It eliminates the problems of errors, version control, and timeliness that are characteristic of a planning system based solely on spreadsheets. Users can submit information simultaneously through a simple Web or Microsoft Excel® interface. Using an intranet or secure Internet connection, users review only what they need to review and add data only where they are authorized.

For more information about using this product, visit the IBM Cognos Resource Center (http://www.ibm.com/software/data/support/cognos_crc.html).

Best Practices for IBM Cognos 8 Planning

The Cognos Innovation Center for Performance Management provides a forum and Performance Blueprints that you can use to discover new ideas and solutions for finance and performance management issues. Blueprints are predefined data, process, and policy models that incorporate best practice knowledge from customers and the Cognos Innovation Center. These Blueprints are free of charge to existing customers and to Platinum and Gold partners. For more information about the Cognos Innovation Center or the Performance Blueprints, visit http://www.cognos.com/innovationcenter.

Audience

To use this guide, you should have an understanding of IBM Cognos 8 Planning - Analyst. Some knowledge of security and database systems is also helpful.

Related Documentation

Our documentation includes user guides, getting started guides, new features guides, readmes, and other materials to meet the needs of our varied audience. The following documents contain related information and may be referred to in this document.


Note: For online users of this document, a Web page such as "The page cannot be found" may appear when clicking individual links in the following table. Documents are made available for your particular installation and translation configuration. If a link is unavailable, you can access the document on the IBM Cognos Resource Center (http://www.ibm.com/software/data/support/cognos_crc.html).

Document: Description

Analyst User Guide: Using IBM Cognos 8 Planning - Analyst

Contributor Web Client User Guide: Using the IBM Cognos 8 Planning - Contributor Web client

Contributor for Microsoft Excel® User Guide: Using IBM Cognos 8 Planning - Contributor for Microsoft Excel®

IBM Cognos Connection User Guide: Using IBM Cognos Connection to publish, find, manage, organize, and view IBM Cognos content, such as scorecards, reports, analyses, and agents

IBM Cognos 8 Administration and Security Guide: Managing servers, security, reports, and portal services; setting up the samples, customizing the user interface, and troubleshooting

Framework Manager User Guide: Creating and publishing models using Framework Manager

Guidelines for Modeling Metadata: Recommendations for modeling metadata to use in business reporting and analysis

Event Studio User Guide: Creating and managing agents that monitor data and perform tasks when the data meets predefined thresholds

Finding Information

Product documentation is available in online help from the Help menu or button in IBM Cognos products.

To find the most current product documentation, including all localized documentation and knowledge base materials, access the IBM Cognos Resource Center (http://www.ibm.com/software/data/support/cognos_crc.html).

You can also read PDF versions of the product readme files and installation guides directly from IBM Cognos product CDs.

Getting Help

For more information about using this product or for technical assistance, visit the IBM Cognos Resource Center (http://www.ibm.com/software/data/support/cognos_crc.html). This site provides information about support, professional services, and education.


Printing Copyright Material

You can print selected pages, a section, or the whole book. You are granted a non-exclusive, non-transferable license to use, copy, and reproduce the copyright materials, in printed or electronic format, solely for the purpose of operating, maintaining, and providing internal training on IBM Cognos software.



Chapter 1: What's New?

This section contains a list of new features for this release. It also contains a cumulative list of similar information for previous releases. It will help you plan your upgrade and application deployment strategies and the training requirements for your users.

For information about upgrading, see the IBM Cognos 8 Planning Installation and Configuration Guide.

For information about new features for this release, see the IBM Cognos 8 Planning New Features Guide.

For changes to previous versions, see New Features in Version 8.3.

To review an up-to-date list of environments supported by IBM Cognos products, such as operating systems, patches, browsers, Web servers, directory servers, database servers, and application servers, visit the IBM Cognos Resource Center (http://www.ibm.com/software/data/support/cognos_crc.html).

New Features in Version 8.4

Listed below are new features since the last release. Links to directly related topics are included.

Load A-Tables Using Administration Links

This release supports loading A-Tables into Contributor using Administration Links. For more information, see "Create an Administration Link" (p. 149).

New Contributor Web Client

This release provides a new Contributor Web Client. For more information, see the IBM Cognos 8 Planning - Contributor Web Client User Guide.

Improved Performance During Incremental Publish

This release includes changes to incremental publish that improve performance. For more information, see "Create an Incremental Publish" (p. 279).

Additional Support for Built-in Functions

This release includes more built-in functions that are available for use in Contributor. For more information, see "Supported BiFs" (p. 343).

IBM Cognos 8 Business Viewpoint Client Integration

You can launch the new Business Viewpoint Client from Contributor. With Business Viewpoint Client, you can nominate master dimensional data from Contributor into a Business Viewpoint Studio master repository, subscribe to Business Viewpoint Studio master dimensional data, and update the data to ensure it is synchronized between the two sources. From within Business Viewpoint Client, you can also open Business Viewpoint Studio.

For information about how to use Business Viewpoint Client with Contributor, see the IBM Cognos 8 Business Viewpoint Client User Guide. You can access it after you launch Business Viewpoint Client.

New Features in Version 8.3<br />

Listed below are new features in version 8.3. Links to directly-related topics are included.<br />

Extended Language Support<br />

This release provides support for Japanese and Swedish product strings. <strong>Contributor</strong> Web Client<br />

strings can be translated into these languages without the need to enter translation strings. Japanese<br />

and Swedish language content strings are also supported. For more information, see the following<br />

sections.<br />

● "Translating Applications into Different Languages" (p. 185)<br />

● "Set Web Site Language" (p. 88)<br />

Microsoft Vista Compliance<br />

The <strong>Contributor</strong> Web application and <strong>Contributor</strong> for Excel can be used with Microsoft Vista. For more information about installing and using them with Vista, see the IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong> for Microsoft Excel ® Installation <strong>Guide</strong>. For more information about using <strong>Contributor</strong> with Excel or the <strong>Contributor</strong> Web application, see the <strong>Contributor</strong> for Microsoft Excel ® User <strong>Guide</strong> and <strong>Contributor</strong> Browser User <strong>Guide</strong>.<br />

Microsoft Excel 2007<br />

This release supports <strong>Contributor</strong> and Analyst for Excel using Microsoft Excel 2007. For more information about using Analyst or <strong>Contributor</strong> with Excel, see the Analyst for Microsoft Excel ® User <strong>Guide</strong> and <strong>Contributor</strong> for Microsoft Excel ® User <strong>Guide</strong>.<br />

Select Folders in IBM Cognos Connection<br />


This release supports folder selection in IBM Cognos Connection when generating Framework<br />

Manager and Transformer models. For more information, see the following sections.<br />

● "Run the Generate Framework Manager Model Admin Extension" (p. 308)<br />

● "Generate Transformer Model" (p. 309)


Select a Framework Manager Package for an <strong>Administration</strong> Link<br />

This release supports browsing in IBM Cognos Connection folders to select a Framework Manager<br />

package for an <strong>Administration</strong> Link. For more information, see "Steps to Create Links with IBM<br />

Cognos Package as the Source" (p. 151).<br />



Chapter 2: IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong><br />

IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong> is a Web-based planning platform that can involve thousands<br />

of people in the planning process, collecting data from managers and others in multiple locations.<br />

Complex calculations are performed on the Web client, showing totals as soon as data is entered,<br />

preventing unnecessary traffic on the server during busy times. Information is then stored in a data<br />

repository, providing an accurate and single pool of planning data.<br />

In addition, users can use <strong>Contributor</strong> for Excel to view and edit <strong>Contributor</strong> data using Excel.<br />

Administrators use the <strong>Contributor</strong> <strong>Administration</strong> Console to create and configure <strong>Contributor</strong><br />

applications, manage access settings, distribute IBM Cognos 8 <strong>Planning</strong> - Analyst business plans,<br />

and configure the user's view of the business plan.<br />

Extending the Functionality of the <strong>Contributor</strong> <strong>Administration</strong><br />

Console and the Classic <strong>Contributor</strong> Web Client<br />

Extensions are provided that extend the functionality of the <strong>Contributor</strong> <strong>Administration</strong> Console<br />

and the Classic <strong>Contributor</strong> Web Client. There are two types of extensions: Admin Extensions and<br />

Client Extensions. Admin Extensions run in <strong>Administration</strong> Console. Client Extensions are activated<br />

through buttons on the Classic <strong>Contributor</strong> grid. For example, you can configure an extension to print to Excel.<br />

Using <strong>Contributor</strong> Applications<br />

A <strong>Contributor</strong> application is an Analyst plan that is made available to users on the Web through the <strong>Contributor</strong> <strong>Administration</strong> Console. A <strong>Contributor</strong> application consists of a series of linked cubes that can be used for data entry by many people at the same time.<br />

Cubes<br />

A cube is similar to a spreadsheet. A cube always contains rows and columns and usually at least one other page, making it multidimensional. It is used to collect data. Cells in cubes can contain entered data or calculations.<br />

Dimensions<br />

The rows, columns, and pages of a cube are created from dimensions. Dimensions are lists of related items, such as Profit and Loss items, products, customers, cost centers, and months. Dimensions also contain all the calculations. One dimension can be used by many cubes.<br />

e.Lists<br />

The structure of an application is based on an e.List. An e.List is a kind of dimension that contains a hierarchical structure that typically reflects the structure of the organization. For example, it may<br />


include cost centers and profit centers. There is one e.List per application, and the e.List item is<br />

assigned to a user, group, or role. There are two types of user: planners and reviewers. A planner<br />

enters and submits data to be reviewed by a reviewer. There may be several layers of reviewer<br />

depending on the structure of the e.List.<br />
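To make the hierarchy concrete, here is a minimal Python sketch of an e.List as a tree whose leaf items belong to planners and whose parent items belong to reviewers. The class and field names are invented for illustration; they are not Contributor's internal structures.<br />

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EListItem:
    """One node in an e.List hierarchy (illustrative model only)."""
    name: str
    owner: str                      # user, group, or role assigned to this item
    role: str                       # "planner" for leaf items, "reviewer" for parents
    children: List["EListItem"] = field(default_factory=list)

def review_layers(item: EListItem) -> int:
    """Depth of the reviewer chain above the deepest planner."""
    if not item.children:
        return 0
    return 1 + max(review_layers(child) for child in item.children)

# A tiny organization: two cost centers reporting into one profit center.
elist = EListItem("Total Company", "cfo", "reviewer", [
    EListItem("Cost Center 100", "planner_a", "planner"),
    EListItem("Cost Center 200", "planner_b", "planner"),
])

print(review_layers(elist))  # one layer of review above the planners
```

A deeper organization simply adds more reviewer layers between the planners and the top item.<br />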

Access Tables and Saved Selections<br />

Access to cubes is managed by e.List item, using saved selections and access tables. A saved selection is a collection of dimension items that is assigned to e.List items using access tables. This means that users can view only data that is relevant to them. For example, you may want to show only travel expense items to one user and entertainment expense items to another user where both items are held in the same dimension.<br />

Using access tables, you assign different levels of access to e.List items, saved selections, dimension items, and cubes.<br />

For more information, see "Managing User Access to Data" (p. 115).<br />

D-Links<br />

Cubes are linked by a series of D-Links in Analyst. A D-Link copies information in and out of cubes, and sometimes to and from ASCII or text files.<br />

Managing <strong>Contributor</strong> Applications<br />

There is a single configuration process for all <strong>Contributor</strong> applications in an installation.<br />

You can use application folders to organize your applications into related groups. When you have<br />

created them, you can use the application folders to assign job servers, job server clusters and access<br />

rights to these groups of applications.<br />

Multiple Administrators<br />

You can secure individual elements of the <strong>Administration</strong> Console and therefore allow multiple administrators to access different parts of the <strong>Contributor</strong> application at the same time. For example, you can give rights to a specific user to create and configure applications on a specific datastore. You can choose to cascade rights to all applications on a datastore, or restrict rights to specific applications. <strong>Contributor</strong> administrators have access only to those applications and operations that they have rights for.<br />

Moving Data<br />

Administrators can use administration links to move data quickly and easily between applications without having to publish, reducing the need for large applications. You can have several small, focused applications. Smaller e.List structures provide quicker reconciliation times. Also, the need for cut-down models and access tables is reduced.<br />

<strong>Administration</strong> links give you the following benefits:<br />

● You can achieve a matrix e.List structure.<br />

For example, you can have a Company model where Human Resources reports into Country, and this can be linked to a Corporate model where Country reports into Human Resources.<br />

● You can import data from IBM Cognos 8 data sources.<br />

Using administration links, you can import data into IBM Cognos 8 <strong>Planning</strong> from packages that were modeled and published in Framework Manager.<br />

System Links<br />

Administrators can set up links so that Web client users and <strong>Contributor</strong> for Excel users can move data between <strong>Contributor</strong> cubes in the same or different applications. System links are run from the target application.<br />
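To illustrate the idea, the sketch below copies matching cells from one application to another, the way an administration link moves data without a publish. The dictionary layout and the run_admin_link function are invented for this example; they are not Contributor's datastore format or API.<br />

```python
# Each "application" is sketched as {(e_list_item, dimension_item): value}.
source_app = {
    ("France", "Salaries"): 1200,
    ("France", "Travel"): 300,
    ("Germany", "Salaries"): 1500,
}
target_app = {}

def run_admin_link(source, target, item_map):
    """Copy values for mapped dimension items from source to target."""
    for (elist_item, dim_item), value in source.items():
        if dim_item in item_map:
            target[(elist_item, item_map[dim_item])] = value

# Map source dimension items onto the target's (renamed) items.
run_admin_link(source_app, target_app, {"Salaries": "Staff Costs"})
print(target_app)
# {('France', 'Staff Costs'): 1200, ('Germany', 'Staff Costs'): 1500}
```

Unmapped items (Travel here) are simply left behind, which is how a focused target application stays small.<br />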

Automating <strong>Contributor</strong> Tasks Using Macros<br />

You can group related tasks into a single macro to run them in sequence. For example, you can group the following tasks: Load Import Data, Prepare Import, and Go to Production. Macros can be run in the <strong>Administration</strong> Console, from IBM Cognos Connection, used in events created in Event Studio, or by using external scheduling tools.<br />

Publishing Data<br />

Three publish layouts are available.<br />

The table-only layout is designed to give users greater flexibility in reporting on IBM Cognos 8<br />

<strong>Planning</strong> data, and for use as a data source for other applications. It is used with the Generate<br />

Framework Manager Model extension, and the Generate Transformer Model extension.<br />

The incremental publish layout publishes only e.List items that contain changed data. Users can<br />

schedule an incremental publish using a macro or through IBM Cognos Connection and Event<br />

Studio. You can achieve near real-time publishing by closely scheduling incremental publishes.<br />

The view layout, as supported in <strong>Contributor</strong> and Analyst version 7.2, is compatible with previous<br />

IBM Cognos <strong>Planning</strong> data solutions.<br />

Data is always published to a separate publish datastore.<br />
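The near real-time pattern above amounts to repeatedly publishing only what changed. The sketch below simulates that with an invented publish_changed_items function; in practice you would schedule a Contributor incremental publish macro through IBM Cognos Connection, Event Studio, or an external scheduler.<br />

```python
def publish_changed_items(changed):
    """Stand-in for an incremental publish: only e.List items with
    changed data are published, then the change set is cleared."""
    published = sorted(changed)
    changed.clear()
    return published

changed = {"Cost Center 100", "Cost Center 200"}   # edits since the last run
first = publish_changed_items(changed)    # both changed items are published
second = publish_changed_items(changed)   # no new edits, so nothing moves
print(first, second)                      # ['Cost Center 100', 'Cost Center 200'] []
```

Because an unchanged e.List item costs nothing, a tight schedule stays cheap between bursts of editing.<br />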

Creating a <strong>Contributor</strong> Application<br />

Creating a <strong>Contributor</strong> application involves the following tasks:<br />

❑ developing the plan in Analyst<br />

❑ designing the e.List<br />

❑ configuring rights<br />

❑ creating the application<br />

❑ creating the production application<br />

❑ running jobs<br />


❑ testing the application in the Web site<br />

Developing the Plan in Analyst<br />

Create a business model for planning, budgeting, and forecasting using Analyst. This step is typically performed by Analyst Model Builders.<br />

You can specifically design the Analyst model to be optimized for <strong>Contributor</strong> (p. 337). Part of the model development process involves establishing how the <strong>Contributor</strong> application is to be used in the organization. This includes looking at who contributes data and who reviews data. This information is used to create an e.List.<br />

For more information, see the Analyst User <strong>Guide</strong>.<br />

Designing the e.List<br />

An e.List is a central part of the <strong>Contributor</strong> application. It specifies how the application is distributed to end users, the hierarchy of the application, and security.<br />

An e.List has a hierarchical structure that typically reflects the structure of an organization. The dimension that represents the e.List is created in Analyst. The file containing e.List data is imported into the <strong>Contributor</strong> <strong>Administration</strong> Console.<br />

For more information, see "The e.List" (p. 93).<br />

Assigning Rights<br />

Rights determine whether users can view, save, submit, and so on.<br />

For example, you may want to allow a planner to view data but not save or submit it, or to make and save changes but not submit them.<br />

The rights a user can have are also affected by the view and review depth, set in the e.List window, and the Reviewer edit setting in the Application Options window. A user can have directly assigned rights or inherited rights.<br />

You can set up rights directly in the <strong>Administration</strong> Console after you create the <strong>Contributor</strong> application, or you can create and maintain the rights in an external system and import them.<br />

For more information, see "The e.List" (p. 93).<br />
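As a rough sketch of how direct and inherited rights might combine, the following Python example treats the effective right as a simple union. The Rights fields and the union rule are illustrative assumptions; the product's actual behavior also depends on settings such as view and review depth and Reviewer edit.<br />

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rights:
    """Illustrative subset of rights (not the product's full list)."""
    view: bool = False
    save: bool = False
    submit: bool = False

def effective_rights(direct: Rights, inherited: Rights) -> Rights:
    """Assume a user holds a right if it is granted directly or inherited."""
    return Rights(
        view=direct.view or inherited.view,
        save=direct.save or inherited.save,
        submit=direct.submit or inherited.submit,
    )

# A planner who may view and save directly, and inherits nothing extra:
print(effective_rights(Rights(view=True, save=True), Rights()))
# Rights(view=True, save=True, submit=False)
```

The "view but not save or submit" example from the text is just Rights(view=True) with nothing inherited.<br />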

Creating the Application<br />


After the model is created and tested in Analyst, create the <strong>Contributor</strong> application using the<br />

<strong>Administration</strong> Console. The application creation wizard helps you create a <strong>Contributor</strong> application.<br />

After the application is created, you can<br />

● configure the application<br />

This establishes how the application appears and behaves in the Web browser.<br />

● import the e.List data and rights<br />

● restrict what users can see and do using saved selections and access tables.


For example, you may want to hide salary details from some users.<br />

● import data from other applications<br />

After you set up the application, run the Production process.<br />

Creating the Production Application<br />

There can be two versions of a <strong>Contributor</strong> application: development and production.<br />

When you run the Go to Production process (p. 243), the development application becomes the production application. The previous production application, if it existed, is archived and a new development application is established. At this stage, the current production application and the new development application are the same.<br />

The production version is the live version of the application. It is the application that is online and that users are working on.<br />

Having two versions of an application means that you can make changes to the application without having to take it offline, reducing the time that users are offline to as little as a minute. This is the time taken to integrate the new e.List items into the hierarchy and set the correct workflow states (p. 295).<br />

Run the Go to Production process to formally commit a set of changes. When the Go to Production process is complete, jobs run to ensure that all the data is up to date, using a process named reconciliation (p. 54). Reconciliation happens on the server side or, if the user tries to view or edit the application before the e.List item is processed, reconciliation may happen on the client.<br />

Running Jobs<br />

A job is an administration task that runs on job servers and is monitored by the <strong>Administration</strong> Console. You can start the process and monitor its progress. All jobs can be run while the application is online.<br />

Using the Monitoring Console (p. 61) in the <strong>Administration</strong> tree, you can manage and monitor the progress of jobs in <strong>Contributor</strong> applications.<br />

An example of a job is reconcile, which ensures that the structure of the e.List item data is up to date, if required. This job is created and runs after Go to Production runs.<br />

For more information, see "Jobs" (p. 49).<br />

Testing the Web Site<br />

To test the Web site, run Go to Production and log on as a user with rights to the <strong>Contributor</strong> application. You should be able to view the application in a Web browser.<br />

The Administrator<br />

<strong>Administration</strong> can be divided into separate functions depending on your business needs. A <strong>Planning</strong> Rights Administrator assigns administrative access to <strong>Contributor</strong> applications and to functions within applications. Administrators have access only to those applications and operations that they have rights for. In addition, multiple administrators can access different parts of the <strong>Contributor</strong> application at the same time.<br />

You can restrict administrative access on a per-application basis, so that someone who can see only database maintenance in application A can create applications in application B.<br />

Administrators see only those applications that they have rights to, and only those functions within those applications.<br />

Depending on the rights assigned to them, administrators can<br />

● assign functional rights to other administrators<br />

● add job servers and job server clusters to the <strong>Planning</strong> Store<br />

● create and configure a <strong>Contributor</strong> application<br />

● create new e.List items<br />

● make changes to the e.List hierarchy<br />

● assign rights to e.List items<br />

● import actual data to the application<br />

● amend workflow states from the <strong>Administration</strong> Console where required<br />

● monitor the progress of jobs<br />

The Planner<br />

Planners are responsible for entering data into the <strong>Contributor</strong> application using the Web client or <strong>Contributor</strong> for Excel. This data is referred to as a contribution. Planners edit data only in the selection assigned to them by the administrator. They cannot make structural changes to the application. After data is entered, the planner can either save or submit the data. Submitted data is forwarded to a reviewer and cannot be edited further by the planner unless the reviewer rejects it.<br />

A planner can be responsible for more than one e.List item and can view each e.List item individually or view all e.List items in a single view, if configured by the administrator.<br />

The Reviewer<br />

Reviewers are responsible for approving contributions submitted by one or more planners.<br />

Reviewers can view data and see the status of all submissions they are responsible for managing at<br />

any stage in the planning and review cycle. Reviewers can edit contributions if they have appropriate<br />

rights.<br />

After data is submitted, the reviewer has the following options:<br />

● reject the data if they are not satisfied with it<br />

Typically, a reviewer sends an email to the planner to give the reason for rejection.<br />

● accept the data<br />

When a complete set of data is viewed and considered satisfactory, it can be submitted to the next reviewer in the e.List hierarchy.<br />

● edit the data, if allowed<br />

After the reviewer takes over editorial control of a contribution, the planner is no longer the owner<br />

of the contribution. The reviewer has the right to submit it.<br />

Any user can be both a planner and a reviewer for the same e.List item. When users have both roles,<br />

they can view their review items and contribution items in the same Web page.<br />

In addition, reviewers can annotate any changes they make to a Contribution e.List item (p. 289).<br />
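The save, submit, reject, and accept cycle described above behaves like a small state machine. The sketch below uses simplified state names, not Contributor's actual workflow states.<br />

```python
# Simplified contribution workflow (state names are illustrative, not
# the product's exact workflow states).
TRANSITIONS = {
    ("in_progress", "submit"): "under_review",
    ("under_review", "reject"): "in_progress",   # planner may edit again
    ("under_review", "accept"): "submitted_up",  # passed to the next reviewer
}

def apply(state, action):
    """Advance a contribution, rejecting actions the workflow forbids."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"action {action!r} not allowed in state {state!r}")

state = "in_progress"
state = apply(state, "submit")   # planner submits; editing is locked
state = apply(state, "reject")   # reviewer rejects; planner can edit again
state = apply(state, "submit")
state = apply(state, "accept")
print(state)                     # submitted_up
```

Note that a planner cannot edit in under_review, and a reviewer cannot accept data that was never submitted; both simply have no transition.<br />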

The Toolbar<br />

The following functions are available on the <strong>Administration</strong> Console toolbar.<br />

● Email: sends email to users defined in an application using your default email tool.<br />

● Save: saves changes to the development application package.<br />

● Help: shows the <strong>Contributor</strong> <strong>Administration</strong> online help. You can also use the F1 key.<br />

● Go to Production: starts the Go to Production process. Go to Production can be automated.<br />

● Set Online: makes the application visible in a Web browser. Set Online can be automated.<br />

● Set Offline: prevents the application from being accessed in a Web browser. Set Offline can be automated.<br />

● Reset: resets the development application to the production application, clearing any changes that were made since you last ran Go to Production.<br />



Chapter 3: Security<br />

IBM Cognos 8 security is designed to meet the need for security in various situations. You can use<br />

it in everything from a proof-of-concept application where security is rarely enabled to a large-scale enterprise deployment.<br />

The security model can be easily integrated with the existing security infrastructure in your organization. It is built on top of one or more other authentication providers. You use the providers to<br />

define and maintain users, groups, and roles, and to control the authentication process. Each<br />

authentication provider known to IBM Cognos 8 is referred to as a namespace.<br />

In addition to the namespaces that represent other authentication providers, IBM Cognos 8 has its<br />

own namespace named Cognos. The Cognos namespace makes it easier to manage security policies<br />

and deploy applications.<br />

For more information, see the IBM Cognos 8 Security and <strong>Administration</strong> <strong>Guide</strong>.<br />

Cognos Namespace<br />

The Cognos namespace is the IBM Cognos 8 built-in namespace. It contains the IBM Cognos<br />

objects, such as groups, roles, data sources, distribution lists, and contacts.<br />

During the content store initialization, built-in and predefined security entries are created in this<br />

namespace. You must modify the initial security settings for those entries and for the IBM Cognos<br />

namespace immediately after installing and configuring IBM Cognos 8.<br />

You can rename the Cognos namespace using IBM Cognos Configuration, but you cannot delete<br />

it. The namespace is always active.<br />

When you set security in IBM Cognos 8, you may want to use the Cognos namespace to create<br />

groups and roles that are specific to IBM Cognos 8. In this namespace, you can also create security<br />

policies that indirectly reference other security entries so that IBM Cognos 8 can be more easily<br />

deployed from one installation to another.<br />

The Cognos namespace always exists in IBM Cognos 8, but the use of the groups and roles it contains<br />

is optional. The groups and roles created in the Cognos namespace repackage the users, groups,<br />

and roles that exist in the authentication providers to optimize their use in the IBM Cognos 8<br />

environment. For example, in the Cognos namespace, you can create a group named HR Managers<br />

and add to it specific users and groups from your corporate IT and HR organizations defined in<br />

your authentication provider. Later, you can set access permissions for the HR Managers group to<br />

entries in IBM Cognos 8.<br />

Authentication Providers<br />

User authentication in IBM Cognos 8 is managed by authentication providers. Authentication<br />

providers define users, groups, and roles used for authentication. User names, IDs, passwords,<br />

regional settings, and personal preferences are some examples of information stored in the providers.<br />


If you set up authentication for IBM Cognos 8, users must provide valid credentials, such as user<br />

ID and password, at logon time. In the IBM Cognos 8 environment, authentication providers are also<br />

referred to as namespaces, and they are represented by namespace entries in the user interface.<br />

IBM Cognos 8 does not replicate the users, groups, and roles defined in your authentication provider.<br />

However, you can reference them in IBM Cognos 8 when you set access permissions to reports and<br />

other content. They can also become members of Cognos groups and roles.<br />

The following authentication providers are supported in this release:<br />

● Active Directory Server<br />

● IBM Cognos Series 7<br />

● eTrust SiteMinder<br />

● LDAP<br />

● NTLM<br />

● SAP<br />

You configure authentication providers using IBM Cognos Configuration. For more information,<br />

see the Installation and Configuration <strong>Guide</strong>.<br />

Multiple Namespaces<br />

If multiple namespaces are configured for your system, at the start of a session you must select one<br />

namespace that you want to use. However, this does not prevent you from logging on to other<br />

namespaces later in the session. For example, if you set access permissions, you may want to reference<br />

entries from different namespaces. To log on to a different namespace, you do not have to log out<br />

of the namespace you are currently using. You can be logged on to multiple namespaces simultaneously.<br />

Your primary logon is the namespace and the credentials that you use to log on at the beginning<br />

of the session. The namespaces that you log on to later in the session and the credentials that you<br />

use become your secondary logons.<br />
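The primary and secondary logon rule can be sketched as follows. The Session class and namespace names are invented for illustration; this is not an IBM Cognos API.<br />

```python
class Session:
    """Sketch of multi-namespace logon: the first logon is primary,
    later logons to other namespaces become secondary."""
    def __init__(self):
        self.primary = None
        self.secondary = []

    def log_on(self, namespace, credentials):
        # Credentials would be validated by the provider; ignored here.
        if self.primary is None:
            self.primary = namespace
        elif namespace != self.primary and namespace not in self.secondary:
            self.secondary.append(namespace)

session = Session()
session.log_on("CorporateLDAP", "alice:secret")   # primary logon
session.log_on("PartnerAD", "alice:secret")       # secondary logon, no logout needed
print(session.primary, session.secondary)         # CorporateLDAP ['PartnerAD']
```

Logging on to a namespace you are already using changes nothing, which mirrors the fact that you never have to log out of the current namespace first.<br />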

When you delete one of the namespaces, you can log on using another namespace. If you delete all<br />

namespaces except for the Cognos namespace, you are not prompted to log on. If anonymous access<br />

is enabled, you are automatically logged on as an anonymous user. If anonymous access is not<br />

enabled, you cannot access the IBM Cognos Connection logon page. In this situation, use IBM<br />

Cognos Configuration to enable anonymous access.<br />

Hiding Namespaces<br />

You can hide namespaces from users during logon. This lets you have trusted signon namespaces<br />

without showing them on the namespace selection list that is presented when users log on.<br />

For example, you may want to integrate single signon across systems, but maintain the ability for<br />

customers to authenticate directly to IBM Cognos 8 without being prompted to choose a namespace.<br />

You can hide Custom Java Provider and eTrust SiteMinder namespaces that you configured.<br />

For more information, see the Installation and Configuration <strong>Guide</strong>.


Deleting or Restoring Unconfigured Namespaces<br />

You can preserve namespaces and all their contents in the content store even if they are no longer<br />

configured for use in IBM Cognos 8. When a namespace is not configured, it is listed as inactive in<br />

the directory tool.<br />

An inactive namespace is one that was configured, but later deleted in IBM Cognos Configuration.<br />

The namespace can be deleted from the content store by members of the System Administrators<br />

role. You cannot log on to an inactive namespace.<br />

If a new version of IBM Cognos 8 detects a previously configured namespace that is no longer used,<br />

the namespace appears in the directory tool as inactive. You can configure the namespace again if<br />

you still require the data. If the namespace is not required, you can delete it.<br />

When you delete a namespace, you also delete all entries in My Folders that are associated with<br />

that namespace, and their contents.<br />

An active namespace cannot be deleted, but can be updated.<br />

To recreate a namespace in IBM Cognos Configuration, you must use the original ID of the<br />

namespace. For information about configuring and recreating namespaces, see the Installation and<br />

Configuration <strong>Guide</strong>.<br />

Delete an Inactive Namespace<br />

If a namespace was removed from IBM Cognos Configuration and is no longer required, a member<br />

of the System Administrators role can delete it permanently in the directory tool. Deleting a<br />

namespace also deletes all the entries in My Folders that are associated with the namespace.<br />

To access the directory administration tool, you must have execute permissions for the directory<br />

secured feature and traverse permissions for the administration secured function.<br />

Steps<br />

1. In IBM Cognos Connection, in the upper-right corner, click Launch, IBM Cognos Administration.<br />

2. On the Security tab, click Users, Groups, and Roles.<br />

If the namespace you want to delete does not have a check mark in the Active column, it is<br />

inactive and can be deleted.<br />

3. In the Actions column, click the delete button.<br />

If the namespace is active, the delete button is not available.<br />

The namespace is permanently deleted. To use the namespace again in IBM Cognos 8, you must<br />

add it using IBM Cognos Configuration.<br />

Users, Groups, and Roles<br />


Users, groups, and roles are created for authentication and authorization purposes. In IBM Cognos 8,<br />

you can use users, groups, and roles created in other authentication providers, and groups and roles<br />

<strong>Administration</strong> <strong>Guide</strong> 31


Chapter 3: Security<br />

Users<br />

Groups and Roles<br />

32 <strong>Contributor</strong><br />

created in IBM Cognos 8. The groups and roles created in IBM Cognos 8 are referred to as IBM<br />

Cognos groups and IBM Cognos roles.<br />

A user entry is created and maintained in other authentication providers to uniquely identify a<br />

human or a computer account. You cannot create user entries in IBM Cognos 8.<br />

Information about users, such as first and last names, passwords, IDs, locales, and email addresses,<br />

is stored in the authentication providers. However, this may not be all the information required by<br />

IBM Cognos 8. For example, it does not specify the location of the users' personal folders, or format<br />

preferences for viewing reports. This additional information about users is stored in IBM Cognos 8,<br />

but when addressed in IBM Cognos 8, the information appears as part of the external namespace.<br />

Access Permissions for Users<br />

Users must have at least traverse permissions for the parent entries of the entries they want to access.<br />

The parent entries include container objects such as folders, packages, groups, roles, and namespaces.<br />

Permissions for users are based on permissions set for individual user accounts and for the<br />

namespaces, groups, and roles to which the users belong. Permissions are also affected by the<br />

membership and ownership properties of the entry.<br />

IBM Cognos 8 supports combined access permissions. When users who belong to more than one<br />

group log on, they have the combined permissions of all the groups to which they belong. This is<br />

important to remember, especially when you are denying access.<br />
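The combined-permission behavior can be sketched as a simple union. This is illustrative Python only; the group names and permission sets are hypothetical:<br />

```python
# Illustrative sketch: a user who belongs to several groups gets the
# union of the permissions of all of those groups for the session.
# Group names and permission strings are hypothetical.

def combined_permissions(user_groups, group_permissions):
    merged = set()
    for group in user_groups:
        merged |= group_permissions.get(group, set())
    return merged

perms = {
    "Sales Personnel": {"read", "execute"},
    "Developers": {"read", "write", "traverse"},
}
print(sorted(combined_permissions(["Sales Personnel", "Developers"], perms)))
```

This sketch models grants only; it does not model explicit denials.<br />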

Tip: To ensure that a user or group can run reports from a package, but not open the package in<br />

an IBM Cognos studio, grant the user or group execute and traverse permissions on the package.<br />

Users can become members of groups and roles defined in other authentication providers, and<br />

groups and roles defined in IBM Cognos 8. A user can belong to one or more groups or roles. If<br />

users are members of more than one group, their access permissions are merged.<br />

Groups and Roles<br />

Groups and roles represent collections of users that perform similar functions, or have a similar<br />

status in an organization. Examples of groups are Employees, Developers, or Sales Personnel.<br />

Members of groups can be users and other groups. When users log on, they cannot select a group<br />

they want to use for a session. They always log on with all the permissions associated with the<br />

groups to which they belong.<br />

Roles in IBM Cognos 8 have a similar function as groups. Members of roles can be users,<br />

groups, and other roles.<br />

The structure of groups and roles is as follows: groups can contain users and other groups, and roles can contain users, groups, and other roles.<br />

You create IBM Cognos groups and roles when<br />

● you cannot create groups or roles in your authentication provider<br />

● groups or roles are required that span multiple namespaces<br />

● portable groups and roles that can be deployed are required<br />

In this case, it is best to populate groups and roles in the other provider, and then add those<br />

groups and roles to the IBM Cognos groups and roles to which they belong. Otherwise, you<br />

may have trouble managing large lists of users in a group in the Cognos namespace.<br />

● you want to address specific needs of IBM Cognos 8 administration<br />

● you want to avoid cluttering your organization security systems with information used only in<br />

IBM Cognos 8<br />

IBM Cognos 8 <strong>Planning</strong> Roles<br />

There are two predefined roles for IBM Cognos 8 <strong>Planning</strong>:<br />

● <strong>Planning</strong> Rights Administrators<br />

This role enables you to access the <strong>Contributor</strong> <strong>Administration</strong> Console, Analyst, and all associated objects in the application for the first time following installation. You can then change the roles, groups, and users who can access the <strong>Contributor</strong> <strong>Administration</strong> Console and Analyst.<br />

● <strong>Planning</strong> <strong>Contributor</strong> Users<br />

This is the default role for users who want to access the <strong>Contributor</strong> Web client, <strong>Contributor</strong> for Excel, or Analyst. However, anyone can be assigned rights to use the <strong>Contributor</strong> Web client or <strong>Contributor</strong> for Excel, regardless of whether they are a member of the <strong>Planning</strong> <strong>Contributor</strong> Users role. Analyst users must be members of the <strong>Planning</strong> <strong>Contributor</strong> Users role.<br />

Note: You do not have to use these roles; they can be deleted or renamed. If you decide not to use<br />

the predefined roles, you must assign the access permissions and capabilities required by IBM<br />

Cognos 8 <strong>Planning</strong> to other groups, roles, or users.<br />

Capabilities<br />

Capabilities are secured functions and features. If you are an administrator, you set access to the<br />

secured functions and features by granting execute permissions for specified users, groups, or roles.<br />

Users must have at least one capability to be accepted through the IBM Cognos Application Firewall.<br />

The <strong>Planning</strong> <strong>Contributor</strong> Users role has the <strong>Planning</strong> <strong>Contributor</strong> capability by default. If you do<br />

not want to use this role, you can assign the capability to any groups, users, or roles that you create<br />

to replace this role by giving execute permissions to the appropriate members.<br />


The <strong>Planning</strong> Rights Administrators role has the <strong>Planning</strong> Rights <strong>Administration</strong> capability by<br />

default. To assign this capability to groups, users, or roles, you must give execute permissions to<br />

the appropriate members. You must also give members permissions to traverse the <strong>Administration</strong><br />

folder.<br />

Tip: You change capabilities through IBM Cognos <strong>Administration</strong>, by clicking the Security tab.<br />

For more information, see "Securing Functions and Features" in the <strong>Administration</strong> and Security<br />

<strong>Guide</strong>.<br />

Capabilities Needed to Create IBM Cognos 8 <strong>Planning</strong> Packages<br />

You can create a <strong>Planning</strong> Package during the Go to Production process, giving users access to IBM<br />

Cognos 8 studios from the <strong>Contributor</strong> application and enabling users to report against live Contributor data using the <strong>Planning</strong> Data Service. To do this, the <strong>Planning</strong> Rights Administrators role must be granted the Directory capability. Members of the System Administrator role are automatically granted this capability, but <strong>Planning</strong> Rights Administrator members are not.<br />

Setting up Security for an IBM Cognos 8 <strong>Planning</strong> Installation<br />

You must set up security for an IBM Cognos 8 <strong>Planning</strong> installation.<br />

To configure security for IBM Cognos 8 <strong>Planning</strong>, do the following:<br />

❑ Using Cognos Configuration, configure IBM Cognos 8 to use an authentication provider<br />

❑ Using IBM Cognos <strong>Administration</strong>:<br />

● add <strong>Contributor</strong> <strong>Administration</strong> Console and Analyst administrators to the <strong>Planning</strong> Rights<br />

Administrators role<br />

● add <strong>Contributor</strong> application members and Analyst users to the <strong>Planning</strong> <strong>Contributor</strong> Users<br />

role<br />

● enable <strong>Planning</strong> Roles in IBM Cognos 8<br />

● restrict access to the Everyone group<br />

● create additional roles or groups for IBM Cognos 8 <strong>Planning</strong> (optional)<br />

Note: We recommend that you add groups of users as defined in your authentication provider to the roles in IBM Cognos 8 <strong>Planning</strong>, rather than individual users. This means that changes in group membership are reflected immediately in the roles without having to make changes in IBM Cognos 8.<br />

❑ To configure Analyst security:<br />

● configure integrated Windows authentication if you want to execute macros without<br />

interaction<br />

● specify a default library<br />

● assign access at object, library, or item level<br />

❑ Using the <strong>Contributor</strong> <strong>Administration</strong> Console:


● set access rights for <strong>Contributor</strong> administrators to <strong>Contributor</strong> administration functions<br />

● set rights for <strong>Contributor</strong> application users for <strong>Contributor</strong> applications<br />

Configure IBM Cognos 8 to Use an Authentication Provider<br />

IBM Cognos 8 components can run with two types of access: anonymous and authenticated. By<br />

default, anonymous access is enabled. To use IBM Cognos 8 <strong>Planning</strong>, you must disable anonymous<br />

access so that users are required to log on. Only authenticated users can access your <strong>Planning</strong><br />

applications.<br />

For authenticated access, you must configure IBM Cognos 8 components to use a namespace<br />

associated with an authentication provider used by your organization. You can also configure<br />

multiple namespaces. At run time, users can choose which namespace they want to use.<br />

Note: If you are using the Generate Transformer Model extension, you must add the IBM Cognos<br />

Series 7 namespace. Local authentication export (LAE) files cannot be used.<br />

Steps to Disable Anonymous Access<br />

1. On each Content Manager computer, start IBM Cognos Configuration.<br />

2. In the Explorer window, under Security, Authentication, click IBM Cognos.<br />

The IBM Cognos resource represents the Cognos namespace. For more information, see the<br />

<strong>Administration</strong> and Security <strong>Guide</strong>.<br />

3. In the Properties window, ensure that Allow Anonymous Access is set to False.<br />

4. From the File menu, click Save.<br />

Steps to Configure Authentication Providers<br />

1. On each Content Manager computer, start IBM Cognos Configuration.<br />

2. In the Explorer window, under Security, right-click Authentication, and then click New resource,<br />

Namespace.<br />

3. In the Name box, type a name for your authentication namespace.<br />

4. In the Type list, click the appropriate namespace and then click OK.<br />

The new authentication provider resource appears in the Explorer window, under the<br />

Authentication component.<br />

5. In the Properties window, for the Namespace ID property, specify a unique identifier for the<br />

namespace.<br />

6. In the Properties window for Authentication, set the Allow session information to be shared between client applications property to True.<br />

This enables single signon between multiple clients on the same computer. Note that you cannot have single signon between a Windows application and a Web client application, for example, between <strong>Contributor</strong> administration and IBM Cognos 8.<br />


7. Specify the values for all other required properties to ensure that IBM Cognos 8 components<br />

can locate and use your existing authentication provider.<br />

8. Test the connection to a new namespace. In the Explorer window, under Authentication, right-click the new authentication resource and click Test.<br />

9. From the File menu, click Save.<br />

IBM Cognos 8 loads, initializes, and configures the provider libraries for the namespace.<br />

For more specific information about configuring each kind of authentication provider, see the IBM Cognos 8 <strong>Planning</strong> Installation and Configuration <strong>Guide</strong>.<br />

Add or Remove Members From <strong>Planning</strong> Rights Administrators and <strong>Planning</strong><br />

<strong>Contributor</strong> Users Roles<br />

Using IBM Cognos <strong>Administration</strong>, add <strong>Contributor</strong> <strong>Administration</strong> Console administrators and<br />

Analyst administrators to the <strong>Planning</strong> Rights Administrators role. Add <strong>Contributor</strong> application<br />

and Analyst users to the <strong>Planning</strong> <strong>Contributor</strong> Users role.<br />

Steps<br />

1. In IBM Cognos Connection, in the upper-right corner, click IBM Cognos <strong>Administration</strong>.<br />

2. On the Security tab, click Users, Groups, and Roles.<br />

3. Click on the IBM Cognos namespace.<br />

4. In the Actions column, click the properties button for the <strong>Planning</strong> Rights Administrators or<br />

<strong>Planning</strong> <strong>Contributor</strong> Users role.<br />

5. Click the Members tab.<br />

6. To add members, click Add and do the following:<br />

● To choose from listed entries, click the appropriate namespace.<br />

● To search for entries, click the appropriate namespace and then click Search. In the Search<br />

string box, type the phrase you want to search for. For search options, click Edit. Find and<br />

click the entry you want.<br />

● To type the name of entries you want to add, click Type and type the names of groups,<br />

roles, or users using the following format, where a semicolon (;) separates each entry:<br />

namespace/group_name;namespace/role_name;namespace/user_name;<br />

Here is an example:<br />

Cognos/Authors;LDAP/scarter;<br />

7. Click the right-arrow button, and when the entries you want appear in the Selected entries box,<br />

click OK.<br />

Tip: To remove entries from the Selected entries list, select them and click Remove. To select<br />

all entries in a list, click the check box in the upper-left corner of the list. To make the user<br />

entries visible, click Show users in the list.


8. Click OK.<br />

For more information, see the IBM Cognos 8 <strong>Administration</strong> and Security <strong>Guide</strong>.<br />
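The typed entry format in step 6 can be illustrated with a small parser. This is a hypothetical helper, not part of IBM Cognos 8:<br />

```python
# Hypothetical helper: split the typed member string
# namespace/name;namespace/name;... into (namespace, name) pairs.

def parse_entries(entry_string):
    pairs = []
    for entry in entry_string.strip().split(";"):
        if not entry:
            continue  # tolerate the trailing semicolon
        namespace, _, name = entry.partition("/")
        pairs.append((namespace, name))
    return pairs

print(parse_entries("Cognos/Authors;LDAP/scarter;"))
# Expect two pairs: ('Cognos', 'Authors') and ('LDAP', 'scarter')
```
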

Enabling <strong>Planning</strong> Roles in IBM Cognos 8<br />

<strong>Planning</strong> tasks that need access to the IBM Cognos 8 data store, such as running macros, go to production, and adding job servers, require additional security configuration. Access to the data store is restricted to certain groups through IBM Cognos <strong>Administration</strong>. You must have a system administrator or a user from one of the following groups perform tasks that require the IBM Cognos 8 data store. Optionally, you can add users to the required groups to perform the tasks.<br />

Group: Data Manager Authors<br />
Tool and Task: Framework Manager<br />
Only members of the Data Manager Authors group can import from a Framework Manager data source. You must have a Data Manager Authors group member perform this task.<br />

Group: Directory Administrators<br />
Tool and Task: IBM Cognos <strong>Administration</strong> Configuration, Data Source Connections<br />
You must have a Directory Administrator create a data source named IBM Cognos <strong>Planning</strong> - <strong>Contributor</strong> with a connection of type IBM Cognos <strong>Planning</strong> - <strong>Contributor</strong> before performing go to production.<br />

Group: Report Administrators or Server Administrators<br />
Tool and Task: IBM Cognos <strong>Administration</strong> Configuration, Content <strong>Administration</strong><br />
A Report Administrators or Server Administrators group member must publish and run macros in Content <strong>Administration</strong>.<br />

Restricting Access to the Everyone Group<br />

The Everyone group represents all authenticated users and the Anonymous user account. The<br />

membership of this group is maintained by the product and cannot be viewed or altered.<br />

By default, the Everyone group belongs to several built-in groups and roles in the Cognos namespace.<br />

To restrict access, remove the Everyone group from the System Administrators role and replace it with<br />

authorized groups, roles, or users. Optionally, remove the Everyone group from the <strong>Planning</strong><br />

<strong>Contributor</strong> Users role to restrict access to <strong>Contributor</strong> plans.<br />

For more information about the Everyone group, and System Administrators role, see "Initial<br />

Security" in the <strong>Administration</strong> and Security <strong>Guide</strong>.<br />

Recommendation - Creating Additional Roles or Groups for <strong>Contributor</strong><br />

To secure your <strong>Contributor</strong> applications, you may want to create roles or groups for the following<br />

users:<br />

● Classic <strong>Contributor</strong> client extensions<br />


● <strong>Contributor</strong> work offline users<br />

For example, create one work offline role per <strong>Contributor</strong> application and assign the offline<br />

users to the relevant role. The application administrator must also belong to this role.<br />

● system links<br />

● translated applications<br />

<strong>Planning</strong> <strong>Contributor</strong> User Roles<br />

When you assign user rights to <strong>Contributor</strong> applications, the first time you click User, Group, Role<br />

in the Rights window, a list of all the Users, Groups, and Roles that are members of the <strong>Planning</strong><br />

<strong>Contributor</strong> Users role is displayed. If the <strong>Planning</strong> <strong>Contributor</strong> Users role contains a large number<br />

of members directly below, you can improve performance by creating a smaller number of groups<br />

or roles below the <strong>Planning</strong> <strong>Contributor</strong> Users role to act as filters.<br />

Note: Members of roles can be users, groups, and other roles. Groups can contain users and other<br />

groups, but not roles.<br />

<strong>Planning</strong> Rights Administrator Roles<br />

If you have a large number of administrators, you may wish to create roles or groups for specific tasks, and then add the individual users to the role or group. For example, a role named Allow System Links can be used for this task, and any user added to that role is assigned that right.<br />

For more information about creating groups or roles, see the <strong>Administration</strong> and Security <strong>Guide</strong>.<br />

Configuring Access to the <strong>Contributor</strong> <strong>Administration</strong> Console<br />

38 <strong>Contributor</strong><br />

Administrative access can be set for the IBM Cognos 8 <strong>Planning</strong> environment or the datastore server<br />

that it contains. It can also be set for the application or publish container objects in the datastore<br />

server, the job server objects in the job server cluster, links, and macros. You can also set rights to<br />

individual functions of these objects.<br />

If you are an administrator with no access rights to an object, you cannot view the details for that object in the <strong>Administration</strong> Console, and cannot select the object. You will see the datastore servers even if you have no rights to them, because you may have rights to applications on the datastore server. If you have no rights to any object, the <strong>Contributor</strong> <strong>Administration</strong> Console closes.<br />

Cascade Rights<br />

If you set rights to operations for the IBM Cognos 8 <strong>Planning</strong> environment, the datastore server,<br />

or the job server, you are prompted to cascade the rights to the lower levels. Regardless of your<br />

response, when you grant rights to a datastore server or job server cluster, the user automatically<br />

inherits the same rights for any applications, publish containers, or job servers that you subsequently<br />

add.<br />

Tip: To always cascade rights without being prompted, in the Access Rights window, click Cascade<br />

rights selection.<br />

Rights that are cascaded are indicated by blue text.
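The cascade behavior described above can be sketched as inheritance down an object tree. This is an assumed model in Python; the class and right names are hypothetical:<br />

```python
# Conceptual sketch (assumed model, not IBM Cognos code): a right granted
# at a parent level, such as a datastore server, is inherited by every
# child object, including objects added after the grant.

class PlanningObject:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.rights = {}  # user -> set of rights granted directly

    def effective_rights(self, user):
        """Directly granted rights plus rights inherited from ancestors."""
        inherited = self.parent.effective_rights(user) if self.parent else set()
        return inherited | self.rights.get(user, set())

server = PlanningObject("datastore server")
server.rights["alice"] = {"Datastores Access"}
app = PlanningObject("application", parent=server)  # added later, inherits
print(app.effective_rights("alice"))
```

The design point the sketch captures is that inheritance applies to objects you subsequently add, which matches the behavior described for datastore servers and job server clusters.<br />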


Operations are the functions that can be performed in the <strong>Contributor</strong> <strong>Administration</strong> Console. Initially, a <strong>Planning</strong> Rights Administrator grants rights so that other <strong>Contributor</strong> administrators can perform these operations.<br />

Granting Access Rights to Administrators<br />

You can grant rights so that administrators can view who has write access to the development<br />

model, select an application and cube as the source and target of a system or administration link,<br />

add or remove a datastore server, and add a job server cluster.<br />

Rights: Session Details Access<br />
Privileges: You can grant the right to view who has write access to the development model.<br />

Rights: Global <strong>Administration</strong> Access<br />
Privileges: You can grant the right to<br />

● run Go to Production to create the production application<br />

● modify the datastore connection details for applications and publish containers<br />

● set an application online or offline in the Web client<br />

● assign access rights<br />

You can assign access rights to datastores, applications, publish containers, job server clusters, and job servers.<br />

Rights: Links Access<br />
Privileges: You can secure the ability to create, edit, execute, delete, import, and export administration links. You can also secure previously created administration links (administration link instances).<br />

You secure <strong>Administration</strong> Link instances individually. To locate them, scroll to the bottom of the Operations tree and look for Link Link Name.<br />

You can grant the right to select an application and cube as the source and target of a system or administration link.<br />

Rights: Datastores Access<br />
Privileges: You can grant the right to add or remove a datastore server.<br />

Rights: Application Containers Access<br />
Privileges: You can grant the right to<br />

● upgrade or import a <strong>Contributor</strong> application from an earlier version of <strong>Contributor</strong><br />

● link to an existing application<br />

● create an application<br />

● create a script that can be run by a database administrator to create an application. This option is used when the Generate Scripts option is set to Yes in the Admin Options table<br />

● remove an application from the <strong>Planning</strong> environment<br />

● assign or remove an application from an application folder<br />

Rights: Publish Container Access<br />
Privileges: You can grant rights for administering publish containers<br />

● link to a publish container<br />

● create a publish container<br />

● create a script that is run by a database administrator to create a publish container. This option is used when the Generate Scripts option is set to Yes in the Admin Options table.<br />

Publish containers are created the first time someone publishes. The administrator must have the right to create a publish container in order to publish.<br />

For publish jobs to be processed, the publish container must be added to a job server cluster or job server.<br />

To modify a publish datastore connection you must have the Global <strong>Administration</strong> right Modify connection document.<br />

Rights: Development Access<br />
Privileges: You can grant the right to perform the following operations in a development application<br />

● configure the Web client - navigation, orientation, options, planner-only cubes and <strong>Contributor</strong> help<br />

● configure application maintenance options<br />

● import and maintain the e.List, and rights<br />

● create and maintain access tables and saved selections<br />

● import data<br />

● synchronize<br />

● set datastore options<br />

● create and maintain translations<br />

Rights: Production Access<br />
Privileges: You can grant the right to perform the following operations on the production version of the application<br />

● publish data<br />

● delete annotations<br />

● preview data<br />

This option is important if you want to hide sensitive data<br />

● manage extensions<br />

Rights: Job Server Clusters Access<br />
Privileges: You can grant the right to add or remove a job server cluster.<br />

Rights: Job Server Access<br />
Privileges: You can grant the right to update job server properties for the <strong>Planning</strong> environment.<br />

Go to a job server cluster<br />

● to enable and disable job processing of an application<br />

● to add and remove job servers from a job server cluster<br />

● to add and remove applications, and publish containers from the job server<br />

● to update job server properties<br />

Steps<br />

1. Click Access Rights.<br />

2. Click Add.<br />


3. Click the appropriate Namespace and select the user, group, or role.<br />

4. Click the green arrow button and then click OK.<br />

5. Under Name, select the name.<br />

6. Click the operations needed.<br />

Tip: You can filter operations by datastore, application, publish container, job server cluster,<br />

and job server.<br />

7. Click Save.<br />

Access Rights for Macros<br />

Access rights apply as soon as the changes are saved.<br />

A macro consists of one or more macro steps that you select when you create a macro. For example,<br />

a macro that imports data into a cube, named "Import Expenses", might contain the following<br />

macro steps:<br />

● Upload an Import File<br />

● Prepare Import<br />

● Go To Production<br />

You can secure the rights to create, edit, execute, delete, and transfer macros, and the ability to<br />

create individual macro steps.<br />
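As a rough illustration, a macro can be thought of as a named, ordered list of steps, where creating each step type is itself a securable action. This is a hypothetical Python sketch, not IBM Cognos code:<br />

```python
# Illustrative sketch: building a macro succeeds only for step types
# the user has the create right for. All names are hypothetical.

def build_macro(name, steps, creatable_step_types):
    for step in steps:
        if step not in creatable_step_types:
            raise PermissionError(f"no create right for step: {step}")
    return {"name": name, "steps": list(steps)}

macro = build_macro(
    "Import Expenses",
    ["Upload an Import File", "Prepare Import", "Go To Production"],
    creatable_step_types={"Upload an Import File", "Prepare Import",
                          "Go To Production"},
)
print(macro["steps"])
```
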

By default, when a user with the Create Macro right adds a new macro instance, they are granted all rights to it: edit, execute, delete, and transfer. For other users, access to that instance is determined by their rights.<br />

After a link or macro is created, only a <strong>Planning</strong> Rights Administrator can change the instance<br />

rights.<br />

For example, consider a user who is granted create, edit, and execute macro access rights. By default,<br />

this user has all access rights to macros they create. However, they only have edit and execute rights<br />

to those created by other users. A <strong>Planning</strong> Rights Administrator can subsequently grant or revoke<br />

any rights to those macros for any user.<br />

The Execute Command Line macro step is secured by default. This is to minimize the risk of<br />

unauthorized access to resources.<br />

You can also secure the rights to edit, execute, delete and transfer previously created macro instances,<br />

for example, "Import Expenses".<br />

Rights Needed to Transfer Macros<br />

Transferring a macro enables you to copy steps from one macro to another, add steps to another<br />

macro, and make a copy of an existing macro. To do this, you must have:<br />

● transfer rights for the macro instance being transferred to or from<br />

● edit rights for the target macro


● create macro step rights for all the macro steps that you are transferring<br />
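The three requirements above can be expressed as a single check. The rights representation here is an assumption for illustration, not the product's:<br />

```python
# Illustrative sketch of the transfer preconditions: transfer rights on
# the macro instance, edit rights on the target macro, and create rights
# for every step type being transferred. Names are hypothetical.

def can_transfer(user_rights, source_macro, target_macro, steps):
    return (
        "transfer" in user_rights.get(source_macro, set())
        and "edit" in user_rights.get(target_macro, set())
        and all(s in user_rights.get("create steps", set()) for s in steps)
    )

rights = {
    "Import Expenses": {"transfer"},
    "Nightly Build": {"edit"},
    "create steps": {"Upload an Import File", "Prepare Import"},
}
print(can_transfer(rights, "Import Expenses", "Nightly Build",
                   ["Upload an Import File", "Prepare Import"]))
```
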

Authentication of Macros<br />

Authentication is based on the security context under which the macros are run. For example, if<br />

the macro contains a Go to Production step, the user specified in the authentication details when<br />

you create a macro must have the rights to run Go to Production. This is separate from the access<br />

rights used to secure the management of macros.<br />

Recommendation - Granting Rights<br />

We recommend that you secure the "Execute Command Line" step to prevent unauthorized access to the functionality that you can execute from the command line.<br />

Grant the create macro step "Execute Command Line" rights to trusted users only, typically <strong>Planning</strong><br />

Rights Administrators who can schedule macros on the <strong>Contributor</strong> server.<br />

Revoke edit, transfer, and delete rights from all macro instances that contain the "Execute Command Line" step to prevent unauthorized changes by another user. An administrator would have to re-grant edit, transfer, and delete rights to permit any kind of maintenance changes to the macro.<br />

Set Access Rights for <strong>Contributor</strong> Macros in IBM Cognos Connection<br />

<strong>Contributor</strong> macros that are published to IBM Cognos Connection must be secured by the <strong>Planning</strong> Rights Administrator or the System Administrator to ensure that only the required users, groups, and roles have access to execute or schedule macros.<br />

You must be a member of the <strong>Planning</strong> Rights Administrators or System Administrator roles to<br />

change user, group, and role capabilities.<br />

Steps<br />

1. In IBM Cognos Connection, in the upper-right corner, click IBM Cognos <strong>Administration</strong>.<br />

2. On the Security tab, click Capabilities.<br />

3. Click the actions button next to the <strong>Administration</strong> capability and click the Set Properties button. On the Permissions tab, grant the traverse permission to the required users, groups, and roles and click OK.<br />

4. Click the Administrator capability to show additional functions. Click the actions button next<br />

to Run activities and schedules and click Set Properties. On the Permissions tab, grant execute<br />

and traverse permissions to the required users, groups, and roles and click OK.<br />

5. On the Configuration tab, click Content <strong>Administration</strong>.<br />

6. Click the Set Properties button on the <strong>Administration</strong> page. On the Permissions tab, grant<br />

read, execute, and traverse permissions to the required users, groups, and roles.<br />

If required, grant write permission if you want the user, group, or role to be able to modify the<br />

contents of this folder. Grant set policy permission if you want the user, group, or role to be<br />

able to change security permissions on this folder.<br />

7. Click OK.<br />


Members of the required users, groups, or roles now have access to schedule and run <strong>Contributor</strong> macros in IBM Cognos Connection.<br />

Assign Scheduler Credentials<br />

All scheduled macros, and some jobs, run under the identity set up in the IBM Cognos scheduler or the scheduler credentials for other schedulers. This is because macros and jobs that run in the background cannot prompt the user for authentication information. Scheduler credentials are also used to look up <strong>Contributor</strong> email addresses from workflow pages.<br />

Scheduler credentials are associated with an authenticated user, which can include more than one<br />

user logged on to different namespaces. The user, or the group or role that they are a member of, must have rights granted in the Access Rights window to run macros, run jobs, and look up email addresses.<br />

If there are users from multiple namespaces in your application, you must have scheduler credentials<br />

associated with those namespaces. When the Validate Users job is run, the scheduler credentials<br />

must be associated with all the namespaces you imported users from. If you are logged on to only<br />

one namespace, users that belong to other namespaces are considered invalid.<br />
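The Validate Users behavior described above can be sketched as follows. The data shapes are assumptions; this is illustrative Python only:<br />

```python
# Illustrative sketch: users from namespaces that the scheduler
# credentials are not logged on to are reported as invalid.

def validate_users(users, credential_namespaces):
    """Return the users whose namespace has no scheduler credentials."""
    return [u for u in users if u["namespace"] not in credential_namespaces]

users = [
    {"name": "scarter", "namespace": "LDAP"},
    {"name": "jdoe", "namespace": "ActiveDirectory"},
]
print(validate_users(users, credential_namespaces={"LDAP"}))
```
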

Macros that are run directly from the <strong>Contributor</strong> <strong>Administration</strong> Console through user interaction<br />

run under the currently logged on user account and do not use the scheduler credentials. Macros<br />

that are triggered through events run under the identity used to create the event, not the scheduler<br />

credentials.<br />

Only users who have the <strong>Planning</strong> Rights <strong>Administration</strong> capability can modify the scheduler credentials. By default, the scheduler credentials are associated with the user who opens the <strong>Contributor</strong><br />

<strong>Administration</strong> Console for the first time, if that user is a <strong>Planning</strong> Rights Administrator. If that<br />

user is not a <strong>Planning</strong> Rights Administrator, a message states that the credentials are invalid, or do<br />

not exist, and a <strong>Planning</strong> Rights Administrator must update the scheduler credentials.<br />

Note that the following <strong>Contributor</strong> jobs use the scheduler credentials:<br />

● Validate Users<br />

● Reconcile<br />

Steps<br />

1. In the Systems Settings pane, click the Scheduler Credentials tab, and click Update.<br />

2. Click Logon.<br />

Initially, the Logon As button is disabled.<br />

3. Enter the User ID and password to be used as the scheduler credentials and click OK.<br />

4. If your logon is successful, the Logon button is disabled, and the Logon As button is enabled.<br />

This enables you to log on to different namespaces.<br />

You must provide scheduler credentials for all namespaces that users of the applications are<br />

imported from.
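The namespace rule for the Validate Users job can be illustrated with a short sketch. This is not product code; the function and the (user, namespace) data structure are hypothetical, assuming only the behavior described above: users from namespaces that have no scheduler logon are considered invalid.<br />

```python
# Illustrative sketch (hypothetical data structures, not a product API):
# users imported from namespaces that the scheduler credentials do not
# cover are treated as invalid by the Validate Users job.

def validate_users(imported_users, credential_namespaces):
    """Partition (user, namespace) pairs by whether the scheduler
    credentials include a logon for the user's namespace."""
    valid, invalid = [], []
    for user, namespace in imported_users:
        if namespace in credential_namespaces:
            valid.append(user)
        else:
            # No scheduler logon for this namespace: considered invalid.
            invalid.append(user)
    return valid, invalid

users = [("alice", "LDAP_A"), ("bob", "LDAP_B")]
valid, invalid = validate_users(users, {"LDAP_A"})
print(valid, invalid)  # ['alice'] ['bob']
```

If the scheduler credentials were also associated with LDAP_B, both users would be valid; this is why credentials must cover every namespace that users are imported from.<br />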


After you create and configure <strong>Contributor</strong> applications, the next step is to configure user access.<br />

You do this by assigning users, roles, or groups to e.List items in the Rights window in the <strong>Contributor</strong> <strong>Administration</strong> Console, either by importing a file or by manually inserting rights.<br />



Chapter 4: Configuring the <strong>Administration</strong> Console<br />

When you start the IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong> <strong>Administration</strong> Console for the first time,<br />

you must configure it before you can use it.<br />

Before you can configure the IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong> <strong>Administration</strong> Console, you<br />

must have the <strong>Planning</strong> rights administration capability, which by default is granted to<br />

members of the <strong>Planning</strong> rights administration role. The <strong>Planning</strong> rights administration capability<br />

can be granted to any user, group, or role (p. 33).<br />

To configure the IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong> <strong>Administration</strong> Console<br />

❑ Create the <strong>Planning</strong> tables in the <strong>Planning</strong> content store (p. 47).<br />

❑ Add a datastore server (p. 48).<br />

❑ Add job server clusters and job servers (p. 56).<br />

❑ Configure administration security for the <strong>Contributor</strong> <strong>Administration</strong> Console.<br />

When you have completed these tasks, applications can be created, imported, or upgraded (p. 60).<br />

Creating <strong>Planning</strong> Tables<br />

The first time the <strong>Contributor</strong> <strong>Administration</strong> Console is run, a check is run to see if any <strong>Planning</strong><br />

tables exist in the <strong>Planning</strong> content store. If not, you are prompted to create them.<br />

You can either create the tables by selecting Create and populate tables now or you can choose to<br />

create the tables using a script which is then run by a Database Administrator (DBA). Use this<br />

option if you do not have access rights to create tables in the database. To choose the script option,<br />

you select Generate table scripts and data files and enter the location in which to save the script.<br />

The script is then created automatically and the DBA can run the script to create the tables.<br />

If the filesys.ini path was not specified during installation, you must specify the path when you create<br />

<strong>Planning</strong> tables. You change this setting if the default path is not used.<br />

If you want to work with a FileSys.ini other than the default, and its associated properties and samples, select the Allow use of non-default FileSys.ini check box.<br />

The filesys.ini file is a control file used by Analyst. It contains file paths for the Libs.tab, Users.tab,<br />

and Groups.tab that control the specific library and user setup. You can edit the filesys.ini path by<br />

selecting Tools, Edit FileSys.Ini path.<br />
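Because filesys.ini is a standard INI-style control file, reading path entries from it can be sketched with Python's configparser. The section and key names below are hypothetical, used only to show the pattern; consult your actual filesys.ini for the real layout.<br />

```python
# Illustrative sketch: reading control-table paths from an INI-style file.
# The section name "ControlFiles" and the keys are hypothetical.
import configparser

def read_control_paths(ini_text):
    """Return the library/user/group control-table paths from INI text."""
    parser = configparser.ConfigParser()
    parser.read_string(ini_text)
    section = parser["ControlFiles"]  # hypothetical section name
    return {key: section[key] for key in ("libs", "users", "groups")}

sample = """
[ControlFiles]
libs = C:\\Planning\\Libs.tab
users = C:\\Planning\\Users.tab
groups = C:\\Planning\\Groups.tab
"""
print(read_control_paths(sample)["libs"])  # C:\Planning\Libs.tab
```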

<strong>Planning</strong> tables are typically prefixed with P_, and hold information about:<br />

● datastore servers<br />

● <strong>Contributor</strong> application datastores<br />

● job servers and job server clusters<br />

● security<br />


● macros<br />

● administration links<br />

● jobs<br />

Add a Datastore Server<br />


When you open the <strong>Contributor</strong> <strong>Administration</strong> Console for the first time, you must add an<br />

application datastore server.<br />

Steps<br />

1. Right-click Datastores in the <strong>Administration</strong> tree, and click Add Datastore.<br />

Tip: You can also modify the existing application datastore connection by clicking the Configure<br />

button on the Datastore server information page (p. 49).<br />

2. Select the Datastore provider.<br />

The options are SQL Server, Oracle, or DB2.<br />

3. Do one of the following:<br />

● For SQL Server, enter the Datastore server name, or click the browse button to list the<br />

available servers.<br />

● For Oracle, enter the service name.<br />

● For DB2, enter the database name.<br />

4. Enter the information as described in the table below:<br />

Trusted Connection: Click to use Windows authentication as the method for logging on to the datastore. You do not have to specify a separate logon ID or password. This method is common for SQL Server datastores and less common, but possible, for Oracle.<br />

Use this account: Enter the datastore account that this application will use to connect. This box is not enabled if you use a trusted connection.<br />

Password: Type the password for the account. This box is not enabled if you use a trusted connection.<br />

Preview Connection: Provides a summary of the datastore server connection details.<br />

Test Connection: Mandatory. Click to check the validity of the connection to the datastore server.<br />

5. If you want to configure advanced settings, click Advanced.<br />

Typically these settings should be left as the default. They may not be supported by all datastore<br />

configurations.<br />

Enter the following information.<br />

Provider Driver: Select the appropriate driver for your datastore.<br />

Connection Prefix: Specify a prefix to customize the connection string for the needs of the datastore.<br />

Connection Suffix: Specify a suffix to customize the connection string for the needs of the datastore.<br />

Datastore Server Information<br />

When you click the datastore server name in the tree, the datastore server information is displayed.<br />

The datastore server connection area displays the connection string that is used to connect to the datastore server. You can configure the datastore connection, change the datastore server, change the account details, and test the connection by clicking the Configure button.<br />

Jobs<br />

A job is an administration task that runs on job servers and is monitored by the <strong>Administration</strong> Console.<br />

Additional servers can be added to manage applications, speeding up the processing of jobs. You can run the job and monitor its progress. All jobs can be run while the application is online to Web clients. This means that you can make changes to the development version of an application while the production version is live. It reduces the offline time during the Go to Production process.<br />

Each job is split into job items, one job item for each e.List item. If you are running a Publish job for eight e.List items, eight job items are created. <strong>Contributor</strong> applications can be added to more than one job server, or to a job server cluster. When a job is created, job items are run on the different job servers, speeding up the processing of a job.<br />

Types of Jobs<br />

The following table describes each type of job:<br />


Commentary tidy: Deletes user annotations and audit annotations, and references to attached documents (p. 290).<br />

Cut-down models: Cuts down applications (p. 138).<br />

Cut-down tidy: Removes any cut-down models that are no longer required.<br />

Export queue tidy: Removes obsolete items from the export queue.<br />

Import queue tidy: Removes from the import queue model import blocks that are no longer required.<br />

Inter-app links: Transfers data between applications. To run successfully, requires that links are reconciled to e.List items.<br />

Job test: Tests the job subsystem using a configurable job item.<br />

Language tidy: Cleans up unwanted languages from the datastore after the Go to Production process is run. This job is created and runs after Go to Production is run.<br />

Prepare import: Processes import data ready for reconciliation (p. 176).<br />

Publish: Publishes the data to a view format (p. 259).<br />

Reconcile: Ensures that the structure of the e.List item data is up to date. This job is created and runs after Go to Production is run. For more information, see "Reconciliation" (p. 54).<br />

Reporting publish: Publishes the data to a table-only format (p. 265).<br />

Validate links: Updates the validation status of links.<br />

Validate users: Checks to see if the owner or editor of an e.List item has the rights to access the e.List item. The job checks the rights of users and updates the information in the Web browser. This job is created only if a change is made to the <strong>Contributor</strong> model, and runs after Go to Production is run. For more information, see "Ownership" (p. 95).<br />

Run Order for Jobs<br />

The order in which jobs are run depends on the number of job processors, the hardware used, and<br />

the job polling interval. If there is one job processor, jobs are run in no specific order. If there is<br />

more than one job processor, the first job processor to poll for a job picks a job in no specific order


and runs it. The additional job processors prioritize queued jobs over running jobs and pick one<br />

of the queued jobs to work on in no specific order. If there are enough job processors, all jobs can<br />

be run at the same time.<br />

Because jobs are independent of each other, they do not need to run in a specific order. The<br />

exception is the publish job. The publish job can be started when a reconcile is running, but for it<br />

to complete successfully, all e.List items that are being published must be reconciled.<br />
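The polling behavior described above can be sketched in a few lines. This is an illustrative model only, not the product's scheduler: it assumes jobs carry a simple status flag, and shows a free job processor preferring queued jobs over jobs that are already running.<br />

```python
# Illustrative sketch of job selection by a polling job processor:
# queued jobs are preferred over running jobs, and within each group
# the pick is made in no specific order.
import random

def pick_job(jobs):
    """jobs: list of dicts, each with a 'status' of 'queued' or 'running'."""
    queued = [j for j in jobs if j["status"] == "queued"]
    running = [j for j in jobs if j["status"] == "running"]
    pool = queued or running   # prefer queued jobs over running ones
    return random.choice(pool) if pool else None

job = pick_job([{"id": 1, "status": "running"}, {"id": 2, "status": "queued"}])
print(job["id"])  # 2
```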

Actions That Cause Jobs to Run<br />

The following actions in the <strong>Administration</strong> Console cause jobs to run.<br />

Import data (p. 143): prepare import and reconcile *<br />

Running administration links: inter-app links, validate links<br />

Publish data (p. 259): publish and reporting publish<br />

Delete annotations (p. 290): annotations tidy<br />

Create a new application: reconcile *<br />

Cut-down model options: cut-down models and cut-down tidy *<br />

Change planner-only cubes: reconcile *<br />

Synchronize with IBM Cognos 8 <strong>Planning</strong> - Analyst: reconcile *<br />

* Triggered by Go to Production<br />

Changes in the following areas do not cause jobs to be run:<br />

● navigation<br />

● orientation<br />

● <strong>Contributor</strong> help text<br />

● all application settings except for cut-down models<br />

● the backup file location<br />

Securing Jobs<br />

Some jobs run under scheduler credentials. This is because jobs that run in the background cannot prompt the user for authentication information. Scheduler credentials are associated with an authenticated session, which can include more than one user logged on to different namespaces.<br />

The following jobs run under scheduler credentials:<br />

● Validate users<br />

When the Validate users job is run, the scheduler credentials must be associated with all the<br />

namespaces you imported users from. If it is logged on to only one namespace, users that belong<br />

to other namespaces are considered invalid.<br />

● Reconcile<br />

● all jobs launched by a scheduled macro<br />

Only users who have the <strong>Planning</strong> Rights Administrator capability can modify the scheduler credentials.<br />

For information about setting the scheduler credentials, see "Assign Scheduler Credentials" (p. 44).<br />

Managing Jobs<br />

You can manage and monitor the progress of running jobs in applications from the Job Management branch in the <strong>Administration</strong> tree, or from the Monitoring Console (p. 61).<br />

You can also monitor the progress of jobs triggered by administration links from the Monitor Links<br />

window.<br />

The Job Monitor shows the following information:<br />

Status: The status is one of the following:<br />

● creating<br />

● ready to run<br />

● queued - the job is waiting to be run. The job may have to wait for a job server to become available before it can be run.<br />

● running<br />

● complete - the job has finished running successfully<br />

● cancelled - the job has failed and was cancelled. Double-click the line to find out more.<br />

Succeeded: The number of job items that ran successfully. If all did, All is stated. A percentage is shown.<br />

Failed: The number of job items that failed, if any.<br />

Total Items: The total number of job items that the job is split into. Jobs are broken down into atoms of work known as job items, enabling a job such as Publish to be run over different threads.<br />

Estimated Completion: An estimated date and time of completion for the job, in local time.<br />

Average Duration: The average interval between the completion of job items.<br />

Next Item: The next job item to be run.<br />

Start: The date and time when the job started, in the local format.<br />

Last Completion: The time when the last job item was completed.<br />

Duration (min): The time in minutes that the job task has taken to complete.<br />

Description: A description of the job type.<br />
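The Estimated Completion and Average Duration figures are related in an intuitive way. The exact formula the product uses is not documented here, but one plausible derivation is sketched below, assuming the remaining item count and average interval are known.<br />

```python
# A plausible way to derive an estimated completion time from the job
# monitor figures (a sketch; not the product's documented formula).
from datetime import datetime, timedelta

def estimate_completion(now, remaining_items, average_duration_seconds):
    """Estimated completion = now + remaining items x average interval."""
    return now + timedelta(seconds=remaining_items * average_duration_seconds)

now = datetime(2008, 1, 1, 12, 0, 0)
print(estimate_completion(now, 4, 30))  # 2008-01-01 12:02:00
```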

The lower area shows tasks running on job servers.<br />

Status: Indicates when a job is running.<br />

Start: The date and time when the job started on that processor, in local format.<br />

End: The date and time when the job stopped running on that processor.<br />

Duration (min): The time in minutes that the job task has taken to complete.<br />

User: The user account on the job server.<br />

Process ID: The process ID can be used for debugging.<br />

Thread ID: The thread ID can be used for debugging.<br />

Job View Refresh Interval (seconds)<br />

You can change the job view refresh interval to whatever is appropriate for your environment. We recommend that you do not set it to less than 5 seconds. For most needs, 15 seconds is adequate. If network traffic is an issue, the interval can be longer.<br />

Publish Jobs<br />

The publish process is carried out by the reporting publish job if a table-only layout (p. 265) is selected, or by the publish job if a view layout (p. 279) is selected.<br />

To monitor publish jobs in the jobs monitor, select the publish container from the box that is<br />

available at the top of the job monitor.<br />


Cancelled Jobs<br />

If a job is cancelled, you can display information about why it was cancelled by double-clicking the line.<br />

Pausing Jobs<br />

If you want to pause a running job, you must stop the job server. To do this, right-click the job server or job server cluster name and click Disable Job Processing.<br />

Reconciliation<br />

Reconciliation ensures that the copy of the application that the user uses on the Web is up to date.<br />

For example, all data is imported, new cubes are added, and changed cubes are updated.<br />

Reconciliation takes place after Go to Production runs and a new production application is created.<br />

It also takes place when an administration link or an Analyst to <strong>Contributor</strong> link to the production<br />

application is run. However, in this case, only the imported data is updated. The application is<br />

reconciled on the job server unless a user tries to access an e.List item before it is reconciled. For<br />

more information, see "The Effect of Changes to the e.List on Reconciliation" (p. 104).<br />

All contribution e.List items are reconciled and aggregated if<br />

● the application was synchronized with Analyst<br />

● changes were made to the Access Tables, Saved Selections, or the e.List that resulted in a different<br />

pattern of No Data cells for contribution e.List items that are common to both the development<br />

and production applications<br />

Note: Changing an access setting to No data, saving the application, and then changing the<br />

access setting to what it was previously also results in reconciliation.<br />

● they have import blocks<br />

Note: If you use the Prepare zero data option, an import data block is created for all e.List<br />

items, so all e.List items are reconciled.<br />

● children were added or removed<br />

● they are new<br />
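The conditions above amount to a simple decision rule, sketched below. The flag names are hypothetical, standing in for the model states the product tracks; the logic mirrors the bullet list: any one condition is enough to trigger a full reconcile.<br />

```python
# Sketch of the reconciliation conditions listed above.
# The flags are hypothetical stand-ins, not a product API.

def needs_full_reconcile(item):
    """item: dict of flags for one contribution e.List item."""
    return (
        item.get("app_synchronized_with_analyst", False)
        or item.get("no_data_pattern_changed", False)  # access tables / saved selections / e.List
        or item.get("has_import_block", False)         # true for all items when Prepare zero data is used
        or item.get("children_changed", False)
        or item.get("is_new", False)
    )

print(needs_full_reconcile({"has_import_block": True}))  # True
```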

Reconciliation on the Server<br />


During reconciliation on the server, the following process occurs for each e.List item:<br />

● The data block is loaded onto the job server.<br />

● The transformation process occurs:<br />

New cubes are added, changes are made to existing cubes such as dimensions added or removed,<br />

and access tables are applied.<br />

● Data is imported.<br />

● Data is saved.
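The steps above form a fixed pipeline per e.List item, sketched below with stand-in helper functions. The helper names are hypothetical; only the order of the four steps comes from the description above.<br />

```python
# The four server-side reconciliation steps as a sketch pipeline.
# The helper functions are hypothetical stand-ins.

def reconcile_on_server(elist_item, load_block, transform, import_data, save):
    block = load_block(elist_item)   # 1. load the data block onto the job server
    block = transform(block)         # 2. add/change cubes, apply access tables
    block = import_data(block)       # 3. import data
    return save(block)               # 4. save data

# Usage with stand-in functions that record the call order:
steps = []
result = reconcile_on_server(
    "Item1",
    lambda i: steps.append("load") or {"item": i},
    lambda b: steps.append("transform") or b,
    lambda b: steps.append("import") or b,
    lambda b: steps.append("save") or b,
)
print(steps)  # ['load', 'transform', 'import', 'save']
```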


Reconciliation can be performed across multiple processors and job servers. For more information, see "Manage a Job Server Cluster" (p. 56).<br />

Reconciliation on the Client<br />

If a user attempts to open an e.List item that is not yet reconciled, the e.List item is reconciled on<br />

the user's computer. This may take a few minutes.<br />

In the Web application, the user can determine the state of an e.List item by the workflow icon. If<br />

an icon is enclosed in a red box, the item is out of date and needs reconciling. For more information,<br />

see "Workflow State Definition" (p. 295).<br />

If the Prevent client-side reconciliation check box is selected (p. 82), a user cannot open the e.List<br />

item until the e.List item is reconciled on the server side.<br />

Deleting Jobs<br />

You can delete jobs, but we do not recommend it because it can leave your data in an unstable state.<br />

Tip: If you want to pause a running job, you can stop the job server. Right-click the name of the<br />

job server or job server cluster and click Disable Job Processing.<br />

Prepare import<br />

If you delete a prepare import job, the next time you run Prepare import, it tidies this up.<br />

Cut-down models<br />

You typically delete a cut-down models job after you cancel the Go to Production process. Clicking<br />

the Go to Production button reruns the cut-down models job. If you delete a cut-down models job<br />

during Go to Production, Go to Production does not run.<br />

Reconciliation<br />

If you delete a reconcile job, all e.List items that were not reconciled stay unreconciled. e.List items that were already reconciled by the job remain reconciled. When a user attempts to view an unreconciled e.List item in the grid, if client-side reconciliation is allowed, the reconciliation process takes place on the client. If client-side reconciliation is not allowed, users cannot view the e.List items. Rerun the Go to Production process to trigger a repair reconcile job.<br />

Managing Job Servers<br />

A job server cluster groups the job servers that are used to process administration jobs. You can<br />

stop, start, and monitor all job servers in a cluster at the same time. You can also choose to have<br />

specific objects run on individual servers.<br />

When you create a <strong>Contributor</strong> application, you are prompted to choose a cluster or server to run<br />

the jobs.<br />

You can add applications and publish containers to multiple job servers. This speeds up the processing of jobs, such as reconciliation, giving near-linear improvements per processor.<br />

The following two scenarios provide examples of how this might work.<br />


Scenario 1<br />

If you publish large amounts of data, you might want to assign the publish container to different<br />

servers than those that are processing the main application. This is because if you assigned the<br />

application container and the publish container to cluster X containing servers A, B, C, and D, it is<br />

possible that a large job, for example, publishing 5000 e.List items, could consume all the resources<br />

in cluster X for some time, preventing other jobs from being processed. So in this case you might<br />

want to assign the publish container to, for example, servers A and B, and the application container<br />

to server C and D to enable other jobs such as prepare import, and server-side reconciliation to run<br />

at the same time as the publish job. There is no control at job level.<br />

Scenario 2<br />

If you have some applications that are in production and are live, you might want to have one or<br />

more job clusters with your best hardware to monitor these applications to ensure that they are<br />

stable and available. For applications that are in development, you might want to have a different<br />

job cluster containing your less efficient hardware.<br />

For more information, see "Jobs" (p. 49). You can also automate job server management (p. 199).<br />

Manage a Job Server Cluster<br />

Before administration jobs can be processed, you must add a job server cluster to the <strong>Administration</strong><br />

Console. A job server cluster manages a group of job servers. For information on managing job<br />

servers, see "Managing Job Servers" (p. 55).<br />

Steps<br />

1. Right-click Job Server Clusters and click Add Job Server Cluster.<br />

2. Type a unique name for the job server cluster.<br />

3. Click Add.<br />

The next step is to add job servers to the job server cluster.<br />

4. Remove a job server cluster by right-clicking it and selecting Delete Job Server Cluster.<br />

5. Disable job processing on a cluster by right-clicking the cluster and selecting Disable Job Pro-<br />

cessing.<br />

6. To test communication with the job server, right-click the job server name and click Test. Any<br />

errors are displayed in a message box.<br />

Add a Job Server and Change its Content Store<br />


You must add job servers to a job server cluster so that administration jobs can be processed.


Job servers can exist in only one <strong>Planning</strong> content store. You can either manually delete the job<br />

server (p. 59) and add it to the new <strong>Planning</strong> content store, or on the job server, change the content<br />

store that it is associated with.<br />

Steps to Add a Job Server<br />

1. Right-click Job Servers and click Add Job Server.<br />

2. Select a job server from Available Servers.<br />

Computers that are job servers must have the <strong>Planning</strong> job service option enabled in IBM<br />

Cognos Configuration. For more information, see the IBM Cognos 8 <strong>Planning</strong> Installation and<br />

Configuration <strong>Guide</strong>.<br />

3. Enter the Polling interval.<br />

This sets how often the job server polls the database to see if there are any new jobs. The default<br />

is 15 seconds.<br />

4. Enter the Maximum concurrent jobs. The default is -1, which allows one concurrent job per processor.<br />

Typing 0 stops job execution.<br />

5. Click Add.<br />

You should now add any applications, publish containers, and other objects to either a job<br />

server cluster, or an individual job server. Jobs such as reconcile are not run until an application<br />

is added to a job server or job server cluster.<br />

Tip: To modify the properties of a job server, right-click the server name and click Properties.<br />
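The Maximum concurrent jobs rules above (-1 for one job per processor, 0 to stop execution) can be expressed as a small function. This is an illustrative sketch of the stated rules, not product code.<br />

```python
# Sketch of how the Maximum concurrent jobs setting maps to an
# effective concurrency limit, per the rules described above.
import os

def effective_concurrency(setting, processors=None):
    processors = processors or os.cpu_count() or 1
    if setting == -1:
        return processors       # default: one concurrent job per processor
    return max(setting, 0)      # 0 stops job execution; positive = explicit cap

print(effective_concurrency(-1, processors=4))  # 4
print(effective_concurrency(0))                 # 0
```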

Steps to Change the Content Store for the Job Server<br />

1. On the job server computer, open IBM Cognos Configuration.<br />

2. Under Data Access, Content Manager, Content Store, change the value for Database server<br />

with port number or instance name.<br />

Add Applications and Other Objects to a Job Server Cluster<br />

Adding an object to a job server cluster means that when a job such as Publish is run, the job is run<br />

by the job servers in the cluster.<br />

You can monitor applications at cluster level and at job server level.<br />

You must add the following objects to run jobs:<br />

● a <strong>Contributor</strong> application<br />

● an application folder<br />

● View Publish Container<br />

You need this if you are publishing using the view layout.<br />

● Table-only Publish Container<br />

You need this if you are publishing using the Table-only Layout.<br />


● a <strong>Planning</strong> Content Store<br />

Step<br />

● Click the job server cluster name, and click Add.<br />

The window contains details about the objects monitored by the cluster. It lists only objects<br />

that are directly assigned to the cluster, not those objects that are assigned to individual job<br />

servers.<br />

Tip: Start or stop all job servers in the cluster by right-clicking the cluster name and selecting Enable Job Processing or Disable Job Processing as required.<br />

Add Objects to a Job Server<br />

You can add objects to individual job servers.<br />

Tip: You can see whether a cluster or an individual job server is started from the icons in the administration tree: one icon shows that a job server is started, and another shows that a job server cluster is started.<br />

Steps<br />

1. Click the name of the job server, and then click Monitored Applications.<br />

2. Click Add.<br />

Three panes of information appear.<br />

Monitored objects held in the Job Server Cluster: Shows the objects monitored by the cluster that the job server is in.<br />

Monitored objects held in this Job Server: Lists the objects monitored by the job server. It contains only details of objects directly assigned to the server. You can assign an application folder and its contents to a job server or job server cluster as a monitored object. You can also assign each application within an application folder to a different job server. For more information on monitoring application folders, see "Monitor Application Folders" (p. 59).<br />

Job Tasks being processed by this Job Server: Monitors the jobs that are being processed by the server:<br />

● data source: If you see N/A by an application folder, this indicates that the folder may have more than one data source (applications within an application folder can be on more than one datastore).<br />

● application name<br />

● job type: the job being processed (p. 49)<br />

● Process ID: identifies the process used to execute a job task. Used for debugging.<br />

● Thread ID: identifies the thread used to execute the job task. A thread is created for each job task. Multiple threads can be created per process. The number of threads per process is set in the Maximum concurrent jobs option. Used for debugging.<br />

3. Right-click the job server name and select Enable Job Processing or Disable Job Processing as<br />

required.<br />

You can automate this process. See "Job Servers (Macro Steps)" (p. 199), "Disable Job Processing" (p. 199), or "Enable Job Processing" (p. 199).<br />

Remove Job Servers<br />

A job server can exist in only one <strong>Planning</strong> content store at a time. You can either manually delete<br />

the job server from the <strong>Planning</strong> content store and add it to the new one, or on the job server,<br />

change the content store that it is associated with. For more information, see "Steps to Change the<br />

Content Store for the Job Server" (p. 57).<br />

Step<br />

● In the <strong>Contributor</strong> <strong>Administration</strong> Console, right-click the job server name and click Delete<br />

Job Server.<br />

Monitor Application Folders<br />

An application folder can be assigned to a job server or job cluster as a monitored object. This adds<br />

the applications held in the folder, which are not already being monitored, to the job server or<br />

cluster, allowing a group of applications to be monitored in one single step. If applications are<br />

subsequently added to that application folder, they are not automatically assigned to the job server<br />

or cluster.<br />


You can also remove an application folder from a job server or cluster in a single step. This removes<br />

all the applications contained and monitored in that folder by that job server or cluster. You can<br />


still add individual applications within an application folder to be monitored by different servers<br />

or clusters.<br />

● To add the contents of an application folder to be monitored, select the application folder row<br />

and click Add.<br />

● To add individual applications from within an application folder, select the applications required<br />

and click Add.<br />
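The add and remove behavior above is simple set arithmetic; the following sketch models it with hypothetical application-name sets (not an actual <strong>Contributor</strong> API):<br />

```python
def add_folder_to_monitor(monitored, folder_apps):
    """Add the folder's applications that are not already monitored.

    Applications added to the folder later are NOT picked up
    automatically; the folder must be assigned again.
    """
    return set(monitored) | set(folder_apps)

def remove_folder_from_monitor(monitored, folder_apps):
    """Remove all monitored applications contained in the folder."""
    return set(monitored) - set(folder_apps)

monitored = {"sales_plan"}
monitored = add_folder_to_monitor(monitored, {"sales_plan", "hr_plan"})
# monitored is now {"sales_plan", "hr_plan"}
```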

Creating, Adding and Upgrading Applications<br />

You can create new applications, add existing applications to the server, or upgrade applications<br />

by right-clicking Applications and selecting one of the following options:<br />

● Create New Application (p. 65).<br />

● Link to existing Applications (p. 60). This adds <strong>Contributor</strong> applications that exist on the<br />

datastore server to the <strong>Administration</strong> Console.<br />

● Upgrade Application (p. 329). You can import and upgrade <strong>Contributor</strong> applications created<br />

in an earlier version of <strong>Contributor</strong>.<br />

Remove Datastore Definitions and <strong>Contributor</strong> Applications<br />

When you remove a <strong>Contributor</strong> application and datastore definition, only the entries in the<br />

<strong>Planning</strong> content store are removed; the datastores remain on the server. To delete an application<br />

from the server, you must have Database Administrator (DBA) permissions. If you do not, contact<br />

your database administrator.<br />

Step<br />

● In the <strong>Administration</strong> Console, right-click the server name or the application name in the<br />

<strong>Administration</strong> tree and click Remove Application.<br />

Tip: You can also assign applications to application folders. For more information, see<br />

"Application Folders" (p. 68).<br />

Adding an Existing Application to a Datastore Server<br />


There are several circumstances in which you might add an existing application to a datastore<br />

server:<br />

● During application creation, you selected the Generate datastore scripts and data files option.<br />

In this case, before you can add the application, the script must be run by a database administrator.<br />

See "Running the Script.sql file (DBA Only)" (p. 69).<br />

● You chose to create scripts during an application upgrade.<br />

● You want to link to an application that was created in another <strong>Planning</strong> content store.


Steps<br />

To link to an application created in another <strong>Planning</strong> content store, you must first remove the<br />

application from the <strong>Planning</strong> content store table in which it currently resides.<br />

1. Right-click Applications in the <strong>Administration</strong> tree and click Link to existing Applications.<br />

2. The Add existing applications window lists the applications that exist for the currently selected<br />

datastore server. Click the application you require.<br />

3. If the application was created in a different <strong>Planning</strong> content store, add an application ID. This<br />

is used by the Web browser to identify the application. It must be a unique character string.<br />

4. For applications created using the Script.sql file, select the XML package. The location of the<br />

package.xml is the same as the location of the script.<br />

5. Click Add.<br />

Ensure that the application is added to a job server or job server cluster. For more information,<br />

see "Add Objects to a Job Server" (p. 58).<br />

The Monitoring Console<br />

The Monitoring Console enables you to monitor the progress of the following processes running<br />

in the <strong>Contributor</strong> <strong>Administration</strong> Console from one location:<br />

● applications (p. 52)<br />

● job server clusters (p. 58)<br />

● administration links (p. 52)<br />

● macros (p. 194)<br />

● deployment (p. 170)<br />

Managing Sessions<br />

Multiple administrators may administer an IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong> application at<br />

any one time. However, to prevent data integrity issues, when a change is made to the development<br />

model, it is locked. The lock is dropped when the administrator navigates to a different function<br />

or closes the <strong>Administration</strong> Console. You can manually remove the lock by clicking the Remove<br />

button in the Session Details area. Use this with caution because it could prevent other users from<br />

saving changes.<br />

You can automate the removal of an application lock. See "Remove Application Lock" (p. 223) for<br />

more information.<br />

The development model can be changed by the following:<br />

● changing navigation<br />

● changing orientation<br />


● selecting grid options<br />

● selecting application options<br />

● selecting planner-only cubes<br />

● creating <strong>Contributor</strong> help text<br />

● selecting dimensions for publish<br />

● modifying the e.List and rights<br />

● selecting saved selections and access tables<br />

● synchronizing with IBM Cognos 8 <strong>Planning</strong> - Analyst<br />

● modifying datastore maintenance options<br />

● resetting development to production.<br />

The production model is updated during the Go to Production process. There are checks in place<br />

to ensure that two jobs do not run concurrently. Additionally, when you create a job using the<br />

<strong>Administration</strong> Console, you are warned if a valid job of that type already exists and alerted that<br />

the new job may overwrite the existing job.<br />
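The locking rules illustrated in the examples that follow amount to optimistic concurrency with a version check. This sketch is a hypothetical model of that behavior, not the product's implementation; the class and method names are invented:<br />

```python
class DevelopmentModel:
    """Hypothetical model of the Administration Console locking rules."""

    def __init__(self):
        self.version = 0        # bumped on every saved change
        self.lock_owner = None  # administrator currently holding the lock

    def take_lock(self, user, last_read_version):
        # Taking the lock bounces any administrator who currently holds it.
        self.lock_owner = user
        # If the model changed since this console last read it,
        # the console must reload before editing can continue.
        must_reload = last_read_version < self.version
        return must_reload

    def save(self, user):
        # Saving without holding the lock is rejected.
        if self.lock_owner != user:
            return False
        self.version += 1
        return True

model = DevelopmentModel()
model.take_lock("User B", 0)
model.save("User B")                  # version is now 1
stale = model.take_lock("User A", 0)  # True: User A must reload first
saved = model.save("User B")          # False: User B no longer has the lock
```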

Example 1<br />

<strong>Administration</strong> Console 1: User A clicks Navigation in the <strong>Administration</strong> tree. At this stage, the<br />

development model is not locked.<br />

<strong>Administration</strong> Console 2: User B clicks e.List in the <strong>Administration</strong> tree, and makes a change. The<br />

development model is then locked by User B. A further check is made to ensure that no changes<br />

were made to the development application since it was last read by <strong>Administration</strong> Console 2. If<br />

this is true, User B can continue to edit the e.List and save the changes.<br />

<strong>Administration</strong> Console 1: User A edits Navigation. User A is informed that User B has the<br />

application locked and is asked whether they want to take the lock. User A takes the lock from User B.<br />

Because User B updated the development model since User A opened the Navigation page, the<br />

development model is no longer up-to-date. User A is prompted, Navigation is reloaded with<br />

the updated development model, and User A can continue to edit and save changes.<br />

Example 2<br />

<strong>Administration</strong> Console 1: User A opens Application Options and makes a change.<br />

<strong>Administration</strong> Console 2: User B opens Orientation and makes a change. User B is prompted that<br />

User A has the lock and User B takes the lock.<br />

<strong>Administration</strong> Console 1: User A clicks Save. User A is prompted that User B now has the lock<br />

and that changes made by User A will not be saved. The controls on Application Options are<br />

disabled.<br />

Example 3<br />

<strong>Administration</strong> Console 1: User A opens Grid Options and makes a change.


<strong>Administration</strong> Console 2: User B opens Publish Data and selects the detail for a publish. Because<br />

this does not change the development model, there is no need for a lock to be taken. Both User A<br />

and User B can work on the application concurrently.<br />

Sending Email<br />

You can send email to a user defined in an application using your default email program.<br />

Steps<br />

1. Click the application containing the user you are sending an email to.<br />

2. Click Send email in the toolbar.<br />

The first time you do this, you may be asked to supply your mail provider connection details.<br />

Refer to your mail provider documentation.<br />

3. Choose to send an email to All Users or All Active Users, as defined in the application.<br />

4. Click the group of users you want to send email to, then click Mail. This opens a new<br />

email message in your standard email tool.<br />

5. Create and send the email.<br />

You then return to the Send email dialog box.<br />



Chapter 5: Creating a <strong>Contributor</strong> Application<br />

To create a <strong>Contributor</strong> application, you perform the following steps:<br />

❑ If you have DBA rights to the datastore server, create a <strong>Contributor</strong> application using the<br />

Application Wizard.<br />

If you do not have DBA rights to the datastore server, create a script using the Application<br />

Wizard, then send the script to the DBA who will run the script. After the script is run, add the<br />

application to the <strong>Administration</strong> Console; see "Adding an Existing Application to a Datastore<br />

Server" (p. 60).<br />

These processes create a development application. A development application is not seen by<br />

the users; it is simply the application that you work on in the <strong>Administration</strong> Console.<br />

This means you can make changes to the application without having to take it offline, reducing<br />

the amount of time that users are offline to as little as a minute. This is the period of time taken<br />

to integrate the new e.List items into the hierarchy and set the correct workflow states.<br />

❑ Modify the application.<br />

You can set the order in which the users are asked to go to each cube, select the dimensions that<br />

make up the rows, columns, and pages, add and modify the e.List, import users, define access<br />

rights, and import data.<br />

❑ Run the Go to Production process, see "The Go to Production Process" (p. 243).<br />

This activates a wizard which creates a production application. This makes the application<br />

available to end-users.<br />

Creating a <strong>Contributor</strong> Application<br />

Use the Application Wizard to create a <strong>Contributor</strong> Application.<br />

Steps<br />

1. In the <strong>Administration</strong> tree, under the name of the datastore server where the application is to<br />

be created, right-click Applications.<br />


2. Select Create New Application from the menu.<br />

A check is made to see if you are logged on with appropriate rights. If you are not, you are<br />

prompted to log on. Click Next.<br />

3. Select a library.<br />

This is the Analyst library that your application will be based on. The D-Cube library list is for<br />

information only and tells you which D-Cubes the selected library contains. Click Next.<br />


4. Select the e.List.<br />

The e.List is a dimension that is used to determine the hierarchical structure of an application.<br />

For more information, see "The e.List" (p. 93).<br />

When you select an e.List, the right-hand list tells you which D-Cubes in the application contain<br />

this e.List.<br />

5. Click Next.<br />

The <strong>Administration</strong> Console checks the library to ensure that it can create the application. If<br />

there are any errors or warnings, you can view these and save them to a text file for further<br />

investigation. If there are errors, the wizard terminates. You can continue to create the<br />

application if there are only warnings.<br />

6. A window listing the statistics for the application is displayed. This is for information<br />

only. See "Model Details" (p. 68) for more information.<br />

You can print the details on this window.<br />

7. Enter application details as follows:<br />

Application Display Name: Defaults to the Analyst library name, but you can change this during<br />

application creation. After an application is created, this cannot be changed. There are no character<br />

restrictions. The maximum length is 250 characters.<br />

Datastore Name: The name of the datastore application that contains the <strong>Contributor</strong> application<br />

database tables. This defaults to the Analyst library name, stripping out special characters. Only<br />

the following characters are allowed: lowercase letters a to z and numeric characters. No punctuation<br />

is allowed except for underscore. Maximum 30 characters (18 for DB2 OS390). SQL Server only:<br />

reserved keywords are not allowed; see your SQL Server documentation for more information.<br />

Application ID: This is used by the Web browser to identify the application. Only the following<br />

characters are allowed: lowercase letters a to z and numeric characters. No punctuation is allowed<br />

except for underscore.<br />

Location of datastore files: Enter the file path where the SQL Server datastore files will be created<br />

on the administration server. You can browse the data server file structure for file locations. Oracle<br />

and DB2 users do not see this box; the location of datastore files is determined by your datastore<br />

structure.<br />

Location of datastore backup files: Enter a location for the datastore backup files. The file location<br />

must exist prior to creating an application and should be a different location than the datastore<br />

location.<br />

Create and populate datastore now: This option creates the <strong>Contributor</strong> application and adds it to<br />

the <strong>Administration</strong> Console tree. You only have this option if you have appropriate DBA rights.<br />

Generate datastore scripts and data files: Select this option if you want to create a datastore script<br />

and data files that can be used to create an application at a later stage. This option is mandatory if<br />

you do not have DBA rights. To create an application at a later date, the script must be run by a<br />

DBA and the application added to the datastore. After the application is added and you click any<br />

of the branches, you are prompted to select the package.xml file.<br />

Save scripts in this folder: Enter or browse for a file location to save datastore scripts and data<br />

files to.<br />

Advanced: For Oracle applications, specify the following options:<br />

● Tablespace used for data (defaults to USERS)<br />

● Tablespace used for indexes (defaults to INDX)<br />

● Tablespace used for BLOBS (Binary Large Objects) (defaults to USERS)<br />

● Temporary tablespace (defaults to TEMP). This can be changed. This tablespace is used for any<br />

automatically created publish datastores.<br />

These tablespaces must exist.<br />

For DB2 UDB, specify the tablespace name for data, indexes, and BLOBS. It defaults to<br />

USERSPACE1. Customized tablespace names can be used. Tablespaces need to be at least 8000<br />

pages.<br />

8. Select the job cluster or job server to run the jobs for this application.<br />

9. Click Next and then Finish. The application creation progress is displayed.<br />
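The character and length rules described above are easy to check programmatically. A sketch under the stated constraints; the reserved-word set here is a placeholder, not SQL Server's actual list:<br />

```python
import re

# Lowercase letters a to z, digits, and underscore only.
NAME_RE = re.compile(r"^[a-z0-9_]+$")

def valid_datastore_name(name, db2_os390=False,
                         reserved=frozenset({"select", "table"})):
    """Check a datastore name: allowed characters only, max 30
    characters (18 for DB2 OS390), and no reserved keywords."""
    limit = 18 if db2_os390 else 30
    return bool(NAME_RE.match(name)) and len(name) <= limit and name not in reserved

def valid_application_id(app_id):
    """Application IDs follow the same character rules."""
    return bool(NAME_RE.match(app_id))

valid_datastore_name("sales_plan_2008")  # True
valid_datastore_name("Sales-Plan")       # False: uppercase and hyphen
```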


If you generated datastore scripts and data files, send the script to the DBA. After your DBA<br />

has run the script, you can add the application to the <strong>Administration</strong> Console. For more information,<br />

see "Creating, Adding and Upgrading Applications" (p. 60).<br />

Application Folders<br />

You can use application folders to organize your applications into related groups. You can assign<br />

job server clusters (p. 58) and access rights to groups of applications, making it easier to administer<br />

multiple applications at the same time. For example, you can add or remove a group of applications<br />

from a job server or job server cluster. An application folder can contain applications that exist on<br />

more than one datastore.<br />

When you assign an application to an application folder, it moves from under the Applications<br />

branch of the tree to the relevant application folder.<br />

Tips<br />


● To remove an application from an application folder, right-click the application and select<br />

Remove application from folder. The application is moved under Applications. If that was the<br />

only application in the folder, the folder is removed from the <strong>Administration</strong> Console.<br />

● To move an application between folders, you must first remove it from the original application<br />

folder.<br />

● Selecting Remove application removes the application from the <strong>Administration</strong> Console.<br />

Steps<br />

1. Before you create an application folder, at least one application must exist.<br />

You must also have the right to assign applications to application folders.<br />

2. Right-click the application name and click Assign Application to an Application Folder.<br />

3. Click Create a new Application Folder and add the Application, or Assign the Application to<br />

an existing Application Folder.<br />

4. If creating a new folder, enter an appropriate folder name.<br />

5. If you selected Assign application to an existing folder, select the folder name from the<br />

drop-down list.<br />

6. Click Assign.<br />

Model Details<br />

The model details window in the Application Wizard displays information about the application.<br />

Exceeding the following numbers may not be a problem for your application, but could slow down<br />

the end user; a redesign of the model in Analyst could help.<br />


Number of Cubes (warning number: 10): This is the optimum number of cubes that can be<br />

displayed in the <strong>Contributor</strong> <strong>Administration</strong> Console.<br />

Number of D-Links (warning number: 25): A large number of D-Links is not necessarily a problem.<br />

However, more than 25 D-Links could lead to some specific performance issues that may be difficult<br />

to predict. This may slow both runtime performance and initialization times to an unacceptable<br />

level.<br />

Total Number of Cells in Application, per e.List slice (warning number: 500,000): A large number<br />

of cells in the application will lead to performance problems unless the model builder is able to use<br />

no data settings in access tables to create e.List specific models that are considerably smaller. Under<br />

certain circumstances it is possible to distribute very large models with <strong>Contributor</strong>, particularly if<br />

bandwidth and server capacity are not an issue.<br />

Largest Cube (warning number: 200,000): This restriction is similar to the Total Number of Cells<br />

in Application. A large single cube can lead to performance problems at runtime; for example,<br />

breakback and data entry can become slow, unless the cube is cut down using no data settings in<br />

access tables.<br />

Total Number of D-List Items in Application (warning number: 2,500): A very large number of<br />

dimension items can cause the model definition to be very large. See also Largest Dimension below.<br />

Largest Dimension (warning number: 1,000): The <strong>Contributor</strong> Web application is not designed to<br />

carry a large number of dimension items across the rows and/or columns. The application is<br />

optimized for views that fit onto a single window. When lists are large, No Data settings and<br />

cut-down models can be used to reduce large lists down to a size that is more manageable for each<br />

e.List item. There may also be usability issues when manipulating large lists in the <strong>Administration</strong><br />

Console.<br />
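A small sketch that flags model statistics exceeding the warning numbers in the table above; the statistic keys and the function are illustrative, not part of the product:<br />

```python
# Recommended limits taken from the warning table.
LIMITS = {
    "cubes": 10,
    "d_links": 25,
    "cells_per_elist_slice": 500_000,
    "largest_cube": 200_000,
    "d_list_items": 2_500,
    "largest_dimension": 1_000,
}

def model_warnings(stats):
    """Return the names of every statistic that exceeds its limit.

    Exceeding a limit is not an error, but flags where end-user
    performance may suffer and an Analyst redesign could help.
    """
    return [name for name, limit in LIMITS.items()
            if stats.get(name, 0) > limit]

model_warnings({"cubes": 12, "d_links": 20, "largest_cube": 250_000})
# flags "cubes" and "largest_cube"
```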

Running the Script.sql file (DBA Only)<br />

The DBA runs the script.sql file using SQL Server, Oracle, or DB2 tools as appropriate.<br />

SQL Server: open and run the file using SQL Server Query Analyzer.<br />


Oracle: using SQL*Plus, at the prompt, enter @c:\script.sql.<br />

DB2: run script.sql using Command Center.<br />

After you have run the script successfully, add the application to the <strong>Contributor</strong> <strong>Administration</strong><br />

Console. For more information, see "Adding an Existing Application to a Datastore Server" (p. 60).<br />

Application Information<br />

When you click the application name, the following application information is displayed:<br />

Application Display Name: The name of the application as displayed in the <strong>Administration</strong><br />

Console.<br />

Datastore Name: The name of the application datastore.<br />

Application ID: A unique identifier used to identify the application in the Web client.<br />

Library Name: The name of the library in Analyst that is used to create the application.<br />

Library Number: The number of the library in Analyst.<br />

The e.List: The name of the dimension that is used as the e.List placeholder.<br />

Configuring the <strong>Contributor</strong> Application<br />


You can set options for the <strong>Contributor</strong> application:<br />

● set global Web Client settings<br />

● set the order in which the users are asked to go to each cube (p. 71)<br />

● select the cube dimensions that make up the rows, columns, and pages of the cubes (p. 72)<br />

● configure grid settings (p. 72)<br />

● configure application settings (p. 74)<br />

● designate which cubes can be viewed only by the planner (p. 78)<br />

● create text for the users to see in the cube or Web client (p. 78)


Configure the Web Client<br />

Configure Web client settings that apply to all <strong>Contributor</strong> applications. These settings can only<br />

be applied by the <strong>Planning</strong> Rights Administrator.<br />

Steps<br />

1. In the <strong>Administration</strong> tree, click System Settings, and then Web Client Settings.<br />

2. To enable the <strong>Contributor</strong> applications to be deployed automatically over the Web, select Allow<br />

automatic downloads and installations.<br />

To enable automatic software updates, select Allow automatic client software updates.<br />

3. To modify the separator that is used between names in emails sent from <strong>Contributor</strong> applications,<br />

enter the separator in the Email character separator box.<br />

4. To restrict the size of attached files:<br />

● In the Attached Documents area, select Limit Document Size.<br />

● Enter an amount (in megabytes) for the Maximum Document Size (MBs).<br />

5. In the Allowable Attachment Types box, either remove a selected file type by clicking<br />

Remove, or click Add to add a new allowable attachment type. At the end of the list of file types,<br />

enter a label name and the file type extension in each box. Make sure you include an asterisk (*)<br />

with the file type extension.<br />

Note: Changes made to the Attached Documents settings take effect without the need to perform<br />

a Go To Production.<br />
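The attachment restrictions can be sketched as a simple check. The file-type labels, the `*.ext` convention, and the 10 MB limit used here are assumptions for illustration:<br />

```python
import os

def attachment_allowed(filename, size_mb, allowed_exts, max_mb=None):
    """Check an attached document against the allowable types and,
    when Limit Document Size is set, the maximum size in megabytes.

    allowed_exts holds extensions with an asterisk, e.g. "*.xls"
    (an assumed storage convention).
    """
    ext = "*" + os.path.splitext(filename)[1].lower()
    if ext not in allowed_exts:
        return False
    return max_mb is None or size_mb <= max_mb

allowed = {"*.xls", "*.doc", "*.pdf"}
attachment_allowed("forecast.xls", 4, allowed, max_mb=10)  # True
attachment_allowed("notes.exe", 1, allowed, max_mb=10)     # False
```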

Set the Cube Order for an Application<br />

Set the order in which the users are prompted to go to each cube. We recommend that you set the<br />

order to be the same order in which the links run in the Analyst model. Users can switch between<br />

cubes by clicking tabs.<br />

You can provide instructions to be shown for each cube, see "Creating General Messages and Cube<br />

Instructions" (p. 78).<br />

Steps<br />

1. In the appropriate application, click Development, Web-Client Configuration, and then<br />

Navigation.<br />

2. In the Set Cube Order box, click each cube name and move as required using the arrow keys.<br />

3. Click Save.<br />


The changes are visible to users after you run the Go to Production process.<br />


Set the Order of Axes<br />

Select the cube dimensions that make up the rows, columns, and pages of the cubes. You can also<br />

nest dimensions.<br />

Steps<br />

1. In the appropriate application, click Development, Web-Client Configuration, and then<br />

Orientation.<br />

2. Click the tab for the cube that you want to modify.<br />

3. Click a dimension and move it using the arrows, to set it as a page, row, or column. Repeat<br />

with the other dimensions if required.<br />

4. If you want to create nested (merged) dimensions, place two dimensions under either Row or<br />

Column.<br />

Planners cannot change nested settings, even if they reslice.<br />

5. Click Save.<br />

The changes are visible to users after you run the Go to Production process.<br />

If a cube has dimensions defined as pages whose dimension items have different access settings,<br />

such as Read and Write, the cube opens with the first writable page selected by default.<br />

If all items have the same access setting, the cube opens with the first selected page as created<br />

in Analyst.<br />

If a user moves from one cube to another with the same dimension, the cube opens to the same<br />

item selected in the previous cube.<br />

Change Grid Options<br />

Change grid options to affect the way the Web application appears and behaves.<br />

Steps<br />

1. In the appropriate application, click Development, Web-Client Configuration, and then Grid<br />

Options.<br />

2. Set the options you want.<br />

3. Click Save.<br />

Changes will be applied to the production application after running the Go to Production<br />

process.


Set Breakback Option: When breakback is enabled and data is entered into a calculated cell,<br />

data in other cells is automatically calculated from this data. For example, you can distribute a<br />

total annual salary over twelve months to calculate a monthly payment. For more information, see<br />

the Analyst User <strong>Guide</strong>. All cubes have breakback selected by default.<br />

Allow Multi e.List Item Views: If the Allow Multi e.List Item Views option is on, the user can<br />

select a multi e.List item view, or a single e.List item view. This means they can edit or view all<br />

contributions they are responsible for in one window. The default value is off because a large<br />

amount of memory may be needed to open a multi-e.List item view.<br />

Allow Slice and Dice: If the Allow Slice and Dice option is on, users can swap a row or column<br />

with a page or swap a page with a column or row heading. The default value is on.<br />

Recalculate After Every Cell Change: If the Recalculate After Every Cell Change option is on and<br />

a user types data into the application, the data is recalculated as soon as the focus moves from the<br />

cell. The default is Off, meaning that data is calculated when pressing Enter.<br />

Select Color for Changed Values: You can specify the color of data in the grid for the following<br />

situations:<br />

Saved data: the color of data with no change. The default is black.<br />

Typed data not entered: the color of data that is typed but not entered. The default is green.<br />

Data entered but not saved: the color of data entered in the current session but not saved. The<br />

default is blue.<br />

The default colors are different in Analyst where the color of data that is entered but not saved is<br />

red, not blue, and where detail/total is blue/black, not normal/bold.<br />

Possible Unexpected Results Example<br />

You may get unexpected results if you select Recalculate After Every Cell Change with breakback<br />

on. For example, with Breakback and Recalculate After Every Cell Change selected, if you type<br />

240,000 in the total and then type 40,000 in July, the total changes to 260,000 and the remaining<br />

months have 20,000.<br />


With Breakback On and Recalculate After Every Cell Change Off, press Enter. The total holds as<br />

240,000 and the remaining months have 18,182.<br />
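The two outcomes can be reproduced with plain arithmetic. This sketch illustrates the behavior described above and is not the product's breakback algorithm; the even spread and rounding are assumptions:<br />

```python
def breakback_hold_total(months, index, value):
    """Breakback with Recalculate After Every Cell Change off:
    the edited cell and the total are held, and the difference is
    spread evenly over the remaining months (rounded here)."""
    total = sum(months)  # e.g. 240,000
    remaining = (total - value) / (len(months) - 1)
    return [value if i == index else round(remaining)
            for i in range(len(months))]

months = [20_000] * 12  # annual total 240,000

# Recalculate on: typing 40,000 in July (index 6) simply
# raises the total to 260,000; other months keep 20,000.
recalc_total = sum(months) - months[6] + 40_000

# Recalculate off, press Enter: the total is held at 240,000
# and the other eleven months each become about 18,182.
held = breakback_hold_total(months, 6, 40_000)
```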

Change Application Options<br />


Change Application Options to affect the operation of the production application.<br />

Steps<br />

1. In the appropriate application, click Development, Web-Client Configuration, and then<br />

Application Options.<br />

2. Set the options you want.<br />

3. Click Save.<br />

Changes will be applied to the production application after running the Go to Production<br />

process.<br />

History Tracking: Use the History Tracking option to track actions performed by users. When<br />

you select Action time stamps and errors or Full debug information with data, information is<br />

recorded in a datastore table named history.<br />

Choose one of the following:<br />

● No History<br />

Does not track changes.<br />

● Action time stamps and errors<br />

Tracks the times of users’ actions, including saves, submissions, and rejections. This option is<br />

selected by default.<br />

● Full debug information with data<br />

Tracks the times of users’ actions and records a copy of the data at the point when each action<br />

takes place. Because this option consumes a lot of memory, use it only for debugging purposes.<br />


Cut-down Models: If cut-down models (p. 138) are used, a different model definition is produced<br />

for each e.List item. This can significantly speed up transmission to the client for models with large<br />

e.Lists. However, if used inappropriately, it could slow down performance.<br />

Choose one of the following:<br />

● No cut-down models<br />

● For each aggregate e.List item (review level model definition)<br />

● For every e.List item<br />

For more information about these options, see "Cut-down Model Options" (p. 139).<br />

Allow Reviewer Edit: If the Allow Reviewer Edit option is on, users with review and submit rights<br />

to an e.List item can edit an e.List item up to the review depth level, which is assigned in the e.List<br />

window.<br />

Allow Bouncing: If the Allow Bouncing option is on, someone with appropriate rights can take<br />

ownership of an e.List item by clicking the Edit or Annotate button while the item is being edited<br />

or reviewed by another owner. If the Allow Bouncing option is off, ownership of an e.List item<br />

can only be taken when it is not being edited or reviewed by another user. When a user has<br />

ownership taken away from them, they are issued a warning and cannot save or submit their data<br />

to the server. They can save to file by right-clicking in the grid.<br />

Prompt to Send E-mail When User Takes Ownership: When this option is selected, user 1 is<br />

prompted to send an email to user 2 when user 1 takes ownership of an e.List item from user 2.<br />

The email is copied to other people who have submit or save rights.<br />



<strong>Use Client-side Cache</strong><br />

If the Use client-side cache option is on, model definitions and data blocks are cached on the client computer so that they do not have to be downloaded repeatedly from the server. This greatly reduces the network bandwidth required and is invisible to the user. It is not possible on client computers where a security policy prevents saves to the hard disk. When the user requests the data by opening the grid, a mechanism checks whether the data is cached on the client computer and whether that data has changed.<br />

<strong>Prevent Off-line Working</strong><br />

If the Prevent off-line working option is on, users cannot work offline. Offline working is possible only when the cache on the client computer can be used, but the client-side cache option does not have to be enabled.<br />

For more information, see "Working Offline" (p. 89).<br />

<strong>Prompt to Send Email on Reject</strong><br />

Reviewers are prompted to send an email message to the current owners of contribution e.List items when they reject an item. The email is sent to the person who submitted the e.List item, and copied to other people who have submit or save rights and people who have rights assigned directly for the review e.List item, that is, rights that are not inherited through the hierarchy.<br />

<strong>Prompt to Send E-mail on Save</strong><br />

The user is prompted to email all immediate reviewers and copy (cc) all immediate owners when they save an item.<br />

<strong>Prompt to Send E-mail on Submit</strong><br />

The user is prompted to email all immediate reviewers and copy (cc) all immediate owners when they submit an item.<br />

<strong>Workflow Page Refresh Rate (minutes)</strong><br />

You can change the interval at which the server is polled to refresh the workflow page.<br />

<strong>Web Client Status Refresh Rate (minutes)</strong><br />

You can change the interval at which the server is polled to refresh the Web client status. Increasing the refresh interval decreases the amount of Web traffic. This may be desirable if there are many clients, but it also reduces the timeliness of the workflow state information that users see.<br />
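The check described for the Use Client-side Cache option, whether a block is already on the client and still current, can be sketched as follows. The function names and version strings are invented for illustration; they are not part of the product:

```python
def download_from_server(block_id):
    # Stand-in for the real network fetch.
    return f"data for {block_id}"


def fetch_block(block_id, server_version, cache):
    """Return a data block, downloading only when the cached copy is
    missing or stale. A sketch of the client-side cache idea, not the
    product's implementation."""
    cached = cache.get(block_id)
    if cached and cached["version"] == server_version:
        return cached["data"], "cache hit"  # no download needed
    data = download_from_server(block_id)
    cache[block_id] = {"version": server_version, "data": data}
    return data, "downloaded"


cache = {}
fetch_block("b1", "v1", cache)             # first request downloads
data, how = fetch_block("b1", "v1", cache)  # repeat request hits the cache
```

Only when the server-side version moves on is the block downloaded again, which is why the setting saves bandwidth without the user noticing.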


<strong>Record Audit Annotations</strong><br />

This option records actions taken in the Web client, such as typing, copying and pasting data, and importing files. In addition, system link history is stored as an annotation on the cube that was targeted by the link. When a link is run, an annotation is created in the open e.List item. If the link is rerun, the same annotation is updated. A history dialog box shows all history related to the links that apply to the open e.List items.<br />

If enabled, users can view audit annotations for any cells for which they have at least view access.<br />

This option can greatly increase the size of the application datastore, and should be used with care. It is Off by default.<br />

<strong>Annotations Import Threshold</strong><br />

If a user imports a text file into the Web grid, this option determines whether each row imported in a single transaction is recorded separately, or all rows imported are recorded in a single entry. If the threshold is set to 0, all rows imported in a single transaction are recorded as a single entry.<br />

<strong>Annotations Paste Threshold</strong><br />

If a user copies and pastes data into the Web grid, this option determines whether each row pasted in a single transaction is recorded separately, or all rows pasted are recorded in a single entry. If the threshold is set to 0, all rows pasted in a single transaction are recorded as a single entry.<br />
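One plausible reading of the two threshold options is sketched below. The guide states only that a threshold of 0 always collapses the transaction to a single entry; the comparison for non-zero thresholds is our assumption and may differ from the product's actual behavior:

```python
def audit_entries(rows_in_transaction, threshold):
    """Decide how many audit annotations one import or paste transaction
    produces. Assumption: transactions larger than the threshold collapse
    to one combined entry; only the 0-means-single-entry rule is
    documented."""
    if threshold == 0 or rows_in_transaction > threshold:
        return 1  # one combined entry for the whole transaction
    return rows_in_transaction  # one entry per row


n = audit_entries(rows_in_transaction=50, threshold=100)
```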

<strong>Display Audit Annotations in Web Client</strong><br />

Hides or shows audit annotations in the Web Client.<br />

<strong>Allow Bouncing Example</strong><br />

For example, if the Allow bouncing option is disabled, user 1 with edit rights to an e.List item<br />

cannot edit while user 2 is editing. After user 2 stops editing, user 1 can edit after refreshing the<br />

Web client status.<br />

If the Web client status refresh rate is set to 0, user 1 can edit only if they close the grid and reopen<br />

it. If the refresh rate is 0, the status is refreshed only when the user takes an action that connects<br />

to the server. The default interval is five minutes, and the minimum interval is one minute. More<br />

frequent polling has a negative effect on Web traffic over the network.<br />


Automatic polling also refreshes the passport. If the Web client is inactive for half the passport<br />

duration value, automatic polling ceases. For example, if default values are used, then after 1800<br />


seconds of inactivity the Web client polling will cease and after an additional 3600 seconds, the<br />

passport expires. Any further Web client activity by the user requires re-authentication.<br />
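The passport arithmetic above can be worked out as in the following sketch. The 1800 and 3600 second figures are the defaults described in the text; the function itself is illustrative, not part of the product:

```python
def passport_timeline(passport_duration_s=3600):
    """Illustrative arithmetic for Web client polling and passport expiry.

    Automatic polling stops after the client is inactive for half the
    passport duration; the passport then expires one full duration later.
    """
    polling_stops_after = passport_duration_s // 2
    passport_expires_after = polling_stops_after + passport_duration_s
    return polling_stops_after, passport_expires_after


# With the default 3600-second passport: polling stops after 1800 seconds
# of inactivity, and the passport expires 3600 seconds after that.
stops, expires = passport_timeline()
```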

If you must ensure that the passport duration value is respected exactly, for example, to comply with financial regulations, set the refresh rate to 0 (zero). The passport then expires after the configured duration of inactivity.<br />

Create Planner-Only Cubes<br />

Planner-only cubes are cubes that are seen only by the planner, or a reviewer with reviewer edit<br />

rights to an e.List item. The data in a planner-only cube is fed into cubes that are seen by both<br />

planners and reviewers.<br />

You make a cube planner-only to reduce the amount of data a reviewer must view, and reduce the<br />

amount of data that is aggregated, speeding up the aggregation process.<br />

Steps<br />

1. In the appropriate application, click Development, Web-Client Configuration, and then Planner<br />

Only Cubes.<br />

2. Select the cubes you want to make planner-only.<br />

3. Click the Save button on the toolbar.<br />

Creating General Messages and Cube Instructions<br />

78 <strong>Contributor</strong><br />

You can create instructions that are shown to users in the <strong>Contributor</strong> application. An instruction<br />

is text that appears when a user clicks Instructions on the <strong>Contributor</strong> application window.<br />

Cube instructions can be different for every cube. You can create a brief one-line instruction that<br />

appears below the cube tab name in the Web browser. You can also create a detailed HTML<br />

formatted set of instructions, which the user views by clicking Help.<br />

Use the full path to reference an image, so that it appears to all users.<br />
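For example, detailed cube help might be entered as a small HTML fragment like the following. The server name, paths, and file names are placeholders; note the full path to the image, as recommended above:

```html
<h3>Travel Budget Cube</h3>
<p>Enter monthly travel costs in the white cells only.</p>
<!-- Use a full path so the image resolves for every user -->
<img src="http://webserver/images/travel_help.gif" alt="Sample entry" />
<a href="http://webserver/help/travel.html">More detail</a>
```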

Steps<br />

1. In the appropriate application, click Development, Web-Client Configuration, and then Con-<br />

tributor Help Text.<br />

2. In the Enter Instructions text box, type instructions, using HTML tags if required.<br />

You can create links to other Web pages. For more information, see "Creating Hypertext Links" (p. 377) and "Customizing IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong> Help" (p. 375).<br />

3. To enter help for each cube, click the cube name tab, and enter plain text help in the Simple<br />

cube help or detailed plain text or HTML help in the Detailed cube help window.<br />

Simple cube help must be text only, with no additional formatting. We suggest that you use<br />

fewer than 100 characters to ensure that the text is visible.<br />

Detailed cube help can be either HTML formatted text, or plain text, up to 3000 characters.<br />

For more information, see "Customizing IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong> Help" (p. 375).


4. Click Save.<br />

Changes are applied to the production application after running the Go to Production process.<br />

Maintaining the <strong>Contributor</strong> Application<br />

In the Application Maintenance window, you can configure application properties. In particular,<br />

you can<br />

● save application XML for diagnostic purposes (p. 79)<br />

● show information about the application, such as the number of cubes, and the number of D-Links (p. 79)<br />

● set Admin Options (p. 79)<br />

● select dimensions for publishing (p. 82)<br />

● set Go to Production Options (p. 82)<br />

Save Application XML for Support<br />

Details about the <strong>Contributor</strong> application are held in XML format. If you have a problem with<br />

your <strong>Contributor</strong> application, you can save the XML in the current state, and send the file to your<br />

technical support.<br />

You can automate uploading the Development application XML by using a macro; see "Upload a Development Model" (p. 210) for more information.<br />

Steps<br />

1. In the appropriate application, click Development, Application Maintenance, and then<br />

Application XML.<br />

2. Click the Application Type: Development or Production.<br />

3. Enter the XML file name.<br />

4. Click Save XML to File.<br />

5. If the file location changed, click Save.<br />

View Application Details<br />

The Application Details window provides information about the number of items in the application. For more information on this window, see "Model Details" (p. 68).<br />

Admin Options<br />

You can configure import and publish options on the Development, Application Maintenance, Admin Options window.<br />

These options should be configured only by Database Administrators and are available only to users with DBA rights. They can also be set directly in the datastore.<br />




You do not need to run the Go to Production process for these options to apply; they are applied as soon as you save.<br />

<strong>Datastore Version Number</strong><br />

For information only.<br />

<strong>Import Block Size</strong><br />

The number of rows that are passed to the calculation engine at a time during import. The default is -1, which means all rows are passed at once.<br />

<strong>Import Location</strong><br />

This is a temporary file location. Files are not deleted, but they may be overwritten. When you import files, they are copied to this location on the server.<br />

<strong>Import Options</strong><br />

You can specify parameters for BCP (the bulk copy utility for SQL Server applications) or SQL Loader command line parameters (the import tool for Oracle applications).<br />

A BCP example is:<br />

-B 10000<br />

This parameter sets import text files to be uploaded in batch sizes of 10,000. Other possible parameters are [-h "load hints"] and [-a packet size].<br />

A SQL Loader example is:<br />

DIRECT=TRUE<br />

This parameter affects the number of lines that are uploaded and may speed up the import process considerably, but it may require a lot of memory.<br />

When importing data, ensure that the database code page parameter reflects the underlying data being imported. For example, when importing Western European language data, use the Windows code page for Western European languages, 1252. The parameter to use is -C=-C1252. For non-Western European data, verify with your specific database documentation what code page to use to ensure that it imports correctly.<br />
code page to use to ensure that it imports correctly.


<strong>Publish Options</strong><br />

If a foreign locale is used, the CODE PAGE parameter can be set here. You can also set BCP options here.<br />

You can change the default publish options by changing the resource file located in \bin\. Each type of database has a different resource file:<br />

● DB2 - epEAdminDB2v7Resources.xml<br />

● SQL Server - epEAdminSQL7Resources.xml<br />

● Oracle - epEAdminORA8Resources.xml<br />

These defaults are applied only if disable or manage indexing optimization is selected.<br />

<strong>Optimize Publish Performance</strong><br />

You can increase publish performance by managing or dropping indexes during the publish process. Manage indexing drops the index and then recreates it when the publish is complete. Disable indexing drops the index during the publish, and enable indexing keeps the indexes intact. If you select to disable or manage indexing, Publish Options are optimized to improve publish performance.<br />

<strong>Generate Scripts</strong><br />

Set this option to Yes to generate a script when any actions are performed that require DDL commands to be run in the datastore, such as publishing data and synchronizing with Analyst.<br />

<strong>Table-only Publish Post-GTP</strong><br />

Set this option to Yes if you want a full publish to occur after a model change. The Reporting job detects whether the model changes are incompatible with the publish schema or link definition and performs a full publish to correct the incompatibilities. If this option is set to No and errors are detected, incremental publish is disabled.<br />

<strong>Act as System Link Source</strong><br />

Set this option to Yes if you want to allow the use of this application as a source for a System Link.<br />

<strong>Display Warning Message on Zero Data</strong><br />

Setting this option to Yes displays a warning message if you select the Zero Data option when importing data; see "Steps to Prepare the Import File" (p. 177).<br />

<strong>Base Language</strong><br />

This option determines the language in which the <strong>Contributor</strong> application is displayed if the user has not specified a preference in IBM Cognos Connection. For information about translating applications, see "Translating Applications into Different Languages" (p. 185).<br />




<strong>Scripts Creation Path</strong><br />

This option sets the default location for script creation on the <strong>Contributor</strong> <strong>Administration</strong> server.<br />

Select Dimensions for Publish<br />

You can select the dimension to use when linking to another resource, such as Analyst. Because<br />

fewer records are published, the process is quicker.<br />

Dimensions for Publish apply only for the View publish layout, and not the Table-only publish<br />

layout. Dimensions for Publish in the Table-only layout are selected when you select cubes, or a<br />

default dimension is used.<br />

If a dimension is selected, that dimension is expanded while publishing to the datastore to incorporate one column for each dimension item. For example, if a dimension named Months is selected, twelve columns are published, one for each month, rather than one generic Months column. This significantly reduces the number of rows that are published and enables differing data types per column.<br />

By default, None is selected. This means that none of the dimensions are expanded while publishing<br />

and only one column exists for each dimension.<br />

If you use data dimensions for publishing, select them before you run the Go to Production process.<br />

You can then publish the production version of the application.<br />
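The effect of selecting a data dimension can be pictured as a pivot from generic rows to one row with a column per dimension item. This is an illustrative sketch, not the product's publish code; the row layout is invented:

```python
from collections import defaultdict


def expand_dimension(rows, dim_key="month", value_key="value"):
    """Pivot a generic (item, month, value) row stream into one row per
    item with a separate column per month, mirroring the effect of a
    selected data dimension at publish time."""
    pivoted = defaultdict(dict)
    for row in rows:
        # Everything except the expanded dimension and its value
        # identifies the output row.
        keys = tuple(sorted((k, v) for k, v in row.items()
                            if k not in (dim_key, value_key)))
        pivoted[keys][row[dim_key]] = row[value_key]
    return [dict(keys) | cols for keys, cols in pivoted.items()]


rows = [
    {"account": "Travel", "month": "Jan", "value": 100},
    {"account": "Travel", "month": "Feb", "value": 120},
]
# Two generic rows collapse into a single row with a Jan and a Feb column.
result = expand_dimension(rows)
```

Fewer, wider rows are exactly why publishing with a data dimension is quicker.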

Steps<br />

1. In the appropriate application, click Development, Application Maintenance, and then<br />

Dimensions for Publish.<br />

2. Click a tab to select a cube.<br />

3. Click Select Dimension and click the dimension name to use as a data dimension.<br />

4. Click Save.<br />

Tip: Click Preview to view the data columns that will be published, either with a data dimension<br />

selected or without.<br />

Set Go to Production Options<br />


You can set production options prior to producing the application from the Go To Production<br />

branch of the <strong>Administration</strong> Tree, under Development, Application Maintenance in the appropriate<br />

application.


<strong>Prevent Client-side Reconciliation</strong><br />

This option prevents client-side reconciliation. Typically, it is used to prevent reconciliation on client machines without much RAM. However, we do not recommend it.<br />

When a production application is created, an online job runs that updates the data blocks sequentially using the reconciliation process; see "Reconciliation" (p. 54). Typically this happens on the job server. However, if an end user requests an updated model before the reconciliation process takes place, they are warned that the model is out of date and client-side reconciliation occurs. If the Prevent Client Side Reconciliation option is selected, a message appears and users cannot access their e.List items until their e.List item is reconciled.<br />

If a user is editing either offline or online and this option is selected, when Go to Production is run, they lose their changes. They can save the data by right-clicking and saving to file.<br />

<strong>Copy Development e.List Item Publish Setting to Production Application</strong><br />

This option enables you to specify whether the publish setting in the e.List window should overwrite the settings in the Publish view layout e.List window. If you import an e.List file with publish settings, or you edit the publish settings in the e.List window, this option is automatically selected, and any publish settings you made are carried over to the production application, overwriting any settings made in the production application. Clear this option if you do not want to overwrite the settings in the production application.<br />

This option is applied only if changes were made to the <strong>Contributor</strong> application since the last time Go to Production was run.<br />

<strong>Use the classic Contributor client</strong><br />

This option enables you to use the application with the classic <strong>Contributor</strong> client.<br />

If you switch an application from the Web client to the classic client, any users who are working offline will not be able to bring their data back online. The application will not be available unless the classic client is installed.<br />

<strong>Planning Package Settings</strong><br />

When you set Go To Production Options, you must name the planning package. Optionally, you can include a screen tip and a description for the package.<br />

Application access is restricted by the e.List. By default, when you create a package, Overwrite the package access rights at the next Go To Production is selected and the package access rights are based on the e.List. If, in IBM Cognos Connection, you make changes manually to the package access rights and want your modifications to remain after the next Go to Production, you must clear this check box. For information about the Go to Production process, see "The Go to Production Process" (p. 243).<br />




Datastore Options<br />


You set the datastore backup location and perform ad-hoc datastore backups in the Datastore<br />

Maintenance branch of the <strong>Administration</strong> Tree, under Development, Datastore Options in the<br />

appropriate application. You can also view information about datastore objects, such as datastore<br />

tables that are associated with cubes in the application, and correct translation problems.<br />

<strong>Datastore Files Location</strong><br />

Location of datastore backup files:<br />

Datastore backup backs up the entire application datastore, including the production application, immediately. It does not back up the publish datastore or the CM datastore.<br />

If you are using SQL Server, you can browse for a location. Oracle backups can only be made to a location on the administration server, or to machines with access to the administration server. For DB2 UDB applications, we recommend that you back up manually. Refer to your provider documentation.<br />

Restoring the application means that any contributions entered since the backup was made will be lost. It is best practice to make backups when there is unlikely to be much activity, and you can stop the Web application from running while the backup is being made.<br />

Important: Depending on your organization's datastore policy, you may not be able to perform some datastore maintenance functions. If you do not have DBA permissions, contact your DBA.<br />

Location for publish container files:<br />

Set the location for datastore container files created during publish.<br />

<strong>Datastore Names</strong><br />

This table displays Model Objects. In this instance, Model Objects are cubes and their associated datastore tables.<br />

Datastore Object Name lists the import datastore table names. If you click Display row counts, the number of rows in each datastore table is displayed, enabling you to see whether, for example, there is any data in the export table without having to look in the datastore manually. This may take a few minutes to appear.<br />

Import datastore tables are prefixed with im, and import errors are prefixed with ie.<br />
with ie.


<strong>Translation Maintenance</strong><br />

When a translation is created, a row is created in the Language table in the datastore, and some information about the translation is added to the model XML. If the information in the model XML and the information in the database get out of step, you cannot run Go to Production. This may happen if you have a problem in the <strong>Administration</strong> Console, or network problems.<br />

The Datastore language table box lists the rows that exist in the Language table in the application datastore. The Model language table box lists the languages that exist in the model XML. If the languages listed in the two boxes are different, click Synchronize.<br />

<strong>Tablespace</strong><br />

The Tablespace window displays the tablespace options that were chosen during application creation.<br />

Tablespace Options:<br />

● Data<br />

● Index<br />

● Blobs<br />

It also displays the temporary tablespace for the current <strong>Contributor</strong> application. This window is only visible if the application runs on Oracle or DB2 UDB.<br />
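The Synchronize check for translations amounts to a simple set comparison between the two language lists. The sketch below is illustrative only and is not the product's implementation:

```python
def languages_out_of_step(datastore_langs, model_langs):
    """Return the languages present in one store but not the other.

    Mirrors the Translation Maintenance check: if either result set is
    non-empty, the model XML and the Language table are out of step and
    Go to Production is blocked until they are synchronized.
    """
    datastore_only = set(datastore_langs) - set(model_langs)
    model_only = set(model_langs) - set(datastore_langs)
    return datastore_only, model_only


# "de" exists only in the model XML, so Synchronize is required.
ds_only, model_only = languages_out_of_step(["en", "fr"], ["en", "fr", "de"])
```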





Chapter 6: The <strong>Contributor</strong> Web Client Application<br />

There are two versions of the <strong>Contributor</strong> Web Client that you can use: the <strong>Contributor</strong> Web Client and the classic <strong>Contributor</strong> Web Client. Both clients allow users to contribute to plans and manage submission and review workflow. However, the <strong>Contributor</strong> Web Client is easier to use while maintaining all of the functionality of the classic client. By default, all applications use the <strong>Contributor</strong> Web Client; however, you can specify the classic <strong>Contributor</strong> Web Client for an application by selecting the Use the classic <strong>Contributor</strong> Client option in "Set Go to Production Options" (p. 82).<br />

For more information about the <strong>Contributor</strong> Web Client, see the IBM Cognos 8 <strong>Planning</strong> - Con-<br />

tributor Web Client User <strong>Guide</strong>. For more information about the classic client, see the IBM Cognos<br />

8 <strong>Planning</strong> - <strong>Contributor</strong> Browser User <strong>Guide</strong> and the IBM Cognos 8 <strong>Planning</strong> <strong>Contributor</strong> Work<br />

Offline <strong>Guide</strong>.<br />

To make a <strong>Contributor</strong> application available to users, you must set up an IBM Cognos 8 Web site; see the IBM Cognos 8 <strong>Planning</strong> Installation and Configuration <strong>Guide</strong>. To make changes to application settings such as cube order and grid options, see "Configuring the <strong>Contributor</strong> Application" (p. 70).<br />

Users can access <strong>Contributor</strong> via the Web or using <strong>Contributor</strong> for Excel. See the <strong>Contributor</strong> for<br />

Microsoft Excel ® Installation <strong>Guide</strong> for more information.<br />

The <strong>Contributor</strong> Web Site<br />

When users log on to a <strong>Contributor</strong> application, they see a graphical overview of all the areas they are responsible for, and the status of the data.<br />

The <strong>Contributor</strong> interface has two panes. A banner across the top of the browser provides access to User Instructions and Application Help.<br />

The Tree<br />

The tree on the left side of the page shows the names of the areas that users are responsible for contributing to (Contributions) and the areas that they are responsible for reviewing (Reviews). Both appear in a hierarchical form. Depending on their rights, users may see either one of these branches, or both. When the user clicks an item in the tree, a table with the details for the item appears on the right side of the window.<br />

Each item in the tree has an icon that indicates the current workflow state. For more information, see "Workflow State Definition" (p. 295).<br />

The Table<br />

The table gives information such as the workflow state of the item, the current owner, the reviewer, and when it last changed.<br />

When users open a contribution, they can view or enter data depending on their rights and the state<br />

of the data.<br />




Data that users can edit has a white background. Read-only data has a pale gray background.<br />

Data can be edited only if the icon indicates that it has a workflow state of Not started or Work<br />

in progress.<br />

Users can annotate data (p. 289).<br />

Users can also reject contributions by clicking a reject button in the table.<br />

If <strong>Contributor</strong> for Excel is installed, users can open their e.List items in Excel from the <strong>Contributor</strong><br />

Web application.<br />

Set Web Site Language<br />

You can set the language of the <strong>Contributor</strong> web site by assigning a translation to a user, group,<br />

or role or in user defined properties in IBM Cognos Configuration. For more information on creating<br />

and assigning a translation, see "Translating Applications into Different Languages" (p. 185). To<br />

set IBM Cognos Connection language preference, see the IBM Cognos 8 <strong>Administration</strong> and<br />

Security <strong>Guide</strong>.<br />

If multiple languages have been assigned to a user, language precedence for the <strong>Contributor</strong> web site is as follows:<br />

● translation assigned to a user, group, or role<br />

● translation assigned by user specified preference for Product Language defined in IBM Cognos<br />

Configuration<br />

● language selected in user preferences in IBM Cognos Connection<br />
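The precedence order above amounts to a first-match fallback. The following sketch illustrates the idea; the argument names are invented and are not product settings:

```python
def effective_language(translation_for_user=None,
                       product_language_pref=None,
                       connection_language=None):
    """Pick the Contributor web site language using the documented
    precedence: user/group/role translation first, then the Product
    Language preference from IBM Cognos Configuration, then the IBM
    Cognos Connection user preference. Illustrative only."""
    for candidate in (translation_for_user,
                      product_language_pref,
                      connection_language):
        if candidate:
            return candidate
    return "default"


# A translation assigned to the user's role wins over portal preferences.
lang = effective_language(translation_for_user="fr",
                          connection_language="en")
```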

Access <strong>Contributor</strong> Applications<br />


You access <strong>Contributor</strong> applications from the IBM Cognos Connection portal.<br />

Steps<br />

1. Type the following URL in the address bar of the browser:<br />

http://server_name/cognos8<br />

2. In the upper-right corner, click Launch, <strong>Contributor</strong>. If you have access to more than one<br />

package, click the package that you require.<br />

If users have access to just one application, they are taken directly to that application. If they<br />

have access to more than one, they are presented with a list to choose from.<br />

3. To return to the IBM Cognos Connection portal, in the upper-right corner, click Launch, IBM<br />

Cognos Connection.<br />

Tip: You can copy the URL of a <strong>Contributor</strong> cell to the clipboard ready to be used by other<br />

applications. This enables you to link directly to the cell from another application. In order for<br />

the link to work, the <strong>Contributor</strong> application must be available on the computer and the user<br />

must have appropriate rights.


Configure Classic <strong>Contributor</strong> Web Client Security Settings<br />

The classic <strong>Contributor</strong> Web Client uses signed and scripted ActiveX controls. Ensure that each<br />

user’s security settings for the Local Intranet zone are set to medium to accept these controls.<br />

Steps to Allow ActiveX Controls<br />

1. In Internet Explorer, select Tools, Internet Options, Security, Custom Level.<br />

2. Under Reset custom settings, select Medium from the list, and then click Reset.<br />

3. Click OK.<br />

The <strong>Contributor</strong> <strong>Administration</strong> Console uses Microsoft Internet Explorer security settings to<br />

communicate with the Web server. This may cause users to be prompted for multiple logons. To<br />

prevent multiple logon prompts, ensure that each user’s browser security settings are set to one of<br />

the automatic logon options.<br />

Steps to Prevent Multiple Logon Prompts<br />

1. In Internet Explorer, select Tools, Internet Options, Security, Local intranet, Custom Level.<br />

2. Under User Authentication, select Automatic logon with current username and password, or<br />

Automatic logon only in Intranet zone.<br />

3. Click OK.<br />

Linking to Earlier Versions of <strong>Contributor</strong> Applications<br />

If you need to link to earlier versions of <strong>Contributor</strong> Applications, the Web sites for these applications must be configured as described in the same version of the <strong>Contributor</strong> <strong>Administration</strong> <strong>Guide</strong>.<br />

Working Offline<br />

Working offline means that users can continue to work in situations when they are not connected to the network.<br />

Users can work offline only if the Prevent off-line working option is cleared (p. 74), and if they are<br />

a user, or belong to the group or role associated with offline users. Offline working is possible only<br />

when cache on the client computer can be used. When an e.List item is taken offline, the data is<br />

stored in the offline store on the user's computer, see "The Offline Store" (p. 90). When the user<br />

wants to do some work, they open up the offline application and work and save data as normal.<br />

The application appears to work just like the grid in the Web Client, except there is no submit<br />

button.<br />

Chapter 6: The <strong>Contributor</strong> Web Client Application<br />

Working offline should not be the standard working practice because reviewers cannot view the<br />

current data and planners cannot receive updates when new data is imported. Ideally, users should<br />

bring offline data online as soon as possible to keep the data changes visible.<br />

A user can work offline, save their data, and end their session. If a different user then logs on to the same computer, the new user cannot see the first user's data.<br />


Only a single e.List item or a standard multi-e.List item view can be worked with offline.<br />

Working offline does not give access to everything in the cache and does not provide private e.List<br />

item save.<br />

If there are multiple users with edit rights assigned to the e.List item, another user with appropriate rights can edit the e.List item. The user who checked out the e.List item cannot check in the changes.<br />

If the administrator makes more than one set of changes to the application before the user attempts to check in the edited e.List item, the user cannot check in the changes, but they can save the changes to a .csv file. When running the Go to Production process, the administrator is warned which users will be terminated if they proceed. If the administrator made just one set of changes, the user can check in the changes successfully.<br />

When you bring offline data online, numbers that were changed and saved in the offline application show as changed in the online grid.<br />

Any annotations become read-only, and data changes color to indicate that it was saved. Also, attached documents are not available when working offline.<br />

If you take an e.List item offline, you can continue working online on other e.List items.<br />

The Offline Store<br />

When an e.List item is taken offline, the data is stored on the user's computer in files called offlinestore.at (allocation table) and offlinestore.store (object store).<br />

By default, the client computer cache is cleared after twelve days of inactivity. You can change this<br />

using IBM Cognos Configuration.<br />

Independent Web Applications<br />


You can create an independent Web application to customize an individual Web site for an<br />

application. For example, you may want to use your own images.<br />

Steps<br />

1. Create a new directory and copy the webcontent files to a sub-directory of the new directory,<br />

for example:<br />

\\server\customweb\webcontent\<br />

2. Set up the virtual directory alias to point to the new parent directory.<br />

The virtual directories that you need to create are:<br />

Alias | Location | Permission<br />
cognos8 | c8_location/webcontent | Read<br />
cognos8/cgi-bin | c8_location/cgi-bin | Execute


For more information about configuring the Web Server, see the IBM Cognos 8 <strong>Planning</strong><br />

Installation and Configuration <strong>Guide</strong>.<br />

<strong>Contributor</strong> for Microsoft Excel<br />

Users can use <strong>Contributor</strong> for Excel to view and edit <strong>Contributor</strong> data using Excel, getting the<br />

benefit of Excel formatting and <strong>Contributor</strong> linking functionality. Here are some examples of things<br />

you can do:<br />

● Create bar charts and other graphs from <strong>Contributor</strong> data.<br />

● Create dynamic calculations from <strong>Contributor</strong> data.<br />

● Create a calculation in Excel and link it to a <strong>Contributor</strong> cell.<br />

As you update this calculation, you can choose whether to update the value in the <strong>Contributor</strong><br />

cell.<br />

● Reuse custom calculations and formatting by saving the workbook as a template.<br />

● Resize the worksheet so you can see more or less data on a page.<br />

● Save data as an Excel workbook and work locally without a connection to the network.<br />

To use <strong>Contributor</strong> for Excel, administrators must create a <strong>Contributor</strong> Web site, and client users<br />

must install <strong>Contributor</strong> for Excel on their computers. For more information about design considerations, see "Design Considerations When Using <strong>Contributor</strong> for Excel" (p. 311). For more<br />

information about installation, see <strong>Contributor</strong> for Excel Installation <strong>Guide</strong>.<br />

Note: You can only access <strong>Contributor</strong> for Excel from the Workflow page if you are using Microsoft<br />

Internet Explorer. Regardless of Internet browser, you can access <strong>Contributor</strong> for Excel from within<br />

Excel.<br />



Chapter 7: Managing User Access to Applications<br />

You manage access to <strong>Contributor</strong> applications through e.List and Rights in the <strong>Contributor</strong><br />

<strong>Administration</strong> Console.<br />

The e.List defines the hierarchical structure of an application. It is used to determine who can enter<br />

data, who can submit data, who can read data and so on.<br />

Users are secured by IBM Cognos 8 security (p. 29).<br />

Rights are defined by assigning users, roles, and groups to e.List items, and then by giving each<br />

e.List item, and user, group, or role pairing an access level of Read, Write, Submit, or Review.<br />

The e.List<br />

The e.List is used to determine who can enter data, who can only read data, whom data is hidden from, and so on.<br />

An e.List is a dimension with a hierarchical structure that typically reflects the structure of the<br />

organization. An e.List contains items such as departments in a company, for example, Sales,<br />

Marketing, and Development. Each department may be divided into several teams, for example<br />

Sales may be divided into an Internal Sales and an External Sales team.<br />

Planners are assigned to items in the lowest level in the hierarchy. Reviewers are assigned to items<br />

in the parent levels.<br />

[Figure: the example e.List hierarchy. All Departments is the top-level item, with Operations and Corporate as its children. Below them are the departments Customer Service, Production & Distribution, Procurement, Sales, Human Resources, Finance, Marketing, and IS&T.]<br />

In the example shown, All Departments, Operations, and Corporate are reviewer e.List items. Any<br />

users assigned to these e.List items with rights greater than view are reviewers. Customer Service,<br />

Production & Distribution, Procurement, Sales, Human Resources, Finance, Marketing, and IS&T<br />

are contribution e.List items, and any users assigned to these items with rights higher than view are<br />

planners.<br />

An e.List is created in two steps:<br />

● The dimension that represents the e.List is created in Analyst.<br />

Note: The e.List name cannot have a bracket ([ ]), brace ({ }), semicolon (;), or at sign (@).<br />

● The file containing e.List data is imported into the <strong>Administration</strong> Console as a text file (p. 99).<br />

The following example shows a simple e.List created in Analyst.<br />


All Departments is the parent of Operations and Corporate. The Analyst calculation for All<br />

Departments is<br />

+Operations+Corporate<br />

Operations is the parent of Customer Service and Production and Distribution. The Analyst calculation for Operations is<br />

+{Customer Service}+{Production and Distribution}<br />

The D-List used in Analyst to represent the e.List does not need to reflect the full hierarchy of the<br />

e.List, but it must contain at least a parent and a child, and we recommend that it contains at least<br />

one review item and two children so that weighted averages and priorities can be tested.<br />

There are some circumstances when it helps to use the full e.List in the Analyst model, bearing in<br />

mind that you still must import the e.List into the <strong>Administration</strong> Console as a text file or Excel<br />

Worksheet.<br />

For example:<br />

● If you need to bring <strong>Contributor</strong> data back into Analyst for more analysis.<br />

● When data already present in Analyst is needed in <strong>Contributor</strong>.<br />

● If you are using Analyst simply as a tool for staging and tidying up external data.<br />

If you export data from an Analyst D-Cube with the e.List into a <strong>Contributor</strong> cube, ensure that the e.List item names in Analyst exactly match the <strong>Contributor</strong> e.List item IDs.<br />

The e.List should have a valid hierarchy. It is best practice not to have too many levels in the hierarchy, and to avoid having more than 20 child items assigned to a parent. This improves performance and aggregation speed.
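The fan-out guideline above can be checked mechanically before an e.List file is imported. The following Python sketch is illustrative only (the function name and the pair-based input format are our own); it counts children per parent in a list of (item, parent) pairs and flags parents that exceed the 20-child guidance:

```python
from collections import Counter

def wide_parents(items, limit=20):
    """Return parents that have more than `limit` children.

    `items` is a sequence of (item_name, parent_name) pairs, as in an
    e.List import file. The top item names itself as its parent, so
    self-references are not counted as children.
    """
    counts = Counter(parent for name, parent in items if name != parent)
    return {parent: n for parent, n in counts.items() if n > limit}

# Example: a parent with three children is flagged when the limit is 2.
pairs = [("ALL", "ALL"), ("AMX", "ALL"), ("EAX", "ALL"), ("EUR", "ALL")]
print(wide_parents(pairs, limit=2))  # {'ALL': 3}
```

With the default limit of 20, the same input produces no warnings.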


Multiple Owners of e.List Items<br />

You can assign more than one user, group, or role to an e.List item through the Rights pane in the <strong>Administration</strong> Console. Do this when<br />

● multiple e.List item owners share responsibility for a contribution, that is, they are job sharing<br />
● an e.List item owner wants to delegate responsibility for completing a submission<br />
● a submission needs to be completed by a number of users sequentially<br />
● the usual e.List item owner is absent, so a substitute user needs to make the submission<br />

You may want to consider creating a group or role to contain multiple users, rather than assigning multiple users, groups, or roles to an e.List item. This allows users to be changed in the authentication provider without you having to run the Go to Production process for these changes to be reflected in the <strong>Contributor</strong> application. For more information about users, groups, and roles, see "Users, Groups, and Roles" (p. 31).<br />

There can be multiple owners of review e.List items as well as contribution e.List items.<br />

Any user with edit rights to an e.List item may edit the contribution e.List item when<br />

● the e.List item is in a Not started state (no changes have been made to it)<br />
● the e.List item is in a Work in progress state (changes have been made, but they either have not been submitted to the reviewer, or have been submitted but rejected back to the planner)<br />
● the e.List item has been taken offline for editing by another user, group, or role<br />

Anyone with appropriate rights can take control of an e.List item from another user who may be editing it. If they try to do this, they receive a warning.<br />

Ownership<br />

An owner of an e.List item is a user, group, or role that has rights greater than view. These rights may be directly assigned, or may be inherited.<br />

If more than one user, group, or role is assigned to an e.List item with rights greater than view, the first one in the import file is the initial owner of the e.List item in the <strong>Contributor</strong> application. For more information, see "Reordering Rights" (p. 112).<br />
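The initial-owner rule above (the first assignee in the import file with rights greater than view) can be sketched in a few lines of Python. This is a hedged illustration: the numeric ranking of access levels is our own simplification, not the product's internal encoding.

```python
# Illustrative ranking only: "view" and below never confer ownership.
RIGHTS_RANK = {"none": 0, "view": 1, "write": 2, "submit": 3, "review": 3}

def initial_owner(rights_rows):
    """Return the first assignee whose rights exceed view, or None.

    `rights_rows` preserves import-file order: (assignee, right) pairs
    for a single e.List item.
    """
    for assignee, right in rights_rows:
        if RIGHTS_RANK.get(right.lower(), 0) > RIGHTS_RANK["view"]:
            return assignee
    return None

rows = [("Sales Readers", "view"), ("jsmith", "write"), ("Managers", "review")]
print(initial_owner(rows))  # jsmith
```

Reordering the rows, as described in "Reordering Rights", changes which assignee this function returns.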

Unowned Items<br />

If an e.List item has not been opened for edit, it is unowned. After it is opened for edit, the user,<br />

group, or role that opened it is the owner.<br />

Current Owner<br />

The current owner is shown in the <strong>Contributor</strong> application and is the user, group, or role who is editing, or who last opened, an e.List item for edit. However, after they have opened the e.List item, another user can then choose to edit it, depending on the settings.<br />


Someone can become the current owner by taking ownership of an e.List item from another user.<br />


Note: After subsequent Go to Productions, the current owner is the last user, group, or role to have<br />

edited the e.List item. The current owner is not reset.<br />

Import e.List and Rights<br />


You can automate the import of the e.List and rights. For more information, see "Import e.List<br />

and Rights" (p. 208).<br />

Before importing, you must create import files in the correct formats. Note that the default files<br />

supplied with the samples are examples only. For more information, see "e.List File Formats" (p. 99),<br />

and "Rights File Formats" (p. 110).<br />

If you have an e.List with more than 3000 items, they will not be displayed in a hierarchical format,<br />

but in a flat format. Typically, you get the best results if you show fewer than 1000 e.List items in<br />

a hierarchical format. You can set the Maximum e.List items to display as hierarchy under System<br />

Settings.<br />

Steps<br />

1. Click Development, e.List and Rights and then click either e.List, or Rights.<br />

2. Click the Import button.<br />

If no users, groups, or roles have been imported, you cannot import rights.<br />

3. In the appropriate tab, type the name of the source file, or browse for it.<br />

4. Click Import.<br />

5. If your file contains a header row, click the First row contains column headers box. If you<br />

browse for files, the header row is automatically detected.<br />

6. Click Delete undefined items to delete existing e.List items or rights that are not included in the file that is to be imported. For more information, see "Delete Undefined Items Option" (p. 98).<br />

7. Click Trim leading and trailing whitespace to remove extra spaces at the beginning and end of<br />

text strings on import.<br />

8. Click Quoted strings to remove quotation marks on import.<br />

9. Click Escape Character, and enter a character, if required.<br />

10. Click the File Type.<br />

If you are importing an Excel file, enter the name of the worksheet (within the Excel workbook)<br />

in the box next to the Excel Worksheet option.<br />

You can have e.Lists, and rights in one Excel file, but on separate worksheets. Ensure you name<br />

each worksheet separately in Excel.<br />

11. Click OK and then Save.


File Import Failure Message<br />

If the import of your e.List and rights fails, for example, the structure of the e.List is incorrect, or<br />

the import process itself fails, an error message appears. When errors are reported, some rows of<br />

the file are not imported. You can view the rows that have errors. If the problem is caused by the<br />

structure or contents of the file, correct the file and import again. If the failure was caused by an<br />

application error, try to manually add an item. If this works, try importing the files again.<br />

If the CamObjectName (the name of a user, group, or role) is not found during the import of the rights file, a log file is created with an error message on the row of the CamObjectName that is not found. Amend the import file to ensure that the correct CamObjectName is referenced, or add the missing user to the namespace. This error also occurs if the user is not logged on to the namespace specified in the file; it is reported as an INVALID CAM OBJECT. To ensure success, log on to the namespace using File, Logon As.<br />

If you still have a failure, contact Technical Support. You may be asked to supply a log file; this can be opened through the Tools menu by selecting Show local log file. The file is named <strong>Planning</strong>ErrorLog.csv.<br />

The following table displays some of the error and warning messages that you may receive when<br />

importing the e.List and rights.<br />

Error or Warning | Message | Description<br />
WARNING | Circular reference at row x column y EListItemName | This occurs if e.List item a is the parent of e.List item b and e.List item b is the parent of e.List item a.<br />
WARNING | Duplicate e-mail address at row x column y emailaddress | Duplicate e-mail addresses do not cause technical problems.<br />
ERROR | Duplicate item at row x column y Username | Duplicate items are not allowed. This row will not be imported.<br />
WARNING | Duplicate item caption at row x column y UserCaption | Duplicate captions are not recommended.<br />
ERROR | Duplicate user logon at row x column y Userlogon | This row is not imported.<br />
ERROR | Empty item name at row x column y | e.List item names are mandatory in the e.List import file and the rights import file.<br />
ERROR | Import table empty | No rows are imported. Note that if you import a file with just column headings, it overwrites any existing items and leaves them blank.<br />
ERROR | Invalid cell at row x column y parameter | This occurs if an expected parameter is incorrect, misspelled, or missing.<br />
ERROR | Invalid characters (ASCII control characters not permitted) at row x column y Illegal character z | See "Illegal Characters" (p. 387).<br />
WARNING | Invalid parent name at row x column y | This message appears if an e.List item does not have a valid parent. The e.List item is still imported, but it does not have a parent (and so is not part of the hierarchy).<br />
ERROR | Item name too long (maximum 100 characters) at row x column y | Item names and captions have a limit of 100 characters.<br />
ERROR | Mandatory columns missing columnheadername | A mandatory column is missing.<br />
WARNING | Review depth greater than view depth at row x column y | Review depth cannot be greater than view depth. View depth is increased to review depth.<br />

Search for Items<br />

You can search for items in the e.List and Rights panes in the <strong>Administration</strong> Console. You select<br />

a column to search in, then type the first few characters that you want to find. You cannot do a<br />

wild card search.<br />

For example, if you want to find 001 New York, typing York will not find this string. Instead, type<br />

001 New.<br />

Steps<br />

1. Select the column to search in, for example, Item Display Name.<br />

2. Type the character string you want to search for.<br />

3. Click Find.<br />
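The search behaviour described above matches from the beginning of the column value, with no wild cards. A minimal Python sketch of that behaviour (the function name is ours):

```python
def find_items(values, prefix):
    """Prefix match, as in the e.List and Rights search panes:
    the typed characters must match from the start of the value."""
    return [v for v in values if v.startswith(prefix)]

names = ["001 New York", "002 Boston", "001 Newark"]
print(find_items(names, "001 New"))  # ['001 New York', '001 Newark']
print(find_items(names, "York"))     # [] - no substring or wild-card search
```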

Delete Undefined Items Option<br />


This option deletes any existing items that are not included in import files. Selecting Delete undefined<br />

items when importing e.Lists and rights only has an effect when the e.List and Rights tables are<br />

already populated.<br />

If you import an updated e.List from which e.List items have been removed, any children of those items move to the top level (they are not removed).


If you delete e.List items from the e.List pane of the <strong>Administration</strong> Console, any children of these items are also removed.<br />

You can delete the entire e.List and rights tables by importing files containing only headings and selecting the Delete undefined items check box. However, if you import a file that is completely blank (no headings), you will receive warnings to say that compulsory columns are missing and the e.List or rights remain unchanged.<br />

e.List File Formats<br />

The e.List can be in either Excel worksheet (xls) format, or in text file format, and you can specify the delimiter.<br />

The use of column headings is optional. If you use column headings, the columns can be in any<br />

order. The column heading names must be the same as the column headings listed below. If no<br />

column headings are used, the column order shown below must be used.<br />

Column heading in import file | Display Name | Status<br />
EListItemName (p. 100) | Item ID | Compulsory<br />
EListItemParentName (p. 100) | Not applicable | Compulsory<br />
EListItemCaption (p. 100) | Item Display Name | Compulsory, but may be left blank<br />
EListItemOrder (p. 100) | Not applicable | Optional<br />
EListItemViewDepth (p. 100) | View Depth | Optional<br />
EListItemReviewDepth (p. 101) | Review Depth | Optional<br />
EListItemIsPublished (p. 101) | Publish | Optional<br />

Example e.List File<br />

The following table provides an example of an e.List file.<br />

EListItemName | EListItemParentName | EListItemCaption | EListItemOrder | EListItemViewDepth | EListItemReviewDepth | EListItemIsPublished<br />
ALL | ALL | All Sales Regions | 1 | -1 | -1 | YES<br />
AMX | ALL | Americas | 2 | 1 | 1 | YES<br />
EAX | ALL | Asia Pacific | 3 | 1 | 1 | YES<br />
EUR | ALL | Europe | 4 | 1 | 1 | YES<br />
CEU | EUR | Central Europe | 5 | 0 | 0 | YES<br />
NEU | EUR | Northern Europe | 6 | 0 | 0 | YES<br />
SEU | EUR | Southern Europe | 7 | 0 | 0 | YES<br />

EListItemName<br />

A unique identifier for each e.List item. This is an editable box and is case sensitive.<br />

The following constraints apply:<br />

● Must not be empty.<br />
● Must contain no more than 100 characters.<br />
● Must not contain control characters, that is, below ASCII code 32, see "Illegal Characters" (p. 387).<br />
● Must be unique. Although the name is case sensitive, names that differ only in case are still treated as duplicates - the characters themselves must be unique.<br />

This name is used when publishing data.<br />

EListItemParentName<br />

This column identifies which e.List item is the parent by referring to the e.List item name. The top reviewer item refers to its own e.List item name as the parent name. This is case sensitive.<br />

EListItemCaption<br />

The name of the e.List item as it appears in the <strong>Contributor</strong> application.<br />

The following constraints apply:<br />

● May be empty (but will give a warning).<br />
● Must not be more than 100 characters long (will be truncated, with a warning).<br />
● Must not contain control characters (below ASCII code 32).<br />
● May contain duplicates (will give a warning).<br />

EListItemOrder<br />

This is the order in which the e.List items appear in the application. This is optional; the default is the order in the file.<br />

EListItemViewDepth<br />

The View depth column indicates how far down a hierarchy a user can view the submissions of planners and reviewers.<br />

The following values may be used:<br />


● -1 indicates all descendant hierarchy levels.<br />

● 0 indicates no hierarchy levels.<br />

● 1...n where n is a whole number.<br />

Defaults to 1.<br />

EListItemReviewDepth<br />

The Review depth column indicates how far down a hierarchy a reviewer can reject, annotate and<br />

edit (if they have appropriate rights) contributions and reject and annotate submissions of reviewers.<br />

The following values may be used:<br />

● -1 indicates all descendant hierarchy levels.<br />

● 0 indicates no hierarchy levels.<br />

● 1...n where n is a whole number.<br />

Defaults to 1.<br />
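The depth values above (-1 for all descendant levels, 0 for none, n for n levels down) can be illustrated with a small Python sketch. This is our own illustration of the semantics, not product code; the function name and tree representation are assumptions.

```python
def visible_items(children, start, depth):
    """Collect descendants of `start` down to `depth` levels.

    depth -1 means all descendant levels, 0 means none, and n means n
    levels down, matching the EListItemViewDepth / EListItemReviewDepth
    values. `children` maps an e.List item to its child items.
    """
    if depth == 0:
        return []
    result = []
    for child in children.get(start, []):
        result.append(child)
        next_depth = -1 if depth == -1 else depth - 1
        result.extend(visible_items(children, child, next_depth))
    return result

tree = {"ALL": ["EUR"], "EUR": ["CEU", "NEU"]}
print(visible_items(tree, "ALL", 1))   # ['EUR']
print(visible_items(tree, "ALL", -1))  # ['EUR', 'CEU', 'NEU']
print(visible_items(tree, "ALL", 0))   # []
```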

EListItemIsPublished<br />

This indicates whether an e.List Item will be published. Possible values are Yes, Y, No, N (not case<br />

sensitive).<br />

Defaults to No.<br />
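Several of the file constraints above (mandatory columns, the 100-character limit, control characters, and case-insensitive uniqueness) can be pre-checked before import. The following Python sketch uses only rules stated in this chapter; the function name and error wording are our own, and it covers only a subset of the checks the Administration Console performs.

```python
import csv
import io

MANDATORY = ["EListItemName", "EListItemParentName", "EListItemCaption"]

def validate_elist(text, delimiter="\t"):
    """Return a list of problem strings for a delimited e.List file."""
    rows = list(csv.DictReader(io.StringIO(text), delimiter=delimiter))
    problems = []
    if rows:
        missing = [c for c in MANDATORY if c not in rows[0]]
        if missing:
            problems.append("Mandatory columns missing: %s" % ", ".join(missing))
            return problems
    seen = set()
    for i, row in enumerate(rows, start=2):  # row 1 is the header
        name = row["EListItemName"]
        if not name:
            problems.append("Empty item name at row %d" % i)
        if len(name) > 100:
            problems.append("Item name too long at row %d" % i)
        if any(ord(ch) < 32 for ch in name):
            problems.append("Invalid characters at row %d" % i)
        if name.lower() in seen:  # names differing only in case are duplicates
            problems.append("Duplicate item at row %d" % i)
        seen.add(name.lower())
    return problems

sample = (
    "EListItemName\tEListItemParentName\tEListItemCaption\n"
    "ALL\tALL\tAll Sales Regions\n"
    "all\tALL\tDuplicate\n"
)
print(validate_elist(sample))  # ['Duplicate item at row 3']
```

Running such a check before import can avoid the error messages listed earlier in this chapter.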

Export the e.List and Rights<br />

You can export the e.List and rights as tab separated files, with or without column headings. This<br />

enables you to update them in an external system such as Excel.<br />

Steps<br />

1. In the <strong>Administration</strong> Console, click the application name, Development, e.List and Rights and<br />

then either e.List or Rights.<br />

2. Click Export.<br />

3. Enter or browse for the filename and location under e.List or Rights, for example: c:\temp\export_rights.txt.<br />

4. Select the Include column headings box if you want the column headings exported.<br />

5. Select the Export box for each file you want to export.<br />

6. Click OK.<br />
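Because exported files are tab separated and the top reviewer item names itself as its own parent, an exported e.List can be rebuilt into a parent-to-children map with a few lines of Python. This is a sketch under those documented assumptions; the function name is ours.

```python
import csv
import io

def build_tree(text):
    """Map each parent to its children from a tab-separated e.List export.

    The top reviewer item lists itself as its own parent, so it is
    treated as a root rather than as its own child.
    """
    children = {}
    for row in csv.DictReader(io.StringIO(text), delimiter="\t"):
        name, parent = row["EListItemName"], row["EListItemParentName"]
        if name != parent:
            children.setdefault(parent, []).append(name)
    return children

export = (
    "EListItemName\tEListItemParentName\n"
    "ALL\tALL\n"
    "EUR\tALL\n"
    "CEU\tEUR\n"
)
print(build_tree(export))  # {'ALL': ['EUR'], 'EUR': ['CEU']}
```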

Managing the e.List<br />

The e.List table is populated with e.List data when you import an e.List.<br />

Changes to the e.List are applied to the production application when you run the Go to Production process (p. 243).<br />

To manage the e.List you can<br />

● Insert new e.List items (p. 102).<br />
● Manually reorder e.List items in the hierarchy (p. 103).<br />
● Define whether data will be published to a datastore or not (p. 102).<br />
● Delete an e.List item (p. 104).<br />
● Preview an e.List item (p. 104).<br />
● Find items in an e.List (p. 98).<br />

Insert New e.List items<br />

We recommend that you modify the file used to import the e.List items and import the changed file when you add new e.List items.<br />

When this is not possible, you can manually insert new e.List items as described in the following steps:<br />

Steps<br />

1. In the application tree, click Development, e.List and Rights and then click e.List.<br />

2. Expand the parent e.List item and click in the e.List table where you want to insert the new<br />

item. The new item appears above the item you clicked.<br />

3. Click Insert and enter the information as follows:<br />

Item Display Name: The name of the e.List item (typically a business location) as it is displayed to the user in the Web browser.<br />

Item ID: The name of the e.List item when imported from an external resource such as a general ledger system or datastore. This may be a code or a name independent of the e.List item name. You can edit the item ID, but it must be unique within the application. This name is used when publishing data.<br />

Publish: Indicates whether this e.List item is to be published to a datastore or not. This only applies to the View Layout, not the Table Layout. For more information, see "Selecting e.List Items to Be Published" (p. 261). Click either Yes or No.<br />

View Depth: Indicates how far down a hierarchy a user can view submissions from planners and reviewers. To assign view depth, click the View Depth cell of the appropriate e.List item, then select or enter a number. To allow the reviewer to view all levels below, click All; to prevent the reviewer from viewing the levels below, click None. Note: When importing the e.List, All and None are represented by -1 and 0 respectively. For more information, see "View Depth Example" (p. 106).<br />

Review Depth: Indicates how far down a hierarchy a reviewer can reject (or edit, if allowed) submissions from planners and reviewers. Note that this setting is also influenced by the user's rights and whether Reviewer Edit is allowed in Application Options (p. 74). To assign review depth, click the Review Depth cell of the appropriate e.List item, then select the required review depth. To allow the reviewer to review all levels below, click All; to prevent the reviewer from reviewing the levels below, click None. Note: When importing the e.List, All and None are represented by -1 and 0 respectively. For more information, see "Review Depth Example" (p. 105).<br />

4. To apply changes, click Save.<br />

Manually Reordering e.List Items in the Hierarchy<br />

You can manually reorder e.List items.<br />

When you change the order of the e.List, making a planner a reviewer for example, a warning is issued and any data that was entered with the user as a planner is lost when you run Go to Production.<br />

If a contribution e.List item becomes a review e.List item, the rights of a user assigned to that e.List<br />

item are changed to the equivalent review rights as shown in the following table.<br />

Contribution e.List item | Review e.List item<br />
View | View<br />
Edit | Review<br />
Submit | Submit<br />

Steps<br />

1. Click the e.List item.<br />

2. Use the arrows to move it to the required position.<br />

The up and down arrows change the order of items, and the left and right arrows demote and promote items in the e.List.<br />

Deleting an e.List item<br />

You can delete e.List items manually.<br />

If you delete an e.List item, any data associated with this item is deleted when you run Go to Production. When you save changes, a dialog box is displayed. This warns you of any data you could<br />

lose, and you have the option to continue, and so lose the data, or cancel the deletion. If you delete<br />

a review e.List item, all the items that make up the review e.List item are also deleted.<br />

You cannot select and delete multiple e.List items at the same level; you can delete only one branch of the e.List at a time.<br />

Step<br />

● In the e.List pane, click the e.List item and then click Delete and Save.<br />

Previewing an e.List item<br />

You can show individual e.List items in a pop-up box as they would be seen in the Web browser, so you can see orientation, and the effects of selection.<br />

Step<br />

● In the e.List pane of the <strong>Administration</strong> Console, click an item and then click Preview.<br />

The Effect of Changes to the e.List on Reconciliation<br />

If contribution or review e.List items have been added, deleted, or moved, reconciliation takes place after running the Go to Production process. Reconciliation is only required for the affected e.List items. For more information, see "Reconciliation" (p. 54).<br />

If e.List items have been added, new contribution or review e.List items and existing parents of new e.List items are reconciled.<br />


If e.List items have been deleted, parents of the deleted e.List items are reconciled. Note that the actual e.List items are removed during the Go to Production process.<br />

If e.List items have been moved, ancestor review e.List items of any moved e.List item are reconciled (re-aggregated). This covers both review e.List items that were ancestors of the moved e.List item in the old production application and review e.List items that are now ancestors of the moved e.List item in the new production application. Note that if e.List items are simply reordered within a hierarchy branch, no reconciliation is needed.<br />

Moving an e.List item may also impact the pattern of No Data cells through saved selections and access tables. Renaming an e.List item does not require reconciliation, but may impact the pattern of No Data cells if saved selections based on names are used. If the pattern of No Data cells is affected, reconciliation of all e.List items is required (data block transformation of contribution e.List items and re-aggregation of review e.List items). Adding or deleting e.List items does not affect the pattern of No Data cells.<br />

Rights by e.List Item<br />

Rights by e.List item displays the users, groups, and roles that are assigned to the selected e.List item, and their rights. It can be displayed by selecting items in the e.List or Rights panes in the <strong>Administration</strong> Console, and clicking Rights Summary.<br />

A table with the following information is displayed:<br />

Details | Description<br />
e.List Item Display Name | The e.List item display name.<br />
User, group, or role | The user, group, or role assigned to the e.List item.<br />
Rights | The level of rights that the user, group, or role has to the e.List item. See "Rights" (p. 107) for more information.<br />
Inherit from | If the rights have been directly assigned to the user, group, or role, this cell will be blank. If the rights have been inherited, this indicates the name of the e.List item the rights have been inherited from.<br />

You can save the information on this screen to a text file by clicking Save to file and entering a file name and location.<br />

You can also print the screen by clicking Print.<br />

Review Depth Example<br />

The Review Depth column indicates how far down a hierarchy a reviewer can reject, edit, or submit submissions from planners and reviewers, depending on their rights (p. 107). The review depth must be less than, or equal to, the view depth.<br />

For example, you are the owner of the e.List item named Country A, which contains the regions R1, R2, and R3. Each region has two cost centers, C1 and C2, and within each cost center are three divisions: D1, D2, and D3.<br />

● If the review depth for Country A is 1, then the owner can only review the regions (the children of your e.List item).<br />

● If the review depth for Country A is 2, then the owner can only review the regions and the cost centers.<br />

● If the review depth for Country A is 3, then the owner can review the regions, cost centers, and divisions.<br />

● If the review depth for Country A is All (-1 in the import file), then the owner can review all levels below. You can only set the review depth to All (-1) if the view depth is also set to All (-1).<br />

View Depth Example<br />

The View Depth column in the e.List section indicates how far down a hierarchy a user can view planners' and reviewers' submissions.<br />

For example, you are the owner of the e.List item named Country A, which contains the regions R1, R2, and R3. In each region are two cost centers, C1 and C2, and within each cost center are three divisions: D1, D2, and D3.<br />

● If the view depth for Country A is 1, then the owner can only view the regions (the children of your e.List item).<br />

● If the view depth for Country A is 2, then the owner can only view the regions and the cost centers.<br />

● If the view depth for Country A is 3, then the owner can view the regions, cost centers, and divisions.<br />

● If the view depth for Country A is All (-1 in the import file), then the owner can view all levels below.<br />
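The depth semantics above can be sketched as a simple tree walk. The e.List literal below is hypothetical, mirroring the Country A example; -1 stands for All and 0 for None, as in the import file. This is an illustration of the rule, not the product's internal algorithm.

```python
def reviewable_items(tree, root, depth):
    """Return the descendants of `root` that fall within `depth` levels.

    depth=-1 (All in the Administration Console) means every level below;
    depth=0 (None) means no levels below.
    """
    result = []

    def walk(item, level):
        for child in tree.get(item, []):
            if depth == -1 or level <= depth:
                result.append(child)
                walk(child, level + 1)

    walk(root, 1)
    return result

# Hypothetical e.List mirroring the Country A example: regions R1-R3,
# cost centers C1-C2 under each region, divisions D1-D3 under each cost center.
elist = {"Country A": ["R1", "R2", "R3"]}
for r in ["R1", "R2", "R3"]:
    elist[r] = [f"{r}/C1", f"{r}/C2"]
    for c in elist[r]:
        elist[c] = [f"{c}/D{i}" for i in (1, 2, 3)]
```

With this tree, depth 1 yields only the three regions, depth 2 adds the six cost centers, and depth -1 (All) reaches every descendant.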


Rights by User<br />

Rights by User displays the level of rights for a user to an e.List item. It can be displayed by selecting items from the Rights pane and clicking Rights Summary.<br />

A table with the following information is displayed:<br />

Details | Description<br />
User, group, or role | The name of the user, group, or role assigned to the e.List item.<br />
e.List item Display Name | The e.List item display name.<br />
Rights | The level of rights that a user has to the e.List item. For more information, see "Rights" (p. 107).<br />
Inherit from | If the rights have been directly assigned, this cell will be blank. If the rights have been inherited, this indicates the name of the e.List item the rights have been inherited from.<br />

You can save the information to a text file by clicking Save to file and entering a file name and location.<br />

Rights<br />

Rights for planners are determined by the settings in the Rights pane. By assigning rights, you can configure user roles in the <strong>Administration</strong> Console, determining whether users can view, edit, review, and submit.<br />

Typically, you import a rights file. But you can also manually insert rights, and modify or delete<br />

existing rights. If you want to make changes to the rights file, we recommend that you export the<br />

file to ensure you have correct information, modify this file using an external tool such as Excel<br />

and then import the file again.<br />

You can assign more than one user, group, or role to an e.List item. For more information, see<br />

"Multiple Owners of e.List Items" (p. 95).<br />

Rights for reviewers are determined by the following settings:<br />

● The rights setting.<br />

● The view and review depth setting. Review depth gives the right to reject (or edit if reviewer<br />

edit is on) to a specified depth. This is set in the e.List pane in the <strong>Administration</strong> Console.<br />

● The Allow Reviewer Edit option in the Application Options pane in the <strong>Administration</strong> Console<br />

(p. 74).<br />

● If a reviewer has two different levels of rights for the same e.List item, the higher rights apply.<br />

Rights may be assigned directly or inherited. See the following example:<br />


If the reviewer is assigned with submit rights to a parent e.List item which has a review edit depth<br />

of 1, and reviewer edit is allowed, the reviewer has the right to view, edit, reject and submit the<br />

child e.List item. These are inherited rights.<br />

The reviewer is also directly assigned to the child item with view rights. These are declared, or directly<br />

assigned rights.<br />

You can only directly assign one set of rights to a user for a specific e.List item. If you insert a<br />

duplicate record you receive a warning, and the rights that appear lower down in the rights table<br />

are deleted.<br />
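These two rules, higher rights winning and duplicate records lower in the table being dropped, can be sketched as follows. The rank ordering View &lt; Edit/Review &lt; Submit is an assumption for illustration; the Administration Console resolves rights internally.

```python
# Assumed rank order for illustration: View < Edit/Review < Submit.
RANK = {"VIEW": 0, "EDIT": 1, "REVIEW": 1, "SUBMIT": 2}

def effective_rights(levels):
    """If a user holds several levels of rights for the same e.List item
    (for example directly assigned plus inherited), the higher level applies."""
    return max(levels, key=lambda r: RANK[r.upper()])

def drop_duplicate_rights(rows):
    """Only one set of rights can be directly assigned per user and e.List
    item; a duplicate record lower down in the rights table is deleted."""
    seen, kept = set(), []
    for user, item, rights in rows:
        if (user, item) not in seen:
            seen.add((user, item))
            kept.append((user, item, rights))
    return kept
```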

Tip: If you specify more than one reviewer for an e.List item, in the workflow page for the <strong>Contributor</strong> application, an email link named email all is displayed. If you specify one reviewer, the name chosen in the User, Group, or Role column is displayed, and you should ensure that a descriptive name for the user, group, or role is chosen.<br />

Submit Rights<br />

An e.List item can have no user with submit rights (that is, no user is assigned or has resolved rights<br />

through Reviewer depth to the e.List item). If this is the case, contributions are not submitted and<br />

the item and its parents in the hierarchy cannot be locked.<br />

We recommend that the administrator reviews the Rights pane to ensure every e.List item has at<br />

least one user with submit rights (who may be a reviewer with appropriate rights).<br />

Inherited Rights<br />

If the reviewer is assigned with submit rights to a parent e.List item which has a review depth of 1, and reviewer edit is allowed, the reviewer will have the right to view, edit (contribution e.List item only), reject, and submit the child e.List item. These are inherited rights.<br />

The following tables explain what rights mean when they are assigned to planners and to reviewers. They also explain how the rights can be affected by different settings.<br />

Actions Allowed for Review e.List Items<br />

The following actions are allowed for Review e.List items.<br />


Rights: Submit<br />

Reviewers: With reviewer edit on (p. 74), reviewers can<br />

● edit, submit, and reject contribution e.List items if the e.List item they are assigned to has sufficient review depth<br />

● submit or reject child review e.List items if the e.List item they are assigned to has sufficient review depth<br />

● submit their own review e.List item<br />

● annotate their own review e.List item and children to review depth<br />

With reviewer edit off, reviewers can<br />

● submit their own review e.List item<br />

● reject children to review depth<br />

● annotate their own review e.List item and children to review depth<br />

Note: When reviewer edit is off, reviewers cannot edit contribution items.<br />

Rights: Review<br />

Reviewers: With reviewer edit on, reviewers can<br />

● edit contribution items if the e.List item has sufficient review depth<br />

● submit and reject child e.List items if the e.List item they are assigned to has sufficient review depth<br />

● annotate their own review e.List item and children to review depth<br />

With reviewer edit off, the reviewer cannot edit or submit any e.List items, but can<br />

● reject child e.List items if the e.List item they are assigned to has sufficient review depth<br />

● annotate their own review e.List item and children to review depth<br />

Rights: View<br />

Reviewers: View assigned e.List items and children to view depth. Cannot annotate, reject, edit, or submit.<br />

Actions Allowed for Contribution e.List items<br />

The following actions are allowed for Contribution e.List items.<br />

Rights: Submit<br />

Planner: View, edit and save, submit, and annotate assigned e.List items.<br />

Rights: Edit<br />

Planner: View and edit assigned contribution e.List items. Can annotate. Cannot submit.<br />

Rights: View<br />

Planner: View assigned e.List items. Cannot annotate.<br />

Rights File Formats<br />

Rights can be in Excel Worksheet (xls) and text file format.<br />

The use of column headings is optional. If you use column headings, the columns can be in any order. The column heading names must be the same as the column headings listed below and are case sensitive. If no column headings are used, the order shown below must be used.<br />

Column heading in import file | Display heading | Description<br />

EListItemName | Item ID | Identifies the e.List item that you are setting rights for. This must match an e.List item id in the e.List import file and is case sensitive.<br />

CamObjectName | User, Group, Role | The display name of the user, group, or role as it appears in IBM Cognos 8.<br />

EListItemUserRights | Rights | See "EListItemUserRights" (p. 110).<br />

CamNamespaceName | Namespace | The display name of the security namespace as it appears in IBM Cognos 8.<br />

CamObjectType | CAM Object Type | Either User, Group, or Role.<br />

EListItemUserRights<br />

The following rights can be used. These are not case sensitive.<br />

Rights | Description | Applies to<br />

View | View rights only. | Contribution and review e.List items.<br />

Edit | View and edit, but cannot submit. | Contribution e.List items.<br />

Review | View, submit contribution e.List items (not review e.List items) if they have sufficient reviewer depth rights, reject and save to edit depth. | Review e.List items.<br />

Submit | View, save changes, submit. | Contribution and review e.List items.<br />
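A minimal parser for this layout might look as follows. It assumes tab-delimited text; the Administration Console's own import also accepts xls, which this sketch does not handle. Headings, when present, are treated as case sensitive and may appear in any order, while the rights values are uppercased, matching the rules above.

```python
import csv
import io

COLUMNS = ["EListItemName", "CamObjectName", "EListItemUserRights",
           "CamNamespaceName", "CamObjectType"]
VALID_RIGHTS = {"VIEW", "EDIT", "REVIEW", "SUBMIT"}

def parse_rights(text):
    """Parse a tab-delimited rights import file into a list of dicts.

    If the first row consists only of known (case-sensitive) column
    headings, it is used to map columns; otherwise the default column
    order is assumed. Rights values are not case sensitive.
    """
    rows = list(csv.reader(io.StringIO(text), delimiter="\t"))
    if rows and set(rows[0]) <= set(COLUMNS):
        header, rows = rows[0], rows[1:]
    else:
        header = COLUMNS
    records = []
    for row in rows:
        rec = dict(zip(header, row))
        rights = rec.get("EListItemUserRights", "").upper()
        if rights not in VALID_RIGHTS:
            raise ValueError(f"invalid rights value: {rights!r}")
        rec["EListItemUserRights"] = rights
        records.append(rec)
    return records
```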


If the import file specifies Review for a contribution e.List item, or Edit for a review e.List item, on import, the <strong>Administration</strong> Console changes the settings so that Review becomes Edit for a planner and Edit becomes Review for a reviewer.<br />

For more detail, see "Rights by User" (p. 107).<br />

You can assign more than one user to an e.List item. If more than one user is assigned to an e.List item with rights higher than View, the user that is first in the import file is the initial owner of the e.List item in the <strong>Contributor</strong> Web application. When you insert rights manually, they are appended to the bottom of the rights table and it is not possible to reorder the rights at e.List level. The only way to reorder the rights at e.List level is to export the file, delete the existing rights, modify the import file, and import the new file.<br />

Note: If you have imported a rights file containing three columns, and you import a new rights file containing only the first two columns, any new rights added will take the default value of submit, and any rights that existed in both the old and the new files will remain unchanged. If the new rights file contains three columns, any rights existing in both the old and the new files are overwritten with the new rights.<br />

For more information, see "Multiple Owners of e.List Items" (p. 95) and "Rights by e.List Item" (p. 105).<br />

Example Rights File<br />

EListItemName | CamObjectName | EListItemUserRights | CamNamespaceName | CamObjectType<br />

Finance | Mary | SUBMIT | ntlm | User<br />

Production | <strong>Planning</strong> <strong>Contributor</strong> Users | EDIT | Cognos | Role<br />

Marketing | Administrator | VIEW | ntlm | User<br />

Modify Rights Manually<br />

In the Rights pane in the <strong>Administration</strong> Console, you can specify whether users can view, save, submit, and so on.<br />

Typically, you assign rights to users by importing a rights file (p. 96). But it is also possible to manually insert rights, and modify or delete existing rights. You can also export a rights file that can be modified and imported again if necessary. For a detailed description of each level of rights, see "Rights" (p. 107).<br />

The rights that a user may have are also affected by the view and review depth settings, set in the e.List pane in the <strong>Administration</strong> Console, and the Reviewer edit setting, set in the Application Options pane (p. 74). A user may have directly assigned rights, or inherited rights. You can see a summary of the rights per user and per e.List item by selecting a line and then clicking Rights Summary (p. 107).<br />


You can assign more than one user to an e.List item, see "Multiple Owners of e.List Items" (p. 95)<br />

for more information.<br />

Steps<br />

1. In the appropriate application tree, click Development, e.List and Rights, and then Rights.<br />

2. Click Insert.<br />

A blank line is inserted into the rights table.<br />

3. Select the e.List item by clicking in the Item Display Name cell.<br />

The Item ID is the external identifier for the e.List item and is the key for importing.<br />

4. Select the User, Group, or Role.<br />

● Click the name you want to add.<br />

If the name is not displayed, you can add a role or group to the filter.<br />

Tip: You can choose to display the complete list of users, groups, or roles by selecting Show all descendants. Depending on the number of items in the list, this may take a while to display. If this box is not selected, only the direct members of the group or role are shown.<br />

● Click the browse button (...).<br />

● Click the appropriate namespace.<br />

● Select the group, or role. Any users who are members of the group or role that you select will be added to the list.<br />

● Click the green arrow button and click OK.<br />

You can now select the user, group, or role that has rights to the e.List item.<br />

5. Select the rights by clicking the rights cell. If the e.List item is a Review item, you can choose View, Review, or Submit. If the e.List item is a contribution item, the rights you can select are View, Edit, or Submit. For more information, see "Actions Allowed for Contribution e.List items" (p. 109).<br />

6. To order the rights by hierarchy, click Order by hierarchy.<br />

Reordering Rights<br />

If you assign multiple users to an e.List item, the user that is highest in the rights table is the current owner (p. 95) when the Go to Production process is run.<br />

You can search for rights in this window. For more information, see "Search for Items" (p. 98).<br />

If more than one user, group, or role is assigned to an e.List item with rights higher than View, the user, group, or role that is first in the import file is the initial owner of the e.List item in the <strong>Contributor</strong> application. When you insert rights manually, they are appended to the bottom of the rights table and it is not possible to reorder the rights at e.List level. The only way to reorder the rights at e.List level is to export the file, modify the import file, and import the new file.<br />
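The initial-owner rule, the first entry in table order with rights higher than View, can be sketched as a simple scan. The tuples are illustrative; the Administration Console determines ownership internally at Go to Production.

```python
def initial_owner(rights_rows):
    """Return the first user in table (import) order whose rights are
    higher than View; that user becomes the initial owner of the
    e.List item. Returns None if no such user exists."""
    for user, rights in rights_rows:
        if rights.upper() != "VIEW":
            return user
    return None
```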


Viewing Rights<br />

You can view rights listed by e.List item and rights listed by user by selecting one or more lines in<br />

the rights table and clicking Rights Summary.<br />

You can print and save to file the rights by e.List (p. 105) and rights by user (p. 107).<br />

Validating Users, Groups and Roles in the Application Model and Database<br />

You can validate users, groups, and roles that are used by the application model and database<br />

against the Cognos 8 namespace.<br />

The validate function checks name information used by the <strong>Contributor</strong> <strong>Administration</strong> Console<br />

against the Cognos 8 namespace. If any names have been changed or removed, you can update the<br />

information used by the <strong>Contributor</strong> <strong>Administration</strong> Console to match the namespace.<br />

If there are no invalid items, only changed items, only the database table is updated. Therefore, no cut-down models job runs during Go to Production.<br />

If there are both invalid items and changes, the model is also updated and a cut-down models job runs during the next Go to Production.<br />

Steps<br />

1. In the tree for the application, click Development, e.List and Rights and then Rights.<br />

2. Click Validate.<br />

3. If there are invalid or out-of-date users, groups, or roles, and you want to update them, click Update; otherwise, click Cancel.<br />



Chapter 8: Managing User Access to Data<br />

You control access to cells in cubes, whole cubes, and assumption cubes using access tables. Saved<br />

selections are groups of dimension items that support and simplify access tables.<br />

For example, in an Overheads dimension, you might want to show only those items relating to<br />

travel expenses. This allows you to show users only those items that are relevant to them.<br />

You define access for contribution e.List items, but access is automatically derived for review e.List<br />

items.<br />

The key difference between using saved selections and defining access directly in access tables is<br />

that saved selections created on dimensions within an application are dynamic. That is, they change<br />

when definitions in the dimension upon which they are made are changed (when an application is<br />

synchronized following changes to the IBM Cognos 8 <strong>Planning</strong> - Analyst model).<br />

Imagine the following scenario. You have a dimension that contains:<br />

● Product 1<br />

● Product 2<br />

● Total Products (sum of all)<br />

A saved selection is made which is the enlargement of the "Total Products" subtotal.<br />

If a change is made to the Analyst model which modifies the dimension to now contain:<br />

● Product 1<br />

● Product 2<br />

● Product 3<br />

● Total Products (sum of all)<br />

The saved selection, which is the enlargement of the "Total Products" subtotal, now includes all three products without any change being made to it. In other words, it is dynamic and changes as the definitions in the application change following synchronization.<br />

Saved Selections<br />

When you create saved selections, you name and save selections of items from a dimension. A<br />

selection is a collection of dimension items, and could be lists of:<br />

● Products sold by a particular outlet<br />

● Product/Customer combinations<br />

● Channel/Market combinations<br />


● Employee lists<br />

● Range of months for a forecast<br />

Once you have created a saved selection, you can set levels of access to this item. For more<br />

information, see "Creating Access Tables" (p. 123).<br />

You cannot explicitly define an access table on a review e.List item. If you create a saved selection<br />

on the dimension selected as the e.List, you cannot select any review e.List items.<br />
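The dynamic behavior described in the Total Products scenario can be sketched as follows. The dictionary model is hypothetical; it simply shows why a selection defined as the enlargement of a subtotal tracks model changes without being edited.

```python
def enlarge(components, target):
    """Expand a calculated item into all items that feed it, directly or
    indirectly (the Enlarge option used to build a saved selection)."""
    result = []
    for item in components.get(target, []):
        result.append(item)
        result.extend(enlarge(components, item))
    return result

# Before synchronization, Total Products sums two products.
model = {"Total Products": ["Product 1", "Product 2"]}
# After the Analyst model adds Product 3, the same selection definition
# picks it up without being edited.
model["Total Products"].append("Product 3")
```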

Steps<br />

1. In the application tree, click Development, Access Tables and Selections, and Saved Selections.<br />

2. Click New in the Saved Selections form, and enter details as shown below:<br />

Detail | Description<br />

Selection name | Enter a name for the selection.<br />

Dimension | Click this box to show a list of dimensions, then click one.<br />

3. Click OK.<br />

4. To edit the selection rules, click in the Selection Name or Dimension column of the saved selection, then click Edit. This opens the Dimension Selection Editor; see "Editing Saved Selections" (p. 116) for more information.<br />

Editing Saved Selections<br />

In the Dimension Selection Editor, you edit and refine saved selections.<br />

The steps to edit a saved selection are:<br />

● Open the Edit window for the selection you will be editing.<br />

● Choose the items you want to show from the Show list box.<br />

● Make your first selection.<br />

● Refine your selection by making a second selection if required.<br />

Steps<br />

1. Open the Edit window.<br />

In Saved Selections, click on the selection you are going to edit, then click Edit.<br />

2. Choose the items you want to show.<br />

The default settings show all the items in the dimension, or e.List. In the case of long and<br />

complex lists, narrow down the dimension items you want to display by selecting an item under<br />

Show. A check mark indicates either a first or second selection, or a result (=).


Item | Description<br />

All | Displays the full list of items in the dimension.<br />

Detail | Only detail items are displayed, that is, all dimension items except for calculations.<br />

Calculated | Only those items that are calculated are displayed.<br />

Filter | Shows a selection of items based on a filter that you define. When you select a filter, the filter works on the dimension item name only, not the values contained in the cells. Select = to select items that equal the criteria, or <> to select items that do not equal the criteria. Use ? to represent any single character and * to represent any series of characters; * must not be used as the first character in a string. You can make the filter case sensitive by selecting the Case Sensitive box. For example, = O* with Case Sensitive selected shows all items beginning with a capital O, and 015* filters on all dimension item names beginning with 015.<br />

First Selection | Displays the results of the first selection.<br />

Second Selection | Displays the results of the second selection.<br />

Results Selection | Displays the results of the first and second selections.<br />

3. To make a selection, click one of the following options from the First Selection list box:<br />

Option | Description<br />

All | Selects all items in the dimension. This is useful when used in combination with a second selection. For example: First Selection: All; Except check box: selected; Second Selection: Filter =9*. This selects all items except those beginning with 9.<br />

Detail | Selects all detail items (all dimension items that are not calculations). The benefit of using this item is that if the list of detail items changes, the saved selection is updated automatically.<br />

Calculate | Selects all calculated items. If the list of calculated items changes, the saved selection is updated automatically.<br />

List of items | Click the items to be selected, then move them to the First Selection list box by clicking the right arrow. If the list changes, the saved selection must be updated.<br />

Enlarge | Includes all items that make up a calculated item, either directly or indirectly. Click one or more calculated items and then move them to the First Selection list box by clicking the right arrow.<br />

Filter | Shows a selection based on a text search criterion. For more information, see "Editing Saved Selections" (p. 116).<br />

If this is a simple saved selection, click OK to close the Edit window and then click Save to save the selection.<br />

4. Create a selection rule:<br />

● Select one of the following options:<br />

Option | Description<br />

Except | All items selected in the first selection, except those selected in the second selection.<br />

Union | The union of all items included in both selections.<br />

Intersect | Items that appear in both the first and second selections.<br />

● Make a selection from the Second Selection list box. The results are displayed under Show. Click OK.<br />

● Click Save.<br />
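The filter wildcards and the Except/Union/Intersect rules described above can be sketched as follows. This is an approximation: `fnmatch`-style matching stands in for the editor's filter, and plain sets stand in for selections, whereas the real editor preserves dimension order.

```python
import fnmatch
import re

def filter_items(items, pattern, equals=True, case_sensitive=False):
    """Apply a dimension-name filter: ? matches any single character and
    * matches any series of characters (* may not come first).
    equals=False implements the "does not equal" form of the criteria."""
    if pattern.startswith("*"):
        raise ValueError("* must not be used as the first character")
    matcher = re.compile(fnmatch.translate(pattern),
                         0 if case_sensitive else re.IGNORECASE)
    hits = [i for i in items if matcher.match(i)]
    return hits if equals else [i for i in items if i not in hits]

def combine(first, second, rule):
    """Combine first and second selections with Except, Union, or Intersect."""
    a, b = set(first), set(second)
    return {"Except": a - b, "Union": a | b, "Intersect": a & b}[rule]
```

For example, selecting All and then Except with the filter 9* removes every item whose name begins with 9, matching the worked example in the options table.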


Deleting Saved Selections<br />

You cannot delete a saved selection that is used in an access table. It must be removed from the<br />

access table first.<br />

Step<br />

● In the Saved Selections pane, click the selection and then click Delete.<br />

Access Tables<br />

Create access tables to determine the level of access users have to cubes, saved selections, and dimension items. Access tables can reduce the volume of data a user has to download, especially<br />

when used in conjunction with cut-down models and the No Data setting. For more information,<br />

see "Cut-down Models" (p. 138).<br />

You can set access levels for an entire cube (contribution or assumption cubes) or for specific<br />

selections of cells in a cube (contribution cubes only).<br />

For entire cubes, you can choose Write, Hidden, or Read for contribution cubes and Hidden or<br />

Read for assumption cubes (p. 120). Access set at cube level applies to all planners.<br />

Access to specific selections of cells is controlled using access tables. Do this by choosing one or<br />

more dimensions, and defining access to sets of items in these dimensions.<br />

If you need cube-level access to vary by planner, select the Include e.List option. You must also<br />

include one of the other dimensions of the cube (preferably the smallest), and select All items for<br />

this dimension when creating the access table.<br />

Access tables using more than two dimensions (this includes the e.List) should be avoided where<br />

possible. This is because when you perform an action in the <strong>Administration</strong> Console that makes<br />

use of the access tables, the system needs to resolve the access tables in order to determine what<br />

access level applies to each cell, and which cells have data. If an access table is very large, this can<br />

slow down the system considerably. For more information, see "Large Access Tables" (p. 129).<br />

It is not possible to create planner-specific views of assumption cubes. If this is required, you should<br />

convert the assumption cube to a contribution cube in Analyst by adding the placeholder e.List.<br />

Then you should move any assumption data present in the Analyst D-Cube into <strong>Contributor</strong> using<br />

Analyst-to-<strong>Contributor</strong> links (p. 347).<br />

You cannot explicitly define an access table on a review e.List item. If you create a saved selection<br />

on the dimension selected as the e.List, you cannot select any review e.List items.<br />

Access Tables and Cubes<br />

There are two types of cubes in <strong>Contributor</strong>: assumption cubes and all other cubes. The access level<br />

you can set depends on the type of cube.<br />

Assumption Cubes<br />

An assumption cube contains data that is moved into the <strong>Contributor</strong> application on application<br />

creation and on synchronization.<br />

● They do not contain the e.List, therefore data applies to all e.List items.<br />

Chapter 8: Managing User Access to Data<br />

<strong>Administration</strong> <strong>Guide</strong> 119



● They are not writeable. The default level is read-only.<br />

● You can only set access levels to a whole assumption cube.<br />

Other Cubes<br />

All of the other types of cubes used in <strong>Contributor</strong>:<br />

● Must contain the e.List.<br />

● Are writeable by default, but can also be set to be read-only, contain no data, or be hidden.<br />

● May contain imported data.<br />

● Are usually used for data entry.<br />

● Can have access set for selections of cells, whole cubes, and dimension items.<br />

● Can be set as planner-only cubes (p. 78), which hides the cube from the reviewer.<br />

When you create an access table, you select one or more dimensions, and define access to sets of<br />

items in these dimensions. By default, access tables include the e.List, so you can vary access settings<br />

by planner. You can opt not to include an e.List in an access table, in which case the setting applies<br />

to all planners, for example, you might want to make a budget version read-only for everyone. You<br />

cannot have planner-specific access settings for assumption cubes. This is because they do not contain<br />

the e.List.<br />

Rules for Access Tables<br />


You can apply rules when setting access levels for cubes, saved selections, and dimension items.<br />

You can set the following levels of access for cubes, saved selections, and dimension items.<br />

● Write<br />

● Read<br />

● Hidden<br />

● No Data<br />

Access rules are resolved in the order in which they appear in the table. If more than one rule is<br />

applied to an item, the last access rule assigned is given priority. For example, you might want to<br />

set all items to No Data, and then subsequently set individual items to Read, Write, or Hidden.<br />

If you have defined more than one access table for a cube, the access setting that will apply is the<br />

lowest level of access amongst all the access tables, for example, a hidden access setting has priority<br />

over write.<br />
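The two rules above (within one access table, the last matching rule wins; across access tables on the same cube, the most restrictive level wins) can be sketched as follows. This is an illustrative Python model, not IBM Cognos code, and the exact restrictiveness ordering of No Data relative to Hidden is an assumption:<br />

```python
# Illustrative model of the two resolution rules described above;
# not IBM Cognos code. The ordering of levels from least to most
# restrictive is an assumption (the text only states, for example,
# that Hidden has priority over Write).
LEVELS = ["Write", "Read", "Hidden", "No Data"]

def resolve_table(rules, item, default="Write"):
    """Within one access table, the last matching rule wins."""
    level = default
    for rule_items, rule_level in rules:  # rules in the order they were added
        if item in rule_items:
            level = rule_level            # a later rule overrides an earlier one
    return level

def resolve_cell(tables, item):
    """Across several access tables on one cube, the lowest level wins."""
    levels = [resolve_table(rules, item) for rules in tables]
    return max(levels, key=LEVELS.index)  # most restrictive resolved level

# Hypothetical tables: set everything to No Data, then open up single items.
table_a = [({"All", "Sanders", "Drills"}, "No Data"), ({"Sanders"}, "Read")]
table_b = [({"Sanders", "Drills"}, "Write")]

print(resolve_cell([table_a, table_b], "Sanders"))  # Read beats Write -> Read
```

Reordering the rules in table_a would change the result, which mirrors using the arrows in the <strong>Administration</strong> Console to change rule priority.<br />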

No two access tables that control the same dimension can be applied to the same cube.<br />

You receive a warning if you create an access table that contains more than two dimensions as this<br />

can slow down the <strong>Administration</strong> machine if you import large amounts of data.<br />

If no access levels are set, the following defaults apply:<br />

● All cubes apart from assumption cubes have a global access level of Write.


● Assumption cubes (cubes used to bring data into an application) have a global access level of<br />

Read.<br />

Access Level Definitions<br />

Setting access levels of Read, Write, or Hidden has no effect on the way links or importing<br />

data work. The access level No Data does affect links and importing data.<br />

You can set the access level to Write, Read, Hidden, or No Data.<br />

Write<br />

This is the default for all cells in a planner’s model. Write access means that users with appropriate<br />

rights can write to this item, provided the e.List item is not locked (the locked state occurs when<br />

data is submitted to a reviewer).<br />

This option cannot be set on assumption cubes.<br />

You can breakback from a writeable calculation, unless all detail items used by the calculation are<br />

set to read or hidden.<br />

If a calculation uses other calculated items, and these calculated items are set to read or hidden,<br />

breakback is possible. If all of the items used by the calculation, either detail or calculated, are set<br />

to read, breakback is not possible.<br />

Read<br />

Cells marked as read are visible but cannot be changed by the planner. For example, a planner<br />

cannot type into any read-only cells, paste will skip read-only cells, and breakback will treat read-<br />

only detail cells as held. However, values in read-only calculated cells can change; read-<br />

only calculated items will still be recalculated (they are not treated as held). This extends to break-<br />

back: if a writeable calculation, for example, Grand Total, uses some read-only calculated items<br />

such as Total Group A, a planner can breakback from the writeable calculation (Grand Total)<br />

through the read-only calculations (Total Group A). This is only possible when at least some of the<br />

items feeding a calculation are writeable. It is never possible to change a cell that is a D-Link target.<br />

Detail cells targeted by D-Links are read-only as normal and will be treated as held by breakback.<br />

Calculated cells targeted by D-Links are also read-only, but these cells cannot be changed by forward<br />

calculation or breakback due to planner entry.<br />

Read is the default value for the following:<br />

● All cells in assumption cubes.<br />

● All cells targeted by D-Links. These can never be changed directly by the planner.<br />

● Calculated cells in cubes for which breakback is disabled.<br />

● Calculated items are read-only when none of their precedent items are writeable. For example,<br />

a subtotal will automatically be read-only if all items summed by the subtotal are read-only<br />

(whether they are read-only due to access tables or D-Link targets, or due to submission of<br />

contribution e.List items).<br />


● Calculated items are set to read-only when breakback is not possible because of the type of<br />

calculation; in particular, the result and outputs of BiFs, and constant calculations are always<br />

read-only.<br />

● A planner can never change read-only detail cells, but read-only totals are not held.<br />

Hidden<br />

Hidden cells are not visible to a planner, but otherwise they are treated in the same way as read-<br />

only cells. For example, breakback does not target hidden detail cells, but goes through hidden<br />

calculated cells if at least some of the detail cells the calculation uses are writeable. Hidden calculated<br />

cells are recalculated, and so on. This means that intermediate calculations, for example, in a cube,<br />

or even entire intermediate calculation cubes, can be hidden without affecting model calculation<br />

integrity.<br />

If all cells in a cube are hidden for a particular planner, the cube is removed entirely from the Web<br />

browser view, but the data is still downloaded to planners.<br />

Note: You cannot breakback over hidden detail cells.<br />

No Data<br />

No Data cells do not contain any data. When used by calculations they are assumed to contain<br />

zero.<br />

No Data access settings, regardless of whether cut-down models are used, can affect:<br />

● Volume of data processed in memory, which in turn affects calculation speed. This is because<br />

the calculation and link engines do not process No Data cells where possible, so No Data areas<br />

in general reduce memory requirements and speed up recalculation.<br />

● Data block size, which in turn affects download and upload speed (when opening the grid and<br />

when saving or submitting) and can reduce network traffic. This also affects the speed of aggregation<br />

when data is saved, and hence reduces the load on the run time server components.<br />

No Data access settings, used in conjunction with cut-down models, can affect the model definition<br />

size. This can improve download speed on opening and reduce network traffic, but it also increases the<br />

time it takes to run the Go to Production process.<br />

Updating No Data Access Settings<br />


When you run Go to Production and changes have been made to access tables that result in a dif-<br />

ferent pattern of No Data cells, a reconcile job is run for all e.List items. This process updates the<br />

contribution e.List item data blocks and reaggregates all of the review e.List items.<br />

In order to understand how No Data access settings affect memory use and calculation speed, it<br />

is necessary to describe how the access settings are applied. The process is as follows:<br />

● All access tables are resolved.<br />

● Items which are entirely No Data for all cubes in the model are identified.<br />

● Items are removed from dimensions where possible (see restrictions below).


As a rough guide, each e.List item is approximately 1 KB (this is where you have roughly<br />

one user per e.List item). Each dimension item is between 100 and 250 bytes. The e.List item is<br />

larger because it contains extra information.<br />
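As a quick back-of-the-envelope check, this rough guide can be expressed as a short calculation. The function below is a hypothetical sketch based only on the figures quoted above:<br />

```python
def estimate_item_overhead(e_list_items, dimension_items, bytes_per_dim_item=250):
    """Rough size estimate from the guide above: ~1 KB per e.List item,
    100-250 bytes per dimension item (250 is the worst case)."""
    return e_list_items * 1024 + dimension_items * bytes_per_dim_item

# Hypothetical model: 1200 e.List items and 5000 dimension items in total.
size = estimate_item_overhead(1200, 5000)
print(f"{size / 1024:.0f} KB")  # roughly 2421 KB
```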

How No Data Access Setting Affects Data Block Size<br />

Data blocks are used for persistent storage of the data in the model. The data block contains data<br />

for each cube for a particular e.List item, and only includes items for which there is data in that<br />

cube. The restrictions relating to cut-down dimensions that apply when working with the model<br />

in memory do not affect the persistent data blocks. These are cut down to the maximum extent<br />

possible to reduce the data block size.<br />

The data blocks are created during the reconciliation process.<br />

Reducing the data block size affects download and upload speed (when opening the grid and when<br />

saving or submitting) and can reduce network traffic. It also affects the speed of aggregation up the<br />

e.List when contribution data is saved, and hence reduces the load on the run time server components.<br />

Creating Access Tables<br />

Create access tables to determine the level of access that users have to cubes, saved selections, and<br />

dimension items.<br />

The access tables pane is divided up into the following areas:<br />

● Cubes With Access Tables<br />

Assign access levels to dimensions and saved selections either using the <strong>Administration</strong> Console,<br />

or by importing simple access tables.<br />

● Cubes Without Access Tables<br />

Assign an access level to a whole cube, if it has no individual access tables.<br />

● Assumption Cubes<br />

Set access levels for the whole cube (you cannot create individual access tables for assumption<br />

cubes).<br />

Note: If you create an access table after you have imported data, the entire import queue is deleted.<br />

Making changes to access tables, e.List items or saved selections that affect the pattern of no data<br />

in a cube can also result in data loss, see "Changes to Access Tables That Cause a Reconcile Job<br />

to Be Run" (p. 136) for more information.<br />

Before you can set access levels, you must first have imported an e.List.<br />

Steps<br />

1. In the application's tree, click Development, Access Tables and Selections, and Access Tables.<br />

2. You can do any of the following:<br />

● Assign access levels to dimensions and saved selections. You can either create rule based<br />

access tables using the <strong>Administration</strong> Console, or you can import access tables created in<br />

external applications.<br />

● Assign an access level to a whole cube if it has no individual access tables. For more<br />

information, see "Cubes Without Access Tables" (p. 126).<br />

● Set access levels for the whole cube (you cannot create individual access tables for<br />

assumption cubes). For more information, see "Assumption Cubes" (p. 126).<br />

Cubes With Access Tables<br />

In the Cubes With Access Tables section, you click the dimensions that you want to set access for,<br />

and then click the cubes that this access level applies to. You can make the access table<br />

applicable to any or all of these cubes by selecting the relevant cubes in Candidate Cubes.<br />

You can choose to make the access table applicable to all or part of the e.List. If you click Include<br />

e.List, you can select which parts of the e.List the access table is applicable to. If you do not select<br />

this option, the access table applies to all parts of the e.List.<br />

The default value for cubes with access tables is write.<br />

You can also import access tables, see "Importing Access Tables" (p. 126).<br />

Steps to Create Access Tables<br />

1. In the appropriate application tree, click Development, Access Tables and Selections, and then<br />

Access Tables.<br />

2. Select one or more dimensions in Available Dimensions.<br />

To select more than one dimension, hold down the CTRL key and then click the dimensions.<br />

The list of Candidate Cubes shows which cubes contain all the selected dimensions. If you have<br />

selected more than one dimension, only those cubes that contain all these dimensions can be<br />

selected. Note that the more dimensions you include in an access table, the bigger the access<br />

table will be. Large access tables can slow the system down considerably; see "Large Access<br />

Tables" (p. 129) for more information.<br />

3. Select one or more cubes from the Candidate Cubes list. Normally you will apply an access<br />

table to all candidate cubes.<br />

4. Ensure that Create rule based access table is selected (this is the default).<br />

5. Choose whether to include the e.List. The default is for the e.List not to be included, meaning<br />

that the access settings apply across the whole e.List.<br />

Note: If you create access level rules with the e.List included, clear the Include e.List option<br />

and save, then subsequently decide to include the e.List again, you must reenter any e.List<br />

specific access settings.<br />

6. Click Add. This adds your selection to the list of access tables.<br />

Tip: In the access tables list, you can edit the name of the access table.<br />

7. Select one or more of the rows that you have just added to the access tables list and click Edit.<br />

The next step is to assign access to dimension items, or to saved selections. For more information,<br />

see "Editing Access Tables" (p. 125).<br />


After you have edited the access table, you should save. If there is data in the import data queue,<br />

you will receive a warning that the import data queue will be deleted. You may also receive a<br />

warning such as:<br />

Saving these changes will require a reconcile job to run next time you Go to Production. Do<br />

you want to continue?<br />

Reconciliation ensures that the copy of the application that the user accesses on the Web is up<br />

to date. If you click Yes, the changes are saved and when you run Go to Production, a recon-<br />

ciliation job is created. If you click No, the changes to the access table pane are discarded.<br />

Change the Cubes to Which an Access Table Applies<br />

You can choose specific cubes to which you want the access tables to apply.<br />

Steps<br />

1. Select the access table that you want to change.<br />

2. Click the Cubes button.<br />

3. Check those cubes you want the access table to apply to.<br />

Editing Access Tables<br />

When you edit an access table, you set access levels for selections and for combinations of dimension<br />

items.<br />

To reach the access table editor, you must first create an access table. For more information, see<br />

Creating Access Tables.<br />

Steps<br />

1. Select the access level, and then click the saved selection, or dimension items from each list,<br />

and e.List items (if included).<br />

2. In any one list of saved selections and dimension items, you can click either one saved selection,<br />

or a combination of dimension items. Selecting All items applies a rule to all items in the<br />

dimension or e.List.<br />

3. Click Add to create the access rule.<br />

4. Repeat until you have created all the rules for the access table.<br />

These access rules are resolved in the order in which they were assigned. If more than one rule<br />

is applied to an item, the last access rule assigned is given priority and will apply. Use the arrows<br />

to change the order in which access rules apply. If no rules are set, an access level of Write<br />

applies.<br />

Warning<br />

Once you have created access level rules for an access table, if you decide to remove the<br />

e.List or include it again, you will lose any rules that you have set for this table and will have to reset them. See<br />

Rules for Access Tables for more information.<br />


Changes to Access Tables<br />

If you edit access tables, and the change would make the memory usage of the model significantly<br />

greater (for example, adding another dimension), you may find that, although the access table saves<br />

correctly, the next time you open the <strong>Administration</strong> Console, it may fail to load properly. If this<br />

happens, you can do the following:<br />

Note that Reset Development to Production removes all changes made since the last time Go to<br />

Production was run.<br />

Steps<br />

1. Check the log file. Click the Tools, Show Local Log File menu to see if it shows an "Out of<br />

memory" message.<br />

2. If this happens, you can work around it by clicking the Reset Development to Production button<br />

in the toolbar and reapplying any changes that you had made.<br />

Cubes Without Access Tables<br />

The Cubes Without Access Tables section allows you to set the access levels for whole cubes that<br />

do not have any access tables defined for them. The default access level for a cube is Write.<br />

See Rules for Access Tables for information on the access levels and priorities.<br />

Steps<br />

1. In Access Tables, click in the appropriate Access Level cell.<br />

2. Click a new value from the list.<br />

If you change the global access level for a cube without access tables from Write to a different level,<br />

such as Read, and then subsequently create an access table for this cube (see "Cubes With<br />

Access Tables"), the global setting for the cube is reset to Write.<br />

Assumption Cubes<br />

Any assumption cubes in the application are listed in the Assumption Cubes section. The default<br />

access value for assumption cubes is Read, and the other available value is Hidden. You can change<br />

the default access value.<br />

Steps<br />

1. Click in the appropriate Access Level cell.<br />

2. Click either Read or Hidden from the list.<br />

Assumption cubes contain data that is moved into the <strong>Contributor</strong> application when you run<br />

the Go to Production process and when you synchronize. They do not contain the e.List.<br />

Importing Access Tables<br />

You can import access tables that already exist into cubes. They can be in Excel worksheet, comma<br />

delimited, tab delimited, or custom delimited format.<br />

For information on the format of access tables, see "Format of Imported Access Tables" (p. 127).


You can also automate the import of access tables. See "Import Access Table" (p. 206) for more<br />

information.<br />

Steps<br />

1. Click Development, Access Tables and Selections, and Access Tables.<br />

2. In the Access Tables pane, click one or more dimensions in Available Dimensions. The Candidate<br />

Cubes list shows which cubes contain all the selected dimensions with no conflicting access<br />

tables. If you have selected more than one dimension, only those cubes that contain all these<br />

dimensions are selectable.<br />

3. Select one or more cubes from the Candidate Cubes list. Normally, you apply an access table<br />

to all candidate cubes.<br />

4. Select Import access table, and Include e.List if required.<br />

5. Click Add. Once you have clicked the Add button, you cannot change whether a table is rule<br />

based or imported. A check mark in the access table grid indicates that Import access table is<br />

selected.<br />

6. Click Import. The Import Access Table dialog box is displayed.<br />

Note: You can use this dialog box to set the base access level without importing an access table.<br />

7. Enter the file name and location for the access table import file.<br />

8. Select the First row contains column headers box if required.<br />

9. Select the file format. If the file format is Excel Worksheet, enter the name of the worksheet<br />

containing the access table.<br />

10. Select the Import option.<br />

11. Select Delete undefined items if required. If an access table file has previously been imported<br />

for this access table and you are importing a new one, existing settings are updated with the<br />

new settings specified. Any previous settings that do not exist in the new file are kept unless<br />

Delete undefined items is checked, in which case they are deleted.<br />

12. Select the Base access level. This is the default level that is applied to any undefined items. No<br />

Data is the default access level. See "Access Level Definitions" (p. 121) for more information.<br />

13. Click OK.<br />

14. To view the access table, click View. You can print this file and save it to a file. You can also export<br />

the access table, see "Exporting Access Tables" (p. 129).<br />

Format of Imported Access Tables<br />

The imported access tables file can be in any of the following formats: Excel worksheet, comma<br />

delimited, tab delimited, or custom delimited. The file contains the following columns:<br />


● A column for every dimension that the access table will apply to (mandatory). The names of<br />

dimension items must be identical in spelling and case to the way they are in the Analyst model.<br />


● A column containing e.List items. If omitted, the access level applies to the whole e.List.<br />

● A column containing access levels (optional). The following access levels can be set: Hidden,<br />

Read, Write, and No Data (these are not case sensitive). If omitted, a default of Write applies.<br />

Note that you cannot import saved selections or rule-based access tables. Each<br />

line of the access table (excluding the headings, if used) contains the following information:<br />

dimension item name (from dimension a) [tab] dimension item name (from dimension b) [tab] e.List<br />

item (if e.List included) [tab] AccessLevel<br />
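For illustration, a file in this layout could be generated with a short script. The dimension names (Versions, Channels), the e.List name (Stores), and the items below are invented examples, not names from your model:<br />

```python
import csv

# One column per dimension, then the e.List item, then the access level,
# matching the line layout described above. All names are invented examples.
rows = [
    ("Budget", "Retail", "High Street", "Write"),
    ("Budget", "Mail order", "Telesales", "Read"),
    ("Forecast", "Retail", "High Street", "No Data"),
]

with open("access_table.txt", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    # Column headings are optional; if present they must be the first row.
    writer.writerow(["Versions", "Channels", "Stores", "AccessLevel"])
    writer.writerows(rows)
```

Item names must match the Analyst model exactly, including case, so generating the file from an exported item list is safer than typing the names by hand.<br />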

Column Order<br />

The required order for the dimension columns in an access table import file with no column headers<br />

is the same as the dimension order of the access table, which is shown in the access table setup<br />

pane. For example, if the order of dimensions is Versions and Channels, the third column would<br />

be the e.List, and the fourth column would contain the access levels.<br />

If the import file contains column headings, the columns can be in any order.<br />

Column Headings<br />

The use of column headings in the import file is optional. Column headings are the first row in<br />

the file. If column headings are used, they should be in the following format:<br />

● Name of the Dimension: this must be the same spelling and case as in the Analyst model.<br />

● Name of e.List: this must be the same spelling and case as in the Analyst model.<br />

● AccessLevel: as shown. The default is Write.<br />

Viewing Imported Access Tables<br />

You can view access tables that you have imported.<br />

You cannot edit an imported access table within the <strong>Administration</strong> Console. To make changes,<br />

you should edit the source file and import again.<br />

Steps<br />

1. Click Development, Access Tables and Selections, and Access Tables to open the access tables<br />

pane.<br />

2. Click the access table that you want to view and click View.<br />

Note: The View button is not enabled for access tables created in the <strong>Administration</strong> Console (rule-based<br />

tables). To view rule-based access tables, click Edit.<br />

Exporting Access Tables<br />

You can export access tables in a format that you can import again. You can export<br />

both simple and rule-based access tables. Once you have exported a rule-based access table, you<br />

can import it again, but it is imported as a simple access table that you cannot edit<br />

in the <strong>Administration</strong> Console.<br />

Steps<br />

1. Click Development, Access Tables and Selections, and Access Tables, and click the access table to<br />

be exported.<br />

2. Click Export.<br />

3. Enter or browse for a file location and file name. You can export to a text file in tab-separated<br />

format.<br />

4. If you want to include column headings, select Include column headings.<br />

5. Click OK to create the file and Close to close the dialog box.<br />

Large Access Tables<br />

It is important to use small access tables where possible rather than single multi-dimensional access<br />

tables. In general it is far easier to understand and maintain several small access tables than a single<br />

large one. Replacing a large access table with small access tables may improve the performance of<br />

the <strong>Administration</strong> Console. However, in some cases, access to items in one dimension cannot be<br />

defined without reference to the items of another dimension, and so a multi-dimensional access<br />

table must be used.<br />

The following issues are associated with using large access tables:<br />

● They can cause substantial performance problems in the <strong>Administration</strong> Console.<br />

● They can increase the physical size of the cut-down model so that the benefit of using cut-down<br />

models is lost. This is because the access table needs to be resolved.<br />

● If cut-down models are not used, resolving access tables on client machines can cause<br />

performance problems.<br />

<strong>Administration</strong> Console Performance Issues<br />


When you perform an action in the <strong>Administration</strong> Console that makes use of the access tables,<br />

the system resolves the access tables in order to determine what access level applies to each cell,<br />

and which cells contain data. A large access table can slow down the system considerably. This is<br />

because a check is made whenever you load the development application in the <strong>Administration</strong><br />

Console, and whenever you save, to see whether the pattern of access has changed to or from No<br />

Data. This check is made primarily to determine whether a reconciliation job is required on running<br />

Go to Production, and which e.List items must be reconciled. For example, if you are importing<br />


data, a check is run to see if any changes have been made to the access tables, saved selections, or<br />

the e.List that result in a different pattern of No Data cells. Another example is when you run Go<br />

to Production and the cut-down models job is run (if set on). The cut-down models process uses<br />

the information from access tables to determine what information each user will get.<br />

Memory Needed to Resolve an Access Table<br />

The amount of memory needed to resolve an access table is determined by the product of the<br />

number of items in each dimension multiplied by four bytes, and so the greater the number of<br />

dimensions you include in an access table, the greater the memory required. If you use more access<br />

tables with fewer dimensions in each, the memory requirements are reduced considerably (this is<br />

demonstrated in Example 1). If the memory needed to resolve an access table is more than two GB,<br />

it will fail on the server, because two GB is the maximum memory that can be addressed by<br />

a 32-bit operating system. For example, an access table that includes an e.List of 1500 items, a products list<br />

of 1500 items, and 250 channels would exceed this limit (1500 x 1500 x 250 x 4 = 2,250,000,000 bytes).<br />
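This rule is easy to check with a short sketch (illustrative only; the four bytes per cell and the two GB ceiling come from the text above):<br />

```python
from math import prod

TWO_GB = 2 * 1024 ** 3  # addressability ceiling mentioned above

def access_table_bytes(*dimension_sizes):
    """Memory to resolve an access table: the product of the item
    counts of its dimensions, multiplied by four bytes."""
    return prod(dimension_sizes) * 4

# The example from the text: a 1500-item e.List, 1500 products, 250 channels.
needed = access_table_bytes(1500, 1500, 250)
print(needed, needed > TWO_GB)  # 2250000000 True
```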

Impact of Large Access Tables on Cut-down Models<br />

Using large access tables can have a negative impact on the physical size of the individual cut-down<br />

model definitions that are downloaded to the client computer.<br />

The cut-down model definition contains the information about the individual e.List item or review<br />

e.List item and its immediate children. It also contains the relevant part of the resolved access table<br />

and is cut down to the D-List items needed to control access. In the example above,<br />

the resolved access table will contain the products list of 1500 items and the channel list with 250 items.<br />

Multiplied by four bytes, this may increase the size of the cut-down model definition by as much<br />

as 1.5 MB (this could be less, for example, if only some of the channels apply).<br />

Examples<br />

These examples set access to the following dimensions in a Revenue Plan cube:<br />

The e.List (named Stores) contains these saved selections:<br />

● High street<br />

● Superstores<br />

● Telesales Centers<br />

These saved selections are subsets of the total (All). There are 1200 items in Stores.<br />

Channels contains these saved selections:<br />

● Retail<br />

● Discount<br />

● Mail Order<br />

These saved selections are subsets of the total (All). There are 12 items in Channels.<br />

Products contains these saved selections:<br />

● Sanders


● Drills<br />

Note that not all items are included in these saved selections. There are 400 items in Products.<br />

Example 1<br />

This example shows two different ways of setting access where available Channels vary by store<br />

and product selection is the same for all stores and channels. Tables A and B achieve this in the<br />

most efficient way. See the following calculations:<br />

The size of Access Table A is:<br />

● 12 (Channels) x 1200 (Stores) x 4 (bytes) = 57.6 KB<br />

The size of Access Table B is:<br />

● 400 (Products) x 4 (bytes) = 1.6 KB<br />

The total size of the two access tables is 59.2 KB.<br />

The size of the Access Table C is:<br />

● 400 x 12 x 1200 x 4 = 23.04 MB<br />

This means that Access Table C takes 22.98 MB more memory to resolve than tables A and B<br />

together.<br />

Using two separate access tables is also easier to maintain. For example, if High Street stores started<br />

selling through the Mail order channel, using two tables, you just add one line to Access Table A.<br />

But for Access Table C, you must add three lines.<br />

A. Access Table for Channels<br />

Access     Channel     Store<br />
No Data    All         All<br />
Write      Retail      High Street<br />
Write      Retail      Superstores<br />
Write      Discount    Superstores<br />
Write      Mail order  Telesales<br />

B. Access Table for Products<br />

Access     Product<br />
Write      All<br />
Read       Sanders<br />
Hidden     Drills<br />

C. Combined Access Table (Not Recommended)<br />

Access     Product    Channel     Store<br />
No Data    All        All         All<br />
Write      All        Retail      High Street<br />
Read       Sanders    Retail      High Street<br />
Hidden     Drills     Retail      High Street<br />
Write      All        Retail      Superstore<br />
Read       Sanders    Retail      Superstore<br />
Hidden     Drills     Retail      Superstore<br />
Write      All        Discount    Superstore<br />
Read       Sanders    Discount    Superstore<br />
Hidden     Drills     Discount    Superstore<br />
Write      All        Mail order  Telesales<br />
Read       Sanders    Mail order  Telesales<br />
Hidden     Drills     Mail order  Telesales<br />

Example 2<br />

This shows two different ways of setting access where products vary by store and channels vary by store.<br />

In this case, the e.List (Store) must be included in both access tables. Superstores can write to drills. Note that it is not necessary to include the line Write, Drills (Retail/Discount), Superstores in the access tables; it is shown for illustrative purposes only.<br />

The size of Access Table D is:<br />

● 12 (Channels) x 1200 (Stores) x 4 (bytes) = 57.6 KB<br />

The size of Access Table E is:<br />

● 400 (Products) x 1200 (Stores) x 4 (bytes) = 1.92 MB<br />

The size of Access Table F is:<br />

● 400 x 12 x 1200 x 4 = 23.04 MB<br />

So even with the additional dimension in Access Table E, the combined total of Tables D and E, 1.98 MB, is still 21.06 MB less than Access Table F.<br />

Note that although tables D and E are separate, they interact: the channels that are shown depend on which product is viewed and which e.List item is selected.<br />

D. Access Table for Channels<br />

Access     Channel     Store<br />
No Data    All         All<br />
Write      Retail      High Street<br />
Write      Retail      Superstores<br />
Write      Discount    Superstores<br />
Write      Mail order  Telesales<br />

E. Access Table for Products<br />

Access     Products   Store<br />
No Data    All        All<br />
Write      All        High Street<br />
Read       Sanders    High Street<br />
Hidden     Drills     High Street<br />
Write      All        Telesales<br />
Read       Sanders    Telesales<br />
Hidden     Drills     Telesales<br />
Write      All        Superstores<br />
Read       Sanders    Superstores<br />
Write      Drills     Superstores<br />

F. Combined Access Table (Not Recommended)<br />

Access     Product    Channel     Store<br />
No Data    All        All         All<br />
Write      All        Retail      High Street<br />
Read       Sanders    Retail      High Street<br />
Hidden     Drills     Retail      High Street<br />
Write      All        Retail      Superstore<br />
Read       Sanders    Retail      Superstore<br />
Write      Drills     Retail      Superstore<br />
Write      All        Discount    Superstore<br />
Read       Sanders    Discount    Superstore<br />
Write      Drills     Discount    Superstore<br />
Write      All        Mail order  Telesales<br />
Read       Sanders    Mail order  Telesales<br />
Hidden     Drills     Mail order  Telesales<br />

Example 3<br />

In this example, the product selection varies by Store and Channel. Superstores can write to drills for the discount channel only.<br />

An Access Table that Cannot be Split<br />

Access     Products   Channels    Store<br />
No Data    All        All         All<br />
Write      All        Retail      High Street<br />
Read       Sanders    Retail      High Street<br />
Hidden     Drills     Retail      High Street<br />
Write      All        Retail      Superstores<br />
Read       Sanders    Retail      Superstores<br />
Hidden     Drills     Retail      Superstores<br />
Write      All        Mail Order  Telesales<br />
Read       Sanders    Mail Order  Telesales<br />
Hidden     Drills     Mail Order  Telesales<br />
Write      All        Discount    Superstores<br />
Read       Sanders    Discount    Superstores<br />
Write      Drills     Discount    Superstores<br />

You do not need to include the Write, Drills, Discount, Superstores line; it is included for illustrative purposes.<br />

Multiple Access Tables<br />

No two access tables that control the same dimension can be applied to the same cube, but you can have multiple access tables using different dimensions that apply to the same cube. Where this is the case, a planner will get the lowest level of access amongst all the access tables.<br />

Example<br />

In the Versions dimension, item Budget version 1 is writeable for the planner and item Budget version 2 is read-only. In the Expenses dimension, the item Telephone is writeable and the item Donations is hidden. The planner will get the following resolved access:<br />

                   Telephone   Donations<br />
Budget version 1   Write       Hidden<br />
Budget version 2   Read        Hidden<br />
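The lowest-level-wins rule above can be sketched as follows. The ranking and the `resolve` helper are assumptions for illustration, not a product API.

```python
# Rank access levels from least to most permissive; the resolved access for a
# cell is the lowest level granted by any applicable access table.
RANK = {"No Data": 0, "Hidden": 1, "Read": 2, "Write": 3}

def resolve(*levels):
    return min(levels, key=RANK.__getitem__)

versions = {"Budget version 1": "Write", "Budget version 2": "Read"}
expenses = {"Telephone": "Write", "Donations": "Hidden"}

# Resolve each (version, expense) cell across the two access tables.
resolved = {(v, e): resolve(versions[v], expenses[e])
            for v in versions for e in expenses}

print(resolved[("Budget version 1", "Donations")])  # Hidden
print(resolved[("Budget version 2", "Telephone")])  # Read
```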

When access to the cells of a cube needs to be controlled using more than one dimension (as in the<br />

example above), you must decide whether to use multiple access tables (one for each dimension),<br />

or one access table using all the dimensions. You should choose multiple access tables (one for each<br />

dimension) wherever possible (as in the example above). In general this will be much easier to<br />

understand and maintain. You should only use a multi-dimensional access table in circumstances<br />

when access to items in one dimension cannot be defined without reference to the items of another<br />

dimension.<br />

Conflicting access tables are not allowed. You cannot apply multiple access tables to one cube using the same dimension. For example, if you have applied an access table for the dimension Months to a cube, you cannot apply another access table that uses Months, nor one that uses Months and Versions, and so on. After choosing the dimensions for an access table, the access table can be applied only to cubes that contain those dimensions.<br />

Changes to Access Tables That Cause a Reconcile Job to Be Run<br />

Changes to access tables may or may not cause a reconcile job to be run, depending on whether<br />

they impact the pattern of No Data cells in a model.<br />

When access tables are changed, the system determines whether there is any impact on the pattern<br />

of No Data cells in a model. If the system determines that there is no impact, then no reconciliation<br />

takes place.<br />

If however the system determines that there is an impact, then all e.List items must be reconciled.<br />

All contribution e.List item data blocks are updated, and all review e.List items are re-aggregated.<br />

If an access table definition is changed, the system compares the resolved pattern of No Data cells<br />

before and after the changes. If the pattern of No Data cells is not identical, then reconciliation of<br />

all e.List items is required. If the pattern of No Data cells is identical, then no reconciliation is<br />

required.<br />

This comparison is only made for e.List items shared by the development application and the current production application, so the addition or deletion of e.List items does not in itself require reconciliation of all e.List items.<br />

If an entire access table is added or deleted, then reconciliation of all e.List items is required.<br />

If the set of cubes an access table applies to is changed, then reconciliation of all e.List items is<br />

required.<br />
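The decision described above can be sketched as follows. The function name and the representation of a No Data pattern as a set of cell coordinates per e.List item are assumptions for illustration.

```python
def reconcile_needed(before, after, shared_elist_items):
    """Reconcile all e.List items only if the resolved pattern of No Data
    cells changed for an e.List item shared by development and production."""
    for item in shared_elist_items:
        if before.get(item, set()) != after.get(item, set()):
            return True
    return False

before     = {"CC1": {("Jan", "Sanders")}, "CC2": set()}
after_same = {"CC1": {("Jan", "Sanders")}, "CC2": set(), "CC3": {("Feb", "Drills")}}
after_diff = {"CC1": set(), "CC2": set()}

# CC3 exists only in the development application, so it is ignored.
print(reconcile_needed(before, after_same, ["CC1", "CC2"]))  # False
print(reconcile_needed(before, after_diff, ["CC1", "CC2"]))  # True
```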

Changes to the e.List which affect saved selections used in access tables can also cause a reconciliation<br />

job to run. For more information, see "The Effect of Changes to the e.List on Reconciliation" (p. 104).<br />

Changes to Access Tables and Saved Selections and the Effect on Reconciliation<br />


When access tables are changed, the system determines whether there is any impact on the pattern<br />

of No Data cells in a model.<br />

If an impact is determined, all e.List items are reconciled. Access tables can be changed indirectly<br />

by changing a saved selection used by an access table, or by making certain changes to the e.List if<br />

the e.List is used in a saved selection used by an access table. Note that changes made to other<br />

dimensions may impact access tables via saved selections, but these changes are introduced via<br />

synchronize which always requires full reconciliation.<br />

The cases where changes to the e.List affect saved selections are:


● A saved selection on the e.List uses Filter and existing contribution items are renamed in the<br />

e.List.<br />

● A saved selection on the e.List uses Enlarge (of a review e.List item) and existing contribution<br />

items are moved between review items in the e.List.<br />

This only applies to e.List items shared by the development application and the current production<br />

application. The addition or deletion of e.List items in itself will not require reconciliation of all<br />

e.List items.<br />

Access Tables and Import Data<br />

The import queue can be deleted in two ways. It will be deleted if any changes are made to existing<br />

saved selections or e.Lists that result in a different pattern of No Data cells for contribution e.List<br />

items that are common to both the development and production applications. It will also be deleted<br />

if any access tables are created after data has been imported.<br />

Access Levels and Contributor Data Entry<br />

With the exception of the No Data access setting, access settings affect only planner data entry. They do not affect D-Links, import, or data to be published:<br />

● D-Links may target Hidden or Read-only cells.<br />

● Import may target Hidden or Read-only cells. All valid data present in an ASCII file is imported into a cube. To limit the selection of cells targeted, you should cut down the ASCII file to contain only the required data. Data in the source ASCII file that does not match an item in a cube is not imported and is reported as an error. Import cannot target formula cells in a cube. Import data should not target No Data cells.<br />

● Published data can include cells that are marked as Hidden, Read-only, or Writeable.<br />

Force to Zero<br />

In Analyst, the calculation option Force to Zero forces calculations in other dimensions to return zero. Contributor interprets this option differently, effectively as Force to No Data. This can cause items to disappear from the Contributor grid.<br />

If you do not want such items to disappear in Contributor, remove the Force to Zero setting in Analyst.<br />

Reviewer Access Levels<br />


Access for reviewers cannot be defined explicitly. Access for any review e.List item is automatically<br />

derived from access settings applied to the planners below the particular review e.List item.<br />


Cut-down Models<br />

Cut-down models are customized copies of the master Contributor model definition that have been cut down to include only the specific elements required for a particular e.List item. Note that the e.List is also cut down.<br />

Cut-down models can substantially reduce the size of the model that the Web client has to download<br />

when there are large dimensions containing hundreds or thousands of items, of which only a few<br />

are required for each planner.<br />

However, the cut-down model process significantly increases the amount of time it takes to run the<br />

Go to Production process.<br />

The process of creating the cut-down model for a particular e.List item is as follows:<br />

● All access tables are resolved.<br />

● Items which are entirely No Data for all cubes in the model are identified.<br />

● Items are removed from dimensions where possible. For more information, see "Restrictions to Cutting Down Dimensions" (p. 140).<br />

● The cut-down model definition is saved in the datastore.<br />
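The steps above can be sketched with plain dictionaries. Everything here is a simplified illustration; the real process runs inside the Go to Production job and works on the model definition itself.

```python
def build_cut_down_model(dimensions, no_data_items, protected_dims):
    """Drop items that are No Data across all cubes, except from dimensions
    that may never be cut down (timescales, assumption-cube dimensions, ...)."""
    cut_down = {}
    for dim, items in dimensions.items():
        if dim in protected_dims:
            cut_down[dim] = list(items)  # kept in full
        else:
            removed = no_data_items.get(dim, set())
            cut_down[dim] = [i for i in items if i not in removed]
    return cut_down

dims = {"Employees": ["E1", "E2", "E3"], "Months": ["Jan", "Feb"]}
no_data = {"Employees": {"E2", "E3"}, "Months": {"Feb"}}

model = build_cut_down_model(dims, no_data, protected_dims={"Months"})
print(model["Employees"])  # ['E1']
print(model["Months"])     # ['Jan', 'Feb'] -- timescales are never cut down
```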

When Does the Cut-down Models Process Happen?<br />

The cut-down model process is triggered in the following circumstances:<br />

● The first time Go to Production is run and one of the cut-down model options has been selected.<br />

● Changes have been saved to the Contributor model, one of the cut-down model options has been selected, and Go to Production is run.<br />

Cut-down models are not created if no changes have been saved to the Contributor model. A change is any action which allows you to click Save after completing it. Changes to existing translations are exceptions to this rule: if the only change you make is to an existing translation, the cut-down model process will not be triggered.<br />

Importing data does not change the Contributor model. This means that you can import data and run the Go to Production process without causing the cut-down model process to be triggered.<br />

When changes are saved to the Contributor model, the package GUID in the model (a unique identifier that is used to reference objects) is also changed, causing cut-down model jobs to be created. If no changes have been made, the GUID does not change, so there is no need for cut-down models to be run.<br />

Limitations<br />

The cut-down model process can cause the runtime load on the server to be adversely affected.<br />

Without cut-down models there is a single model definition which can be cached in memory on the<br />

server, reducing the number of calls to the datastore. When cut-down model definitions are used,<br />

there are too many of them to cache in memory on the server. As a result, the particular model<br />

definition must be retrieved from the datastore every time.


Even if cut-down models are not being used, the same process of cutting down the dimensions<br />

happens anyway when the model definition is loaded. The benefits of using No Data access settings<br />

to reduce memory requirements and decrease block size apply regardless of whether cut-down<br />

models are being used. See "Restrictions to Cutting Down Dimensions" (p. 140) for more information.<br />

Cut-down Model Options<br />

The following options are available for cut-down models:<br />

● No cut-down models (default).<br />

● For each aggregate e.List item (p. 139).<br />

● For every e.List item (p. 139).<br />

Create Cut-down Model Definition for Each Aggregate e.List Item (Review Level Model Definition)<br />

In review-level model definitions, separate model definitions are produced for each review e.List<br />

item and its immediate children. In this case all contribution e.List items below a particular review<br />

item use the same model definition.<br />

Because review-level model definitions require considerably fewer model definitions, they take less<br />

time to produce or recreate. They should be used when the selections are not small subsets of those<br />

required at parent level, or when it would take too long to produce or recreate the planner-specific<br />

model definitions - typically with e.Lists with thousands of items. This option is a compromise<br />

between no cut-down models and fully cut-down models.<br />

Create Cut-down Model Definition for Every e.List Item<br />

In planner-specific model definitions, all required model definitions are individually produced:<br />

● One for each contribution e.List item.<br />

● One for each review e.List item with its immediate children (extra model definitions for the<br />

individual review e.List items are not required).<br />

● One for each multi-e.List item "my contributions" view (where a planner has responsibility for<br />

multiple e.List items). Model definitions are not produced where a planner owns all the children<br />

of a particular review e.List item. The review and children model definition will be used instead.<br />

The benefit of creating a cut-down model definition for every e.List item is that performance is<br />

optimized for each planner. But it may take some time to produce or recreate the model definitions.<br />

This option should be used when the appropriate selections for the children of one review e.List<br />

item are small subsets of the selections required for the parent review e.List item.<br />

Cut-down Models and Translation<br />

A cut-down model is only created for the base language. When a user wants to view their slice of the Contributor model in the Web client, the language that they see is determined at the point when they ask to see the model. This means that the language can be changed without having to run Go to Production.<br />


Cut-down Models and Access Tables<br />

When a planner opens the grid for a Contributor application, the Web client receives two pieces of data from the server. The first is the model definition (also referred to as the XML package) and the second is the data block that contains the values that will populate the grid. Together these are referred to as the model.<br />

Access tables control which cells in a model are Writeable, Read-only, Hidden, or contain No Data.<br />

Cut-down models are customized copies of the master model definition that have been cut-down<br />

to include only the specific elements required for a particular e.List item.<br />

Access tables must be carefully considered when setting up cut-down models. Potentially, the<br />

overhead in terms of model size and memory usage for using access tables can be higher than the<br />

benefit gained from using cut-down models.<br />

When models are large, you should use access tables along with cut-down models so that the size<br />

of the model to be downloaded to each client is reduced.<br />

Cut-down model options are set in the Application Options pane, see "Change Application<br />

Options" (p. 74).<br />

Restrictions to Cutting Down Dimensions<br />


There are restrictions involved when cutting down dimensions.<br />

Certain types of dimension are never cut down:<br />

● Timescales (cutting down timescales would affect the result of BiF calculations).<br />

● The data dimension of the source cube for an accumulation link (that is, the dimension which<br />

contains D-List format items that are treated as if they were dimensions of the source cube).<br />

● The data dimension of the target cube for a lookup link (that is, the dimension which contains<br />

D-List format items that are treated as dimensions of the target cube).<br />

● A dimension used in an assumption cube.<br />

● A dimension that is also used as a D-List format.<br />

Certain items will not be removed:<br />

● Items are not removed if they are used in a calculation that is not a simple sum, unless the calculation itself is also being removed.<br />

● Items that are the weighting for a weighted average are not removed unless the average is also removed.<br />

The level of cut-down applied per dimension is the resolved level across all cubes. This is why it is<br />

impossible to cut down a dimension that is used in both an assumption cube and a contribution<br />

cube, because the entire dimension is required for the assumption cube. Where the same dimension<br />

occurs in two or more contribution cubes with different access tables, it will only be cut-down to<br />

remove items that are not required in any cube. As a result, there are cases where dimensions are<br />

not cut-down as much as might be expected, resulting in greater memory usage. However, there<br />

are ways in which to structure the model to avoid this situation.
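The resolved-level rule can be pictured as a set intersection. The helper below is hypothetical; an assumption cube needs the whole dimension, so it contributes an empty No Data set and blocks all removal.

```python
def removable_items(dimension_items, no_data_by_cube):
    """An item can be cut from a dimension only if it is No Data in every
    cube that uses the dimension."""
    removable = set(dimension_items)
    for no_data in no_data_by_cube.values():
        removable &= no_data
    return removable

items = {"Sanders", "Drills", "Saws"}

# Saws is No Data in both contribution cubes, so only Saws can be removed.
print(removable_items(items, {"Revenue": {"Drills", "Saws"}, "Costs": {"Saws"}}))
# {'Saws'}

# An assumption cube requires the full dimension: nothing is removable.
print(removable_items(items, {"Revenue": {"Drills", "Saws"}, "Assumptions": set()}))
# set()
```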


Cutting Down a D-List to No Data<br />

If a D-List is used by multiple cubes but is cut down to No Data in just one of them, then the cube with No Data will be hidden from users of that e.List item, but the size of the dimension will be the full size of the D-List. Therefore, the size of a D-List for any e.List item will always be that size, even if set as No Data.<br />

For example, if you have a model where:<br />

● cube 1 contains the dimensions e.List, products, customers, and timescale<br />

● cube 2 contains e.List, customers, and timescale<br />

● cube 3 contains e.List, products, and timescale<br />

If you create an access table on customers and products and include the e.List, and then add a line to make all products and customers No Data for just one of the e.List items, it will apply to cube 1, as that is the only cube that has both products and customers.<br />

If you then log on to the grid as the user of that e.List item, you will not see cube 1 because it will be hidden. But the size of the dimensions will be unchanged, because the customers list is needed in full for cube 2 and the products list is needed in full for cube 3.<br />

Example 1<br />

If you have a dimension that cannot be cut down because it is used by an assumption cube, you could create an identical dimension to substitute into the assumption cube, leaving the dimension in the other contribution cubes to be cut down.<br />

Example 2<br />

If the assumption cube is causing the problem, an alternative is to add the e.List to the assumption cube and apply access settings to this cube so that the dimension can be cut down.<br />

Estimating Model and Data Block Size<br />

The size of the model definition XML is primarily dependent on the total number of items in the dimensions of the model, and is not dependent on the number of data cells. The master model definition can be between 100 bytes and 250 bytes per dimension item. e.List items are generally larger as they contain more information: approximately 1 KB if there is roughly one user per e.List item. Typically, most of the other information in a model definition is small in relation to the dimension items. The only exception is that the data for assumption cubes is stored in the model, at approximately 10 bytes per data cell. If the assumption cubes are large, allow for this when estimating the model XML size.<br />

If cut-down models are used, the same rule still applies for each cut-down model, but the number of items in the dimensions will be reduced as a result of access tables. Even without access tables the e.List will be cut down.<br />


The size of the XML data block for a particular e.List item is proportional to the number of dense data cells in all the cubes for that item. However, this is very hard to estimate, because it depends on the proportion of cells that contain non-zero data, and also on the pattern of how these cells are spread through the cube. Also, certain data values take less space than others (small integers are packed more efficiently than large integers or floating point values). As an approximate upper bound, a data block should not be much larger than 16 bytes per cell in all the cubes.<br />

Please note these are rough estimates.<br />
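These rules of thumb can be collected into a rough estimator. The byte figures are the ones quoted above; the function names are illustrative, and the result is an order-of-magnitude guide only.

```python
def estimate_model_xml_bytes(dimension_items, elist_items, assumption_cells,
                             bytes_per_item=250):
    """100-250 bytes per ordinary dimension item, ~1 KB per e.List item,
    ~10 bytes per assumption-cube data cell."""
    return (dimension_items * bytes_per_item
            + elist_items * 1000
            + assumption_cells * 10)

def estimate_data_block_bytes(dense_cells):
    # Approximate upper bound: ~16 bytes per dense cell across all cubes.
    return dense_cells * 16

print(estimate_model_xml_bytes(10_000, 0, 0))  # 2500000 (~2.5 MB)
print(estimate_data_block_bytes(1_000))        # 16000
```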

Cut-down Model Example<br />


In the e.List shown, Total and Div1 to Div10 are review e.List items, and CC1a to CC10j are contribution e.List items (all owned by different users):<br />

The model is an Employee Plan cube with an Employees dimension. Each cost center (CC) has 100 of its own employees with no access to other employees. You would use an access table to give each CC write access to the appropriate 100 employees, with no data access to the rest.<br />

With no cut-down models, each planner will receive a model definition including a 10,000-item<br />

employee list, which is large in size (approximately 2.5 MB). Only one model definition needs to<br />

be produced and updated.<br />

With planner-specific model definitions, each planner’s model definition will contain only the<br />

required 100 items from the Employees dimension (approximately 25KB). Each dimension item is<br />

around 250 bytes. One hundred and eleven model definitions must be produced and updated.<br />

With review-level model definitions, the dimension definition downloaded to each planner will contain 1,000 items. This element of the model definition will be ten times larger than it needs to be, but still ten times smaller than the full version (approximately 250 KB). Eleven model definitions must be produced and updated.<br />
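The counts in this example can be checked with a small sketch. This is an illustration only; it ignores the extra model definitions for multi-e.List item "my contributions" views mentioned earlier.

```python
def definition_counts(review_items, contribution_items):
    """Number of model definitions produced under each cut-down option."""
    return {
        "no cut-down models": 1,
        "per aggregate e.List item": review_items,
        "per e.List item": review_items + contribution_items,
    }

# Total + Div1..Div10 = 11 review items; CC1a..CC10j = 100 contribution items.
counts = definition_counts(review_items=11, contribution_items=100)
print(counts["per aggregate e.List item"])  # 11
print(counts["per e.List item"])            # 111
```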

To decide which cut-down method to use, consider these factors:<br />

● The application structure itself.<br />

● Whether bandwidth is an issue (are there many dial-up connections?)<br />

● Time taken to produce and re-create the cut-down models.<br />

● Number of e.List items.<br />

● e.List hierarchy. For example, with review-level model definitions it may be sensible to reduce the number of contribution e.List items per review e.List item by introducing dummy review e.List items to reduce the size of the model definitions.<br />


Chapter 9: Managing Data<br />

The following types of data can be imported into and exported from IBM Cognos 8 Planning - Contributor.<br />

For each type of data, the function and target version are as follows:<br />

● Data in other Contributor applications and cubes: Administration links (p. 147) in the Administration Console. Targets either the production or development application.<br />

● Data in other Contributor applications and cubes: System links (p. 144) in the Administration Console, executed in the Web client using Get Data. Targets the production application.<br />

● Data in Contributor applications, Analyst libraries, macros, and administration links: Export a Contributor model or model with data, administration links, macros, and Analyst libraries and import them into a target environment (p. 170). Targets the development, test, or production applications.<br />

● Text files: Import Data (p. 173) in the Administration Console. Targets the development application.<br />

● IBM Cognos 8 Business Intelligence data sources, including SAP BW: Import from IBM Cognos Package using administration links (p. 147). Targets the development application.<br />

● Text files and Contributor cubes: Local links in the Web client. Targets the production application.<br />

● Data in Analyst: Analyst&gt;Contributor links in Analyst. See the Analyst User Guide. Targets either the production or development application.<br />

If you are moving data between Contributor cubes and not making model changes, use an administration link to move data into the production application. The data is processed using an activate process, so you do not have to run Go to Production. Note that there is no option to back up the datastore when targeting the production application. You can target only the development application if you are importing data from IBM Cognos 8 packages.<br />

Administrators can also set up links that are run from a Web client session, enabling Web client users to move data between a cube in a source application and a cube in a target application. For more information, see "Administration and System Links" (p. 144).<br />


When you import data into a Contributor application, the data is first put into an import queue.<br />

There are two import queues, one for the development version of an application and one for the<br />

production version of an application. The import queues are independent of each other and contain<br />

the data in import blocks that are applied to an e.List item during a reconcile job.<br />

For each e.List item in an application, there is a model import block. Prior to being moved into the cube by a reconcile job (p. 54), the data from importing data, administration links, Analyst to Contributor links, or Contributor to Contributor links is placed into this model import block. For links that target the development application, the reconcile job is created during the Go to Production process. For links that target the production application, an activate process creates a reconcile job.<br />

Important: Be aware that if two reconcile jobs are run while users are working offline, the users<br />

will be unable to bring the data online. See "Editor Lagging " (p. 255) for more information.<br />

Because you can have multiple cube import blocks per cube, you can run administration links and<br />

Analyst to Contributor links, as well as import data, concurrently.<br />

Note that a model import block is represented by a row in the import queue table in the application<br />

datastore. An individual cube import block cannot be seen in the datastore.<br />
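The queueing behaviour described here can be pictured with a minimal sketch. The class and method names are hypothetical: one model import block per e.List item, holding cube import blocks that a reconcile job later consumes.

```python
from collections import defaultdict

class ImportQueue:
    """One model import block per e.List item; each holds cube import
    blocks that a reconcile job applies to the cubes."""
    def __init__(self):
        self.blocks = defaultdict(list)  # e.List item -> cube import blocks

    def queue(self, elist_item, cube, data):
        self.blocks[elist_item].append((cube, data))

    def reconcile(self, elist_item):
        # A reconcile job drains and applies all queued blocks for the item.
        applied, self.blocks[elist_item] = self.blocks[elist_item], []
        return applied

q = ImportQueue()
q.queue("CC1a", "Revenue Plan", {"Jan": 100})
q.queue("CC1a", "Expenses", {"Jan": 40})  # multiple cube blocks can coexist
print(len(q.reconcile("CC1a")))  # 2
print(q.blocks["CC1a"])          # []
```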

Understanding Administration, System, and Local Links<br />


Administrators can move data between <strong>Contributor</strong> cubes and applications using administration<br />

and system links. Administrators can also import data from IBM Cognos 8 packages using administration links. Web client users can move data between <strong>Contributor</strong> cubes using local links.<br />

<strong>Administration</strong> and System Links<br />

Administrators can create and run administration links to move large amounts of data between a<br />

source application and a target application and from an IBM Cognos 8 package to a <strong>Contributor</strong><br />

application.<br />

Administrators can create system links to allow users to move small amounts of data between a<br />

cube in a source application and a cube in a target application. These links are run from a Web<br />

client session by the Web client user.<br />

Note: For Classic <strong>Contributor</strong> Web Client users, the Get Data extension must be configured before<br />

you can create a system link. For more information about configuring the Get Data extensions, see<br />

"Configure Classic Client Extensions" (p. 301).<br />

The differences between administration and system links are summarized in the following lists.<br />

<strong>Administration</strong> Links<br />

● Run by the administrator in the <strong>Contributor</strong> <strong>Administration</strong> Console, and by using macros.<br />

● Designed to move large amounts of data and can be scheduled.<br />

● Run on the job servers.<br />

● Stored in the Content Manager datastore.<br />

● When moving data between <strong>Contributor</strong> applications, can contain multiple elements (sub-links), enabling a single link to have many cubes as the source and target.<br />

● Can map an e.List dimension to a non-e.List dimension, enabling you to move data between applications that do not share an e.List.<br />

● Can run a link to a locked e.List item.<br />

● When moving data between <strong>Contributor</strong> applications, can be tuned for optimal performance.<br />

● Can be sourced from IBM Cognos Packages.<br />

System Links<br />

● Run on the <strong>Contributor</strong> Web client by the <strong>Contributor</strong> application user (but created by the administrator in the <strong>Contributor</strong> <strong>Administration</strong> Console).<br />

● Designed to move small amounts of data on an ad-hoc basis.<br />

● Run on the Web client computer.<br />

● Stored with the target application.<br />

● Can contain only one element, and as a result can contain only one source and target cube.<br />

● Can only map an e.List to an e.List dimension.<br />

● Cannot run a link to a locked e.List item.<br />

● Cannot be tuned for optimal performance.<br />

● Cannot be sourced from IBM Cognos Packages.<br />

Local Links<br />

Local links allow Web client users to load data into the <strong>Contributor</strong> application from external data sources, and from the active <strong>Contributor</strong> grid. You create and run local links in the Web client using Get Data. For best performance, we recommend that users import into one e.List item at a time from external sources.<br />

Local links are similar to system links, except for the following differences:<br />

● Local links are created in the Web client, and not the <strong>Contributor</strong> <strong>Administration</strong> Console.<br />

● Local links can be used to import data from external data sources.<br />

● In local links, users can only import data from tabs in the active <strong>Contributor</strong> grid (system links<br />

can import data from source cubes to which the user has no access rights).<br />

Using Links to Move Data Between Cubes and Applications<br />

The kind of link you use depends on your role and what you want to do.<br />

Web client users (planners) can move data into the Web client from external sources, or from the<br />

active <strong>Contributor</strong> grid, using local links. For best performance, we recommend that users import<br />

data into one e.List item at a time from external sources. Web client users can also move data<br />

between cubes for one e.List item at a time, using system links created by the administrator.<br />

Note: For Classic <strong>Contributor</strong> Web Client users, the Get Data extension must be configured before users can create a local link. For more information, see "Configure Classic Client Extensions" (p. 301).<br />


Administrators move data from one production application to another, or to development applications, by using administration links. The administration link process uses the job system and so<br />

enables you to move large amounts of data. It is quicker to move data into the production application<br />

if you have no model changes to make. This is because if you move data into the development<br />

application, you must run Go to Production before the data is available to the Web client.<br />

You can copy commentary between applications using administration, system, or local links. After<br />

you run a link that includes commentary, annotation or attached documents, the target will have<br />

the same value and commentary as the source. This means that commentary in the target is removed<br />

if there is no commentary in the source. If you target a cell with more than one source cell, it will<br />

contain the aggregated value and the commentary from all the source cells. If you select only one<br />

type of commentary in the link, then the other type of commentary is not affected by running the<br />

link. You will not have multiple copies of commentary in target cells if you rerun the link.<br />

If you run more than one link to the same application, and the same cell is targeted, the most recent<br />

value is returned.<br />
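The commentary rules above can be expressed as a short sketch. This is hypothetical Python with invented names, not a Cognos API: the target mirrors the source, a target cell fed by several source cells gets the aggregated value plus all source commentary, and rerunning the link replaces rather than duplicates what is there.<br />

```python
# Illustrative sketch (hypothetical names, not a Cognos API) of the
# commentary rules: the target mirrors the source, multi-source target
# cells aggregate values and collect all source commentary, and a rerun
# replaces the prior result instead of duplicating it.
def run_link(source_cells, mapping):
    """source_cells: {cell: (value, [comments])}
    mapping: {target_cell: [source cells feeding it]}"""
    target = {}
    for tgt, sources in mapping.items():
        value = sum(source_cells[s][0] for s in sources)
        comments = [c for s in sources for c in source_cells[s][1]]
        target[tgt] = (value, comments)   # replaces prior state: no duplicates
    return target

src = {"A": (10, ["forecast raised"]), "B": (5, ["travel freeze"])}
once = run_link(src, {"T": ["A", "B"]})
twice = run_link(src, {"T": ["A", "B"]})   # rerun: same result, not doubled
print(once)   # {'T': (15, ['forecast raised', 'travel freeze'])}
assert once == twice
```

Note how an empty comment list in the source leaves the target with no commentary, matching the rule that target commentary is removed when the source has none.<br />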

You can import local links from <strong>Contributor</strong> for Microsoft Excel or the Classic <strong>Contributor</strong> Web Client, or export links to those tools. Local links that you want to import and export between the Classic <strong>Contributor</strong> Client or <strong>Contributor</strong> for Microsoft Excel and the <strong>Contributor</strong> Web Client must be saved as link definition files (*.cld). You can save the link as XML if you are importing and exporting links using the <strong>Contributor</strong> Web Client.<br />

Administrators can also move smaller amounts of information from Analyst to <strong>Contributor</strong> using Analyst > <strong>Contributor</strong> links. This process does not use the job system. We recommend that you use<br />

the @SliceUpdate macro to split one large link (across the entire e.List) into smaller links that deal<br />

with smaller numbers of e.List items at a time. A slice update sample is available on the IBM Cognos<br />

Resource Center Web site.<br />

For more information, see the IBM Cognos 8 <strong>Planning</strong> - Analyst User <strong>Guide</strong>.<br />

Using Links in Model Design<br />


If you are designing new models, you can use administration and system links to improve performance and make security maintenance easier.<br />

Instead of creating one large model targeted at many Web client users and then controlling what<br />

they can see through access tables, you can create several smaller models, each targeted at a smaller<br />

specific user group, and link the models together.<br />

Smaller models mean there is less need for access tables to control what users see. Go to Production<br />

times are typically faster because you can have a shorter e.List. The number of cut-down models<br />

can also be reduced, shortening the processing time.<br />

The following examples show how you can use links in your model design.<br />

Cascaded Models<br />

Using administration links, you can create several small models that contain a high level of detail,<br />

targeted at regional managers, and roll them up into a larger application with less detail so that the<br />

top executives see only the numbers that they are interested in.<br />

For example, you can have America, Asia, and Europe models rolling up into a Corporate model.


Matrix Management<br />

Using administration links, you can create models that allow data to roll up both on a regional and<br />

departmental basis, with approvals from both organization structures.<br />

For example, you can have a Company model where Human Resources reports into Country, and<br />

this can be linked into a Corporate model where Country reports into Human Resources.<br />

Enhanced Security<br />

<strong>Administration</strong> links allow you to separate cubes into applications by purpose. For example, you<br />

can have a sales forecasting application, a travel planning application, and a salary planning<br />

application. This separation of duty can improve security maintenance. An application containing<br />

a salary plan model may require many access tables to specify who can view the cube; you can simplify the cube by separating the access tables from the cube.<br />

<strong>Administration</strong> Links<br />

An administrator can use administration links to copy data between <strong>Contributor</strong> applications<br />

without having to publish data first. You can also use administration links to import data from<br />

IBM Cognos 8 data sources such as Oracle data stores, SQL Server data stores, SAP BW, or IBM<br />

Cognos TM1. If importing from IBM Cognos 8 data sources, you must first create a Framework<br />

Manager model and publish it as a package to IBM Cognos Connection. For more information,<br />

see "Importing Data from IBM Cognos 8 Data Sources" (p. 164).<br />

If you are importing data from a <strong>Contributor</strong> application, you must have sufficient access rights to<br />

select applications as the source and target of a link. If you are importing from IBM Cognos 8 data<br />

sources, you must have the rights to select applications as the target of a link, and be able to access<br />

the source package in IBM Cognos Connection. For more information, see "Configuring Access to<br />

the <strong>Contributor</strong> <strong>Administration</strong> Console" (p. 38). If you have appropriate rights, you can secure<br />

the ability to create, edit, execute, delete, import, and export administration links. You can also<br />

secure previously created administration links (administration link instances).<br />

Because data can be moved around easily, you can create smaller applications. Smaller applications<br />

can improve performance because shorter e.Lists have quicker reconciliation times. Additionally,<br />

smaller applications usually do not need as many access tables and cut-down models, so the time<br />

taken to run Go to Production is reduced. You can tune administration links for optimal performance. For more information, see "Tuning <strong>Administration</strong> Links" (p. 158) and "Cut-down Models and Access Tables" (p. 140).<br />

The source application must be a production application, which means that Go to Production must<br />

be run at least once. Also, all e.List items in the source application must be reconciled, otherwise<br />

the link will not run. The target application can be either the production application or the development application. An e.List must be defined. You can map an e.List dimension to a non-e.List<br />

dimension to move data between applications that do not share an e.List. This is not possible in a<br />

system link.<br />


<strong>Administration</strong> links are similar to D-Links defined between Analyst D-Cubes, except that look-up links and the Fill, Add, and Subtract modes are not supported. <strong>Administration</strong> links are also similar<br />

to system links. Administrators set up system links between applications that can be run on the<br />

Web client by end-users using Get Data. System links also have to have a source e.List mapped to<br />


a target e.List. Unlike administration links, a single system link can run from only one source cube<br />

in one application to one target cube in one application. You can, however, set up multiple system<br />

links.<br />

Administrators set up a series of elements that define sub-links from the production versions of<br />

applications to either the development or production versions of target applications.<br />

If the elements are grouped together into a single link so they can be run at the same time, you can<br />

move data simultaneously between multiple applications. For example, you may want to move data<br />

between the following applications:<br />

● Sales > Profit and Loss<br />

● Marketing > Profit and Loss<br />

● Personnel > Profit and Loss<br />

<strong>Administration</strong> links do not run unless one or more job servers are monitoring the <strong>Planning</strong> content store. For information about adding the <strong>Planning</strong> content store to a job server, see "Add Applications and Other Objects to a Job Server Cluster" (p. 57).<br />

Order of Link Elements<br />

The order of elements in a link is important if individual elements in a link target the same application, cube, and e.List item. The order matters because of the way the data load works. When the<br />

administration link is run, each element of the link creates an individual cube import block for each<br />

e.List item that it is targeting.<br />

For example, if three link elements all target two e.List items in the Expenses cube in the Sales<br />

application, six cube import blocks are created.<br />

● When link element one is run, two cube import blocks are created, one for each e.List item.<br />

● When link element two is run, it targets the same e.List items, and overlaps some of the cells<br />

targeted by link element one. A further two cube import blocks are created.<br />

● When link element three is run, it also targets the same e.List items, and two cube import blocks<br />

are created.<br />

Any cells that are updated by link element three that overlap elements two and one take the value<br />

from element three. Each e.List item in the target application can have only one Model Import<br />

Block (MIB) per application state type. The MIBs are stored in the import queue. There could be<br />

one for development and one for production. Each MIB can hold many cube import blocks (CIB).<br />

CIBs are inserted into the MIB in chronological order. There is no specific precedence related to<br />

the source of the data. Each link element has its own CIB, as does each Analyst link, plus another<br />

CIB for the relational import.<br />

Note: Where multiple link elements exist in a link, the CIBs for those links will be in the order that<br />

the link elements are defined in the link.
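The example above can be traced in a short sketch. This is hypothetical Python with invented element and e.List names, not a Cognos API: three elements each targeting two e.List items produce six CIBs, and overlapping cells keep the value written by the last element in definition order.<br />

```python
# Sketch of the worked example (hypothetical names, not a Cognos API):
# three link elements each target two e.List items in the Expenses cube,
# producing six cube import blocks; overlapping cells keep the value
# written by the last element in definition order.
elements = [
    {"Jan": 100, "Feb": 200},   # element 1
    {"Feb": 250},               # element 2, overlaps element 1 on Feb
    {"Feb": 300, "Mar": 50},    # element 3, overlaps both on Feb
]
elist_items = ["Sales East", "Sales West"]

# One CIB per element per targeted e.List item.
cibs = [(item, data) for data in elements for item in elist_items]
print(len(cibs))  # 6 cube import blocks

result = {item: {} for item in elist_items}
for item, data in cibs:           # applied in element definition order
    result[item].update(data)
print(result["Sales East"])  # {'Jan': 100, 'Feb': 300, 'Mar': 50}
```

Feb ends at 300 because element three is defined last, matching the rule that its cells take precedence over elements one and two.<br />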


Link Mode<br />

<strong>Administration</strong> links run in Substitute mode. This means that data in cells in the target area of the<br />

D-Cube are replaced by the transferred data. If no data is found in the source for a particular cell,<br />

the data in that cell is left unchanged.<br />

If data is imported into a read-only cell that is a target of a D-Link, the D-Link will override the<br />

current import value.<br />
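Substitute mode can be summarized in a few lines. This is an illustrative sketch with invented names, not a Cognos API: transferred values replace target cells, but a cell for which the source holds no data is left unchanged.<br />

```python
# Sketch of Substitute mode (hypothetical names, not a Cognos API):
# transferred values replace cells in the target area, but cells with no
# corresponding source data are left unchanged.
def substitute(target_cells, transferred):
    result = dict(target_cells)
    for cell, value in transferred.items():
        if value is not None:        # no data found in source: leave cell alone
            result[cell] = value
    return result

target = {"Jan": 10, "Feb": 20, "Mar": 30}
source = {"Jan": 99, "Feb": None}    # nothing found in the source for Feb or Mar
print(substitute(target, source))    # {'Jan': 99, 'Feb': 20, 'Mar': 30}
```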

Link Order<br />

The order in which you run links is important. For example, if you run the Analyst > <strong>Contributor</strong> link before the administration link, the Analyst > <strong>Contributor</strong> link is applied to the cube first. However, if you run the same Analyst > <strong>Contributor</strong> link again after the administration link, the data from the first Analyst > <strong>Contributor</strong> link is overwritten, because the second Analyst > <strong>Contributor</strong> link is applied after the administration link.<br />

Analyst > <strong>Contributor</strong> links are activated automatically after every administration link if they are run using the Analyst user interface rather than macros.<br />

Running Import Data and Links more than Once<br />

Before running Go to Production, only the most recent import into a given cell takes effect: if you import data into the same cells twice, the cells affected by the earlier cube import blocks are overwritten.<br />

You can run multiple administration links and Analyst > <strong>Contributor</strong> links before running Go to<br />

Production. If you are targeting the production version of the application with an administration<br />

link, you do not need to run Go to Production. A reconcile job is triggered by an activation process.<br />

If the links target different cells, separate cube import blocks are created. If the links target the same<br />

cells, the link-affected cells from the first link are overwritten.<br />

Making Changes to the Development Application After a Link or Import Data Process<br />

Some changes to a development application may affect a link or import data in the following ways:<br />

● If the cubes that you are importing data into have changed, you may have to re-create the source files and go through the complete import data process, or re-create and re-execute links. If<br />

changes were made that do not affect the cube that you are importing data into, you need only<br />

to rerun Prepare Import.<br />

● Any changes made to access tables, saved selections, or the e.List that result in a different pattern<br />

of No Data cells for contribution e.List items that are common to both the development and<br />

production applications result in the import queue being deleted.<br />

● Creating access tables after Prepare Import has run causes the import queue to be deleted.<br />

● If the items are mapped manually, the link must be updated or recreated after any changes to<br />

dimension items.<br />

Create an <strong>Administration</strong> Link<br />


Create an administration link to copy data between cubes in <strong>Contributor</strong> applications, or to import<br />

data from IBM Cognos 8 data sources. If you are importing data from IBM Cognos 8 data sources,<br />


you must first create and publish an IBM Cognos package containing the data you want to import.<br />

For more information, see "Importing Data from IBM Cognos 8 Data Sources" (p. 164).<br />

<strong>Administration</strong> links will not run unless one or more job servers are monitoring the <strong>Planning</strong> content<br />

store. For information about adding the <strong>Planning</strong> content store to a job server, see "Add Applications<br />

and Other Objects to a Job Server Cluster" (p. 57).<br />

Note: Any changes to the source file are not reflected in the target unless the administration link is<br />

rerun.<br />

Steps to Create Links Between Cubes<br />

1. Click <strong>Administration</strong> Links, Manage Links.<br />

2. Under the <strong>Administration</strong> Links pane, choose whether to add a link or edit an existing one:<br />

● To add a link, click New.<br />

● To edit a link, click Edit.<br />

If the link definition specifies an application that no longer exists, the Select Link Source/Target<br />

dialog box appears. Select a different source application, target application, or both, and then<br />

click OK.<br />

If you chose an application with an incompatible model structure, a message appears indicating<br />

that the selected application is invalid and that the editor is empty. Close the editor, click Edit,<br />

and then select a different application. Type a brief description of the source and target of the<br />

link element.<br />

3. Enter or edit the name and description of the link.<br />

Both can have up to 250 characters. Link names must be unique and must not be empty or<br />

consist only of spaces.<br />

4. Choose <strong>Contributor</strong> Application as the data source type.<br />

5. To tune administration link performance, click the Advanced button to adjust the number of e.List items that load in a single batch for both the source and target.<br />

Note: This is not needed if you select an IBM Cognos Package as the source.<br />

For more information on tuning batch sizes, see "Tuning <strong>Administration</strong> Links" (p. 158).<br />

6. To use the standard configuration, click OK.<br />

The <strong>Administration</strong> Link-Element dialog box appears.<br />

7. Select the source application and cube.<br />

The source application must be a production application. You can preview the dimensions of<br />

a cube in the right pane.<br />

8. Select the target application and a target cube.<br />

The application can be either the production or development application.


9. Click Map to map source dimensions to a target dimension manually (p. 154), or click Map All<br />

to map dimensions with the same name. You need at least one set of matching dimensions in<br />

order to use the Map All feature.<br />

The mapped dimension pairs now appear in the lower set of Map source to target dimensions<br />

lists. A single line connects paired dimensions.<br />

Tips:<br />

● Double-click the connecting line (or either dimension) to confirm that the items in the<br />

dimensions are mapped correctly.<br />

● To edit the properties of a mapped dimension, click the source, target, or line between the<br />

source and target dimension names and click edit.<br />

● To remove a map, click the map and click Clear. Clear all created maps by clicking Clear<br />

All.<br />

10. In the Additional Options window of the <strong>Administration</strong> Link-Element dialog box, you can<br />

choose to include annotations or attached documents. Do one of the following:<br />

● To only include annotations, click Include Annotations.<br />

● To only include attached documents, click Include Attached Documents.<br />

11. Click Finish when you are done configuring the link element.<br />

12. If you want to add a new element, click Yes. To return to the main <strong>Administration</strong> Links window, click No.<br />

Note: If you add a new element, it must match the source used in the original link.<br />

Both actions save the current element. You can change the order in which the elements are run<br />

using the arrow buttons. For more information, see "Order of Link Elements" (p. 148).<br />

13. If you want to execute the link, click Execute.<br />

If you want to monitor the progress of an administration link, under <strong>Administration</strong> links, click<br />

Monitor Links. For more information, see "Jobs" (p. 49).<br />

Tip: If you receive an error message stating that the batch sizes are too large to load data, you<br />

need to adjust the batch sizes. For more information, see "Tuning <strong>Administration</strong> Links" (p. 158).<br />

To automate this process, see "Execute <strong>Administration</strong> Link" (p. 219).<br />

Note: Applications defined in a link may have become unavailable since the administrator last created or modified the link. An application becomes invalid when the application ID is changed<br />

because the application was transferred from a development environment to a production<br />

environment.<br />

Steps to Create Links with IBM Cognos Package as the Source<br />

1. Click <strong>Administration</strong> Links, Manage Links.<br />

2. Under the <strong>Administration</strong> Links pane, choose whether to add a link or edit an existing one:<br />

● To add a link with IBM Cognos Package as the source, click New.<br />


● To edit a link, click Edit.<br />

If the link definition specifies an application or package that no longer exists, the Select Link<br />

Source/Target dialog box appears. Select a different source package, target application, or both,<br />

and then click OK.<br />

If you chose a package with an incompatible model structure, a message appears indicating<br />

that the selected package is invalid and that the editor is empty. Close the editor, click Edit,<br />

and then select a different application. Type a brief description of the source and target of the<br />

link element.<br />

3. In the <strong>Administration</strong> Link Properties dialog box, enter or edit the name and description of the<br />

link.<br />

Both can have up to 250 characters. Link names must be unique and must not be empty, or<br />

consist only of spaces.<br />

Select IBM Cognos Package as the Data Source Type.<br />

By default, the administration link will run a prepare import job to process import data ready<br />

for reconciliation. Click the Advanced button and clear the Run Prepare Import Job check box<br />

to change the default setting.<br />

4. In the Select an IBM Cognos Package as the Link Source dialog box, browse for an IBM Cognos<br />

Package in IBM Cognos Connection by clicking the ellipses button.<br />

If you select a package that was not published from Framework Manager, an error message states that the package cannot be used as a source for an administration link.<br />

5. Click a query subject.<br />

6. Select the available query items in the query subject and move them to the Selected Query Items<br />

pane.<br />

Select the Display preview of selected query item check box to preview the query items. The<br />

preview option only works with query items that have not been selected, and helps you select<br />

the correct query items.<br />

7. Click OK to bring the query items into the link.<br />

8. In the <strong>Administration</strong> Link-Element dialog box, select the target application and a target cube.<br />

The application has to be Development.<br />

9. Click Map to map source dimensions to a target dimension manually (p. 154), or click Map All<br />

to map dimensions with the same name. You need at least one set of matching dimensions in<br />

order to use the Map All feature.<br />

The mapped dimension pairs now appear in the lower set of Map source to target dimensions<br />

lists. A single line connects paired dimensions.<br />

Tips:<br />

● Double-click the connecting line (or either dimension) to confirm that the items in the<br />

dimensions are mapped correctly.


● To edit the properties of a mapped dimension, click the source, target, or line between the<br />

source and target dimension names, and click edit.<br />

● To remove a map, click the map and click Clear. Clear all created maps by clicking Clear<br />

All.<br />

10. If you want to select the columns containing the data, click Mark Data.<br />

There are two main principles for structuring these links:<br />

● Anything that is a measure in the Framework Manager model must be marked as data before it can be paired with a dimension.<br />

● Anything marked as data must be paired with a dimension and must not be left unpaired in the link.<br />

Note: Mark Data is not available once you have mapped your data.<br />

11. In the <strong>Administration</strong> Link - Element dialog box, click Next to pick unmapped source Query<br />

Items and unmapped target dimension items.<br />

12. In the Additional Options window of the <strong>Administration</strong> Link-Element dialog box, you can<br />

choose to include annotations or attached documents. Do one of the following:<br />

● To include only Annotations, click Include Annotations.<br />

● To include only Attached Documents, click Include Attached Documents.<br />

13. Click Finish when you are done configuring the link element.<br />

14. If you want to add a new element, click Yes. To return to the main <strong>Administration</strong> Links window, click No.<br />

Note: If you add a new element, it must match the source used in the original link.<br />

Both actions save the current element. You can change the order in which the elements are run<br />

using the arrow buttons. For more information, see "Order of Link Elements" (p. 148).<br />

15. If you want to execute the link, click Execute.<br />

If you want to monitor the progress of an administration link, under <strong>Administration</strong> links, click<br />

Monitor Links. For more information, see "Jobs" (p. 49).<br />

To automate this process, see "Execute <strong>Administration</strong> Link" (p. 219).<br />

Note: Applications defined in a link may have become unavailable since the administrator last created or modified the link. An application becomes invalid when the following occurs:<br />

● The application ID is changed because the application was transferred from a development<br />

environment to a production environment.<br />

● When changing the package or target application, you chose a package with an incompatible<br />

model structure.<br />




Map Dimensions Manually<br />


Manually mapping dimensions may bring performance improvements for some links. A manually<br />

mapped link filters the data at the source, so less data is moved. Auto-mapped links do not perform<br />

such filtering, so it is possible that more data is moved than if the same link used manual mapping.<br />
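The performance point can be sketched as follows. This is hypothetical Python, purely illustrative; the names (`manual_map`, the row values) are invented and nothing here is a Cognos API: a manually mapped link transfers only rows whose source item appears in the mapping, so unmapped rows never leave the source.<br />

```python
# Sketch (hypothetical names, not a Cognos API) of why manual mapping can
# move less data: only rows whose source item appears in the mapping are
# transferred, so unmapped rows never leave the source application.
source_rows = [("Jan-03", 100), ("Feb-03", 200), ("Audit", 999)]
manual_map = {"Jan-03": "1-03", "Feb-03": "2-03"}   # "Audit" is unmapped

filtered = [(manual_map[item], value)
            for item, value in source_rows
            if item in manual_map]                  # filter at the source
print(filtered)  # [('1-03', 100), ('2-03', 200)]
print(len(source_rows) - len(filtered))  # 1 row never transferred
```

An auto-mapped link, by contrast, would ship all three rows and leave the target to discard what it cannot place.<br />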

Steps<br />

1. Click Map.<br />

The Map Items dialog box appears. Any matching dimension items are highlighted.<br />

If a source dimension does not map to any target dimension, it can be treated as an extra source<br />

dimension. If the items in the source and target dimensions do not match, either a manual map<br />

or loading of an allocation table is required. For example, if the source item is Jan-03 and the<br />

target item is 1-03, a manual map is required. Alternatively, you can load an allocation table from Analyst that you have already created to map source and target items that do not match.<br />

If items are added to the source or target of a manually mapped link, the link must be manually updated to account for the new items before the load runs correctly.<br />

If you load an allocation table, you can synchronize to ensure the source allocation table in<br />

Analyst matches the allocation table in <strong>Contributor</strong>.<br />

2. If you want to map items based on capitalization, select Case Sensitive.<br />

3. If you want to include calculated items (shown in bold) select Calculated items.<br />

4. If matching dimensions are highlighted, click OK to accept them.<br />

The Map Items dialog box closes and returns you to the Map Source to Target dialog box.<br />

5. If some unmatched items remain in the Map Items dialog box, click Manually Map.<br />

If you select Manually Map, then select a source dimension and target dimension, click Add,<br />

and click OK.<br />

Note: It is okay to have unmapped items.<br />

The matching pairs of dimensions move under the unmapped dimension fields to the mapped<br />

dimension fields and a line connects the two dimensions.<br />

If you have a long list of dimension items to map, you can filter them based on the first characters<br />

in the item name.<br />

Note: This filter applies only to items that appear in the Dimension Items list. It does not affect<br />

what is loaded into the target.<br />

6. In the Filter box, type the character you want to filter with.<br />

Only the items that begin with that character appear.<br />

Tip: To remove the filter, delete the character in the Filter box.<br />

7. In the Map Items dialog box, click Substring.<br />

The Select Substring dialog box appears with the longest item name in the dimension list.


When you use a substring, all the items that match the substring are rolled up into one item.<br />

For example, if you have dimension items named Budget 1, Budget 2, and Budget 3 and you<br />

applied the substring BUD, all three items are rolled into one dimension item to be loaded into<br />

the target dimension.<br />

Note: Unlike filtering by characters, using a substring applies to what is included in the load<br />

as well as what is viewed in the Dimension Items list. You can use a substring when mapping<br />

dimensions manually or automatically.<br />

8. Click in the Substring box to place bars at the beginning and end of the substring. If the substring<br />

appears in the front of the string, place a single bar at the end of the substring.<br />

To remove the bar, right-click it.<br />

9. Click OK. The dimension items are now filtered by the number of characters you selected.<br />
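As an illustration of the substring behavior described above, the following sketch (in Python, with invented names; it is not part of the product) shows how items that share the selected substring roll up into one item:<br />

```python
# Hypothetical sketch of substring roll-up: items whose names share the
# characters between the substring bars are grouped into a single item.
from collections import defaultdict

def rollup_by_substring(items, start, end):
    """Group item names by the characters between the substring bars."""
    groups = defaultdict(list)
    for name in items:
        groups[name[start:end]].append(name)
    return dict(groups)

items = ["Budget 1", "Budget 2", "Budget 3", "Actual 1"]
print(rollup_by_substring(items, 0, 3))
# {'Bud': ['Budget 1', 'Budget 2', 'Budget 3'], 'Act': ['Actual 1']}
```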

Use an Allocation Table in <strong>Administration</strong> Links<br />

If you have two lists in <strong>Contributor</strong> that do not have an exact character match during mapping,<br />

you can load an allocation table from Analyst into an administration link. You can map between<br />

two e.Lists, an e.List to a D-List, a D-List to an e.List, and a D-List to a D-List. The allocation table<br />

allows you to map items between lists whose names do not match, and lets you reuse the<br />

mapping.<br />

While a manual mapping can be set up in an administration link, it cannot be reused. With an<br />

allocation table, you can set up a mapping in Analyst and reuse this not only in Analyst but also<br />

in administration links. Once you load the allocation table into the administration link, you can<br />

ensure that the data remains up to date by using the Synchronize functionality in <strong>Administration</strong><br />

Links, Manage Links. If the underlying allocation table in Analyst has changed, synchronizing<br />

updates it in <strong>Contributor</strong>.<br />
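Conceptually, an allocation table acts like a reusable lookup from source items to target items. The following sketch (Python, with illustrative names only) shows the idea for the Jan-03 to 1-03 example above:<br />

```python
# Illustrative only: an allocation table behaves like a reusable mapping
# from source item names to target item names when they do not match.
allocation_table = {
    "Jan-03": "1-03",
    "Feb-03": "2-03",
    "Mar-03": "3-03",
}

def map_items(source_items, table):
    # Items without an entry remain unmapped, which is allowed.
    return {s: table.get(s) for s in source_items}

print(map_items(["Jan-03", "Apr-03"], allocation_table))
# {'Jan-03': '1-03', 'Apr-03': None}
```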

Steps<br />

1. Create a new administration link, and select <strong>Contributor</strong> Application as the Data Source Type.<br />

2. Select the source and target dimensions (e.List to e.List, e.List to D-List, D-List to e.List, or<br />

D-List to D-List).<br />

3. Click Map.<br />

The Map Items dialog box appears. Any matching dimension items are highlighted.<br />

4. To load an allocation table, click Allocation Table.<br />

5. In the Select A-Table dialog box, select the library that contains the A-Table and then select<br />

the A-Table that you want to use. You will see a preview of the A-Table in the Preview field.<br />

6. Click OK.<br />

7. The A-Table you selected is loaded into the Map Items dialog. Create mappings for any other<br />

dimensions in the source and target applications, then click Finish.<br />


Validate <strong>Administration</strong> Links<br />

If you have an administration link based on an Analyst A-Table, if a synchronization with Analyst<br />

has been performed, or if there have been any changes to the A-Table in Analyst, you should validate the<br />

link.<br />

Steps<br />

1. In the <strong>Administration</strong> Console, expand the <strong>Administration</strong> Link tree and click Manage Links.<br />

2. In the Select for Validation or Synchronization column, select the check box for each of the<br />

links that you want to validate.<br />

3. Click Validate.<br />

4. You can monitor the progress of the validation by clicking Monitor Links in the tree under<br />

<strong>Administration</strong> Links. After the validation job runs, if the link or links are valid the edit status<br />

will be COMPLETE. If the link or links are invalid, the edit status will be UNKNOWN.<br />

Check the administration link to ensure that the source and target cubes are available, and all<br />

dimensions are mapped.<br />

Note: You can automatically validate one or more administration links by using the Validate<br />

<strong>Administration</strong> Links macro.<br />

Synchronize <strong>Administration</strong> Links<br />


When you create an administration link based on an Analyst A-Table, there is always the possibility<br />

that the underlying A-Table in Analyst can change over time. You can synchronize to ensure the<br />

A-Table you have used in an administration link is up to date.<br />

Steps<br />

1. From the <strong>Contributor</strong> <strong>Administration</strong> Console, under <strong>Administration</strong> Links, click Manage<br />

Links.<br />

2. In the Select for Validation and Synchronization column, select the check box for each<br />

administration link that you want to synchronize.<br />

3. Click Synchronize.<br />

4. You can monitor the progress of the synchronization by clicking Monitor Links in the tree<br />

under <strong>Administration</strong> Links. After the synchronization job runs, the edit status will be<br />

COMPLETE.<br />

Note: You can automatically synchronize one or more administration links by using the<br />

Synchronize <strong>Administration</strong> Links macro.<br />


View Items in a Dimension<br />

You can preview the items that appear in a dimension.<br />

Steps<br />

1. Select either a source or target dimension.<br />

2. Click the preview button.<br />

Remove a Dimension<br />

You can remove a selected dimension.<br />

Steps<br />

1. In the Map Source to Target dialog box, click the source dimension you want to remove.<br />

2. Click the remove button.<br />

This removes the description designation from a row or column. The row or column is now<br />

treated as values.<br />

Running <strong>Administration</strong> Links<br />

<strong>Administration</strong> links are run using the job system and are scalable. They can be automated using<br />

macros; see "Administrator Links (Macro Steps)" (p. 219).<br />

When a link targets the development application, you must run Go to Production so that users can<br />

access the linked data. The data is moved into the prepared import blocks. You can see the results<br />

of this in the Import Data, Prepared Data Blocks tab (p. 176).<br />

When a link targets the Production application, there is no need to run Go to Production. An<br />

administration link job is created when the link is executed. At the job end, an activate process is<br />

called. This moves the data into the import production queue and creates a snapshot of the data at<br />

the time the link was executed. Then a reconcile job is triggered, which updates the e.List items<br />

with the new data.<br />

Exporting and Importing <strong>Administration</strong> Links<br />

You can export administration links from one application and import them into another using the<br />

<strong>Administration</strong> Console or the Deployment Wizard (p. 170). To import administration links using<br />

the <strong>Administration</strong> Console, the source and target applications must still exist, and the metadata<br />

must be unchanged. In addition, the application IDs must remain the same.<br />

The process is as follows.<br />

❑ Export the administration links from the <strong>Contributor</strong> applications. You can only export one<br />

link at a time.<br />

❑ Back up and remove the <strong>Contributor</strong> applications from the current <strong>Planning</strong> Content Store and<br />

add them to the new <strong>Planning</strong> Content Store.<br />


❑ Import the administration links into the new <strong>Planning</strong> Content Store.<br />

Steps<br />

1. Click <strong>Administration</strong> Links, and Manage Links.<br />

2. Click the administration link and then click Export.<br />

3. Enter the name and location, and click Save. The administration link is saved with a .cal<br />

extension.<br />

4. To import an administration link, click the Import button, and select the administration link<br />

file.<br />

If an imported link is given an edit status of UNKNOWN, check the administration link to ensure that the<br />

source and target cubes are available, and all dimensions are mapped.<br />

Note: When importing an administration link created using IBM Cognos <strong>Planning</strong> version 7.3<br />

SP3 or earlier, the source and target batch size setting is 1, which loads one target/source e.List<br />

item into a batch. This was the default behavior of previous versions of IBM Cognos <strong>Planning</strong>.<br />

For more information, see "Tuning <strong>Administration</strong> Links" (p. 158).<br />

Tuning <strong>Administration</strong> Links<br />

<strong>Administration</strong> link performance can vary for a number of reasons, such as the e.List length of both<br />

the target and source, the size of the target and source, and the complexity of the link. Link<br />

performance may be slower on certain configurations, even when large data volumes are not present. For<br />

more information, see "Variables That Affect Performance of <strong>Administration</strong> Links" (p. 159).<br />

If necessary, you can tune administration links to get the best performance. e.List items are loaded<br />

into memory in batches. By adjusting the batch sizes for both the source and target e.List items that<br />

are loaded to process the administration link, you may improve performance. This is particularly<br />

true when there is a link with a many-to-many relationship (p. 159), because these typically need a<br />

lot of processing power.<br />

Note: You cannot tune administration links that use IBM Cognos Packages as their source. This is<br />

because IBM Cognos Packages do not load from e.List items.<br />

A batch is a set of data to be transferred; it can include data from more than one e.List item.<br />

A batch can also target multiple e.List items.<br />

Important: When changes occur to a model you should evaluate whether you need to retune the<br />

administration link.<br />

When Should You Tune an <strong>Administration</strong> Link?<br />


Many administration links do not require tuning adjustments. This is because the default settings<br />

work well for many administration link scenarios. To see if your administration link needs tuning,<br />

we recommend that you create and run the administration link using the default settings, and then<br />

adjust the batch size limits if performance becomes an issue. For more information, see "Determine<br />

Optimal Batch Size" (p. 160).<br />

Tuning administration links affects only the time taken to move data, not the time taken to<br />

incorporate the data into the target application via a reconcile job. Links that target the development<br />


application spend all their time moving data through the inter_app_links job. These links do not<br />

trigger reconcile jobs, so if they are taking a long time, you should consider tuning them.<br />

Links targeting the production application spend time in two actions:<br />

● moving data via an inter_app_links job<br />

● incorporating that data into the target application via a reconcile job<br />

To determine whether or not tuning the administration link will be beneficial, review the amount<br />

of time it takes to move data versus any time spent on the reconciliation.<br />

You can see the inter_app_links job in the Monitor Links window, and the reconcile job in the Job<br />

Management window of the target application.<br />

Tip: If you are running multiple administration links that target the same application, consider<br />

targeting the development application and running Go to Production. This means that reconciliation<br />

is run once instead of multiple times. Alternatively, instead of having multiple links, you can have<br />

multiple link elements from different applications in the same link targeting the production<br />

application. In this case, reconciliation is run only once.<br />

Variables That Affect Performance of <strong>Administration</strong> Links<br />

Many factors play a role in determining optimal performance for running administration links.<br />

Types of Link<br />

Where the source and target applications share the same e.List, and each source e.List item is mapped<br />

to its matching target e.List item, the link has a one-to-one relationship. The amount of effort to<br />

run this link is determined by the number of mappings between e.List items.<br />

Links that have a single source e.List item targeting multiple e.List items have a one-to-many<br />

relationship. The effort required to run this link is determined by the number of target e.List items.<br />

Links where many source e.List items target a single e.List item have a many-to-one relationship.<br />

The effort required to run this kind of link is determined by the number of source e.List items.<br />

Links where multiple source e.List items are mapped to multiple e.List items have a many-to-many<br />

relationship. These links typically need a lot of processing power, because the effort needed to run<br />

them is calculated by the number of source e.List items multiplied by the number of target e.List<br />

items. You may get the most benefit from tuning these links.<br />
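A rough sketch of the effort model implied by these relationship types (an assumption for illustration, not documented product internals):<br />

```python
# Rough effort model for administration links, following the text above:
# one-to-one scales with the mappings, one-to-many with the targets,
# many-to-one with the sources, and many-to-many with sources * targets.
def link_effort(kind, source_items, target_items):
    if kind == "one-to-one":
        # Effort is driven by the number of mappings between e.List items.
        return source_items
    if kind == "one-to-many":
        return target_items
    if kind == "many-to-one":
        return source_items
    # many-to-many: sources multiplied by targets, hence the high cost.
    return source_items * target_items

print(link_effort("many-to-many", 50, 40))  # 2000
print(link_effort("one-to-many", 1, 40))    # 40
```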

Number of Processors<br />

The number of processors and the amount of available RAM directly affect performance.<br />

If any of your servers in the job server cluster have more than 4 CPUs available, we recommend<br />

that you increase the Job Item Count multiplier per machine setting in the<br />

epInterAppLinkResources.xml file (\cognos\c8\bin). The default setting is 4 CPUs per job server. However,<br />

having fewer than 4 CPUs does not negatively affect performance.<br />

The file is installed as read-only. We recommend that you back up the file and reset the read-only<br />

flag to write in order to change the CPU number. You must make the same change to the file on<br />

all servers in the cluster.<br />

Note: This setting only affects the administration link performance.<br />


Model Changes<br />

Changes to the model affect how the administration link performs. If you tune an administration<br />

link and it shows improved performance and then a change occurs in the model, the optimization<br />

may become invalid. This is because the change can affect the overall shape of the administration<br />

link (e.List length, cube size, and so on) that the tuning was based on.<br />

Determine Optimal Batch Size<br />

Adjust the number of source and target e.List items processed at one time to optimize performance<br />

of the administration link.<br />

Steps<br />

1. While the administration link runs, monitor the memory utilization on the least powerful server<br />

in the job server cluster on which administration links run.<br />

2. Adjust the batch size for both the source and target and rerun the administration link. We<br />

suggest that you increase the source batch size where possible before increasing the target batch<br />

size.<br />

● For the source, if there are 150 source e.List items, try entering 75. If that does not work,<br />

try 50, and so on.<br />

● For the target, divide the number of e.List items by the number of physical processors<br />

multiplied by 2.<br />

For example: 2250 e.List items/(14 processors*2)<br />

This gives you a figure of 80.<br />

If this is too large, try multiplying the number of processors by 4, and so on.<br />

The values for Limit To must be positive whole numbers and greater than zero in order for the<br />

tuning settings to be valid.<br />

3. Monitor the memory utilization on the same server to see if it has improved.<br />

4. If not, adjust the numbers and run the administration link again.<br />

Set Source Batch Size<br />

You can set the number of source e.List items that are processed at one time. By default, all<br />

applicable e.List items are processed into the relevant target(s). The default setting is No Limit for<br />

newly created administration links, which loads all the source e.List items in one batch. This means<br />

that each e.List item is read once and then grouped and loaded into the targets. This setting may<br />

work well, but if you have a large model, you may need to reduce the setting.<br />

Note: When importing an administration link created using IBM Cognos <strong>Planning</strong> version 7.3 SP3<br />

or earlier, the source and target batch size setting is 1, which loads only one source e.List item at<br />

a time. This was the behavior of the previous versions of IBM Cognos <strong>Planning</strong>.<br />

Steps<br />

1. In the Create New Link dialog box, click Advanced.


2. If you want to load all e.List items at once, ensure that No Limit is selected.<br />

3. If you want to divide your loads into batches, type a number into the Limit To box.<br />

Note: The values for the Limit To box must be positive whole numbers and greater than zero<br />

in order for the tuning settings to be valid.<br />

4. If the performance is acceptable, leave the Source Batch Size as no limit. If you get errors, reduce<br />

the size.<br />

Set Target Batch Size<br />

You can set the number of target e.List items that are processed in one batch. The default setting<br />

is 1 for newly created administration links, which loads the source batch into one e.List item at a<br />

time.<br />

Steps<br />

1. In the Create New Link dialog box, click Advanced.<br />

2. If you want to target all e.List items at once, select No Limit.<br />

3. If you want to divide your loads into batches, enter a number into the Limit To box.<br />

4. Click OK.<br />

You now need to configure the link element (p. 149).<br />
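The target batch size heuristic from "Determine Optimal Batch Size" (divide the number of e.List items by twice the number of physical processors) can be sketched as follows; the function name is invented for illustration:<br />

```python
# Sketch of the suggested target batch size: e.List items divided by
# (physical processors * factor), where the factor starts at 2 and can
# be raised to 4 to produce smaller batches.
def suggested_target_batch_size(elist_items, processors, factor=2):
    return elist_items // (processors * factor)

print(suggested_target_batch_size(2250, 14))     # 80, as in the example
print(suggested_target_batch_size(2250, 14, 4))  # 40, for smaller batches
```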

Tuning Existing <strong>Administration</strong> Links<br />

<strong>Administration</strong> links created using IBM Cognos <strong>Planning</strong> version 7.3 SP3 or earlier can be imported<br />

(p. 157) and reused. These administration links processed each source and target item one at a time.<br />

Now, source and target items are processed in batches and those items are held in memory, reducing<br />

the number of transfers that occur.<br />

When a previously created administration link is imported, the source and target batch size is set<br />

to 1, which loads one source and target item into a batch for processing. We recommend that you<br />

change the source batch size to No Limit, which is the default value for any newly created<br />

administration link. By adjusting this setting you should see performance gains. You can then try to adjust<br />

the batch size settings to further improve performance.<br />

Troubleshooting Tuning Settings<br />

An administration link will fail if the source and/or target batch sizes are too large because too<br />

much data will be loaded into memory. If you receive the following error message, set the source<br />

and target batch sizes to a smaller number and rerun the administration link.<br />

Failed to load source data.<br />

You could try setting the source batch size to less than X<br />

so that fewer source e.List items are loaded at the same time.<br />

You could try reducing the target batch size to less than X<br />

so that fewer target e.List items are processed at the same time.<br />

(where X is the recommended batch size)<br />
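The remedy the message suggests, rerunning with smaller batches, can be sketched as a simple retry loop (run_link is a hypothetical stand-in, not a product API):<br />

```python
# Illustrative retry loop: halve the batch size until the load succeeds.
def run_with_smaller_batches(run_link, batch_size):
    while batch_size >= 1:
        try:
            return run_link(batch_size)
        except MemoryError:
            # Too much data was loaded into memory: retry with a
            # smaller batch, as the error message recommends.
            batch_size //= 2
    raise RuntimeError("link failed even with a batch size of 1")
```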


Troubleshooting Remote Call Time-Out<br />

When you build or modify links in the <strong>Contributor</strong> <strong>Administration</strong> Console, queries are executed<br />

to get metadata into the <strong>Contributor</strong> <strong>Administration</strong> Console for display. On particularly large<br />

queries, you may receive a message similar to the following:<br />

502 - Bad Gateway URL: http://localhost:80/cognos8/cgi-bin/cognos.cgi,<br />

SOAP action: http://developer.cognos.com/schemas/<br />

planning<strong>Administration</strong>ConsoleService/1~~HTTP/1.1<br />

502 Bad Gateway~~Content-Length: 252~~Content-Type: text/html~~<br />

Server: Microsoft-IIS/6.0~~~~~~<br />

We recommend that you increase the Remote Call Time-out in Seconds setting to 7200 seconds in<br />

the epAdminLinksResources.xml file, located at \cognos\c8\bin. The file is installed<br />

as read-only. We recommend that you back up the file and reset the read-only flag to writable.<br />

After changing this setting, the <strong>Planning</strong> service needs to be stopped and restarted on the machine<br />

that is building or modifying the link.<br />

System Links<br />

Administrators can set up links that are run from a Web client session so that Web client users can<br />

move data from a cube in a source application to a cube in a target application. A system link is a<br />

pull link, rather than a push link.<br />

A system link can target hidden, read-only, and writable cells.<br />

System links move data from one source cube in one application to one target cube in another<br />

application. System links are stored with the application, whereas administration links are stored<br />

in a separate datastore. The target for system links must be in the production version of the<br />

application, whereas the target for administration links can be in the production or development<br />

version of the application.<br />

You cannot map an e.List dimension to an ordinary dimension in a system link, unlike in an<br />

administration link. This is for performance reasons. If many e.List items must be loaded for a link,<br />

this potentially takes a lot of resources. An administration link can run across job servers and is<br />

scalable, so resources are usually not a problem. But a system link runs on the Web client computer<br />

and is not scalable. If you must map an e.List dimension to an ordinary dimension, use an adminis-<br />

tration link. A target e.List item can have only one source e.List item mapped to it, but one source<br />

e.List item can be mapped to many e.List items.<br />
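The mapping rule above (each target e.List item has at most one source, while one source may feed many targets) can be checked with a sketch like this (illustrative only; the function and item names are invented):<br />

```python
# Hypothetical check of the system link e.List mapping rule: a target
# e.List item may have only one source e.List item mapped to it.
def validate_elist_mappings(pairs):
    """pairs is a list of (source_item, target_item) tuples."""
    seen_targets = {}
    for source, target in pairs:
        if target in seen_targets and seen_targets[target] != source:
            raise ValueError(f"target {target!r} has more than one source")
        seen_targets[target] = source
    return True

# One source feeding many targets is allowed:
print(validate_elist_mappings([("Europe", "France"), ("Europe", "Spain")]))
# True
```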

To create a link, administrators must be granted the access rights System link as source, and System<br />

link as target, for the relevant applications. In addition, the Admin Options setting Act as system<br />

link source must be set to Yes for source applications. For more information, see "Admin<br />

Options" (p. 79). Otherwise, you can still create links using this source, but the Web user cannot<br />

run the link. You assign the link to an e.List item in the target application.<br />

To run the link, the user must have write access to the e.List item that the link is assigned to. They<br />

do not require rights to the source cube. Links are executed on the client computer through Get<br />

Data. They can be run only from the target application. Client users cannot edit system links.<br />

Note: For Classic <strong>Contributor</strong> Web Client users, the Get Data extension must be configured before<br />

you can run a system link or local link. For more information about configuring the Get Data<br />

extensions, see "Configure Classic Client Extensions" (p. 301).


The Go to Production process does not have to be run after you set up a system link.<br />

The history of system link actions is stored as an annotation for cubes and targeted e.List items, if<br />

enabled. When a system link is run, a new annotation is created for that link in the open e.List item.<br />

If the link is executed again by the same user or another user, the same annotation is updated. In<br />

addition, a separate history dialog shows all history related to the links that apply to the open e.List<br />

items.<br />

Create a System Link<br />

Applications defined in a link may no longer be available since the administrator last created or<br />

modified the link. An application becomes invalid when the following occurs:<br />

● The application ID is changed because the application was transferred from a development<br />

environment to a production environment.<br />

● When changing the source or target application, you chose an application with an incompatible<br />

model structure.<br />

To use an application as a source for a System Link, you must first set Act as system link source in<br />

Admin Options to Yes. For more information, see "Admin Options" (p. 79).<br />

Steps<br />

1. Click Production, System links.<br />

2. Choose whether to add a system link or edit an existing one:<br />

● To add a link, click New.<br />

● To edit a link, click Edit.<br />

3. If the link definition specifies an application that no longer exists, the Select Link Source/Target<br />

dialog box appears. If this happens, select a different source application, target application, or<br />

both, and then click OK.<br />

If you chose an application with an incompatible model structure, a message appears indicating<br />

that the selected application is invalid and that the editor is empty. Close the editor, click Edit,<br />

and then select a different application.<br />

4. Type a descriptive name for the system link.<br />

5. Select a Source Application, and a Source Cube.<br />

The source application must be a production application, which means it contains an e.List<br />

and Go to Production was run.<br />

6. Select a Target Cube.<br />

7. Map the source dimensions to the target dimensions manually (p. 154), or click Map All to map<br />

dimensions with the same name.<br />

You must have at least one set of matching dimensions to use Map All.<br />


The mapped dimension pairs move to the fields below, and a line connects the two. This line<br />

signifies that these dimensions are a matched pair.<br />

Note: The Substring option is unavailable for system links on the e.List dimensions because you<br />

cannot have multiple sources or multiple targets, due to the potentially large number of nodes<br />

that would need to be downloaded to the client in order to execute the system link.<br />

8. In the Additional Options window of the <strong>Administration</strong> Link-Element dialog box, you can<br />

choose to include annotations or attached documents. Do one of the following:<br />

● To include only Annotations, click Include Annotations.<br />

● To include only Attached Documents, click Include Attached Documents.<br />

9. In the System Link dialog box, click Finish.<br />

Importing Data from IBM Cognos 8 Data Sources<br />


You can import data into IBM Cognos 8 <strong>Planning</strong> - Analyst and IBM Cognos 8 <strong>Planning</strong> -<br />

<strong>Contributor</strong> from any data source that can be published as an IBM Cognos 8 package.<br />

For more information about supported data sources, visit the IBM Cognos Resource Center<br />

(http://www.ibm.com/software/data/support/cognos_crc.html).<br />

There are additional considerations when importing SAP BW data into IBM Cognos 8 <strong>Planning</strong>.<br />

For more information, see "Working with SAP BW Data" (p. 167).<br />

For information on IBM Cognos 8 <strong>Planning</strong> configuration requirements for SAP BW, see the IBM<br />

Cognos 8 <strong>Planning</strong> - Installation and Configuration <strong>Guide</strong>.<br />

You must have Framework Manager installed. If you are working with SAP BW data, you must<br />

install the SAP gateway functions. For more information, see the IBM Cognos 8 <strong>Planning</strong> -<br />

Installation and Configuration <strong>Guide</strong>.<br />

Importing data from IBM Cognos 8 data sources involves the following tasks.<br />

❑ In IBM Cognos Connection, create a data source connection; see the IBM Cognos 8<br />

<strong>Administration</strong> and Security <strong>Guide</strong> for more information.<br />

❑ In Framework Manager, create a new project and import the metadata into the project (p. 165).<br />

❑ In Framework Manager, model the source. See the Framework Manager User <strong>Guide</strong> for more<br />

information.<br />

❑ Create and publish the IBM Cognos package to IBM Cognos Connection (p. 166).<br />

❑ If importing into a <strong>Contributor</strong> application, in the <strong>Contributor</strong> <strong>Administration</strong> Console, create<br />

and run an administration link.<br />

Tip: You can create and schedule macros that run administration links.<br />

❑ If importing into an Analyst model, choose one of the following options:<br />

● Select an IBM Cognos package as a source in a D-List Import.<br />

● Select an IBM Cognos package as a source in a D-Link.


● Select an IBM Cognos package as a source in an A-Table, or import an IBM Cognos<br />

package as a Source in an A-Table.<br />

You can also automate the import of IBM Cognos packages using the @DListItemImportPackage<br />

macro.<br />

Create a Framework Manager Project and Import Metadata<br />

A project is a set of models, packages, and related information for maintaining and sharing model<br />

information.<br />

Steps<br />

1. From the Windows Start menu, click Programs, IBM Cognos 8, Framework Manager.<br />

2. In the Framework Manager Welcome page, click Create a new project, and specify a name and<br />

location.<br />

You can add the new project to a source control repository; see the Framework Manager Help<br />

for more information.<br />

3. In the Select Language page, click the design language for the project.<br />

You cannot change the language after you click OK, but you can add other languages.<br />

Note: If an SAP BW server does not support the selected language, it uses the content locale<br />

mapping in IBM Cognos Configuration. If a mapping is not defined, Framework Manager uses<br />

the default language of the SAP BW server.<br />

4. In the metadata source page, select Data Sources.<br />

5. Select a data source connection and click Next.<br />

If the data source connection you want is not listed, you must first create it; see the IBM Cognos<br />

8 <strong>Administration</strong> and Security <strong>Guide</strong>.<br />

6. Select the check boxes for the tables and query subjects you want to import.<br />

Tip: For usability, create a package that exposes only what is required.<br />

7. Specify how the import should handle duplicate object names.<br />

Choose either to import and create a unique name, or not to import. If you choose to create a<br />

unique name, the imported object appears with a number. For example, you see QuerySubject<br />

and QuerySubject1 in your project.<br />

8. If you want to import system objects, select the Show System Objects check box, and then select<br />

the system objects that you want to import.<br />

9. Specify the criteria to use to create relationships and click Import.<br />

For more information, see the Framework Manager User <strong>Guide</strong>.<br />

10. Click Next and then Finish.<br />

Chapter 9: Managing Data<br />

<strong>Administration</strong> <strong>Guide</strong> 165



Note: You save the project file (.cpf) and all related XML files in a single folder. When you<br />

save a project with a different name or format, ensure that you save the project in a separate<br />

folder.<br />

Create and Publish the IBM Cognos Package<br />


You create and publish a package to make the data available to IBM Cognos 8 <strong>Planning</strong>.<br />

Steps to Create a Package<br />

1. Click the Packages folder, and from the Actions menu, click Create, Package.<br />

2. In the Provide Name page, type the name for the package and, if you want, a description and<br />

screen tip, and click Next.<br />

3. Specify whether you are including objects from existing packages or from the project and then<br />

specify which objects you want to include.<br />

4. Choose whether to use the default access permissions for the package:<br />

● To accept the default access permissions, click Finish.<br />

● To set the access permissions, click Next, specify who has access to the package, and click<br />

Next.<br />

You can add users, groups, or roles. See the Framework Manager User <strong>Guide</strong> for more<br />

information.<br />

5. Move the language to be included in the package to the Selected Languages box, and click<br />

Next.<br />

6. Move the sets of data source functions you want available in the package to the Selected function<br />

sets box.<br />

If the function set for your data source vendor is not available, make sure that it was added to<br />

the project.<br />

7. Click Finish and choose whether to publish the package.<br />

Steps to Publish a Package<br />

1. Select the package you want to publish.<br />

2. From the Actions menu, click Package, Publish Packages.<br />

3. Choose where to publish the package:<br />

● To publish the package to the report server, click IBM Cognos 8 Content Store. Click Public Folders to publish the package to the public folder; you can also create folders within Public Folders. Click My Folders to create your own folder and publish the package to it.<br />

● To publish the package to a network location, click Location on the network.


4. To enable model versioning when publishing to the IBM Cognos 8 Content Store, select the<br />

Enable model versioning check box and type the number of model versions of the package to<br />

retain.<br />

Tip: To delete all but the most recently published version on the server, select the Delete all<br />

previous model versions check box.<br />

5. If you want to externalize query subjects, select the Generate the files for externalized query<br />

subjects check box.<br />

6. By default, the package is verified for errors before it is published. If you do not want to verify<br />

your model prior to publishing, clear the Verify the package before publishing check box.<br />

7. Click Publish.<br />

If you chose to externalize query subjects, Framework Manager lists which files were created.<br />

8. Click Finish.<br />

Working with SAP BW Data<br />

The SAP BW model is an OLAP source and is optimized for reporting rather than for the high-volume access that is sometimes required for planning activities. To access data efficiently for IBM Cognos 8 <strong>Planning</strong>, create a detailed fact query subject that accesses fact data at a level of detail suitable for use with IBM Cognos 8 <strong>Planning</strong>.<br />

Tip: If you have OpenHub, you can use it to generate a text file or database table from SAP BW.<br />

You can then manually create a Framework Manager model and IBM Cognos Package from the<br />

tables and then import the package into <strong>Planning</strong> using an <strong>Administration</strong> Link, D-Link, or D-List<br />

import.<br />

For IBM Cognos products to be able to access SAP BW as a data source, the user accounts used to<br />

connect to SAP must have specific permissions. These permissions are required for the OLAP<br />

interface to SAP BW and are therefore relevant to both reporting and planning activities.<br />

For more information about guidelines for working with SAP BW data, see the Framework Manager<br />

User <strong>Guide</strong>.<br />

For more information about access permissions for modelling and reporting access, see the IBM<br />

Cognos 8 <strong>Planning</strong> Installation and Configuration <strong>Guide</strong>.<br />

For information about setting up your environment to work with SAP BW and <strong>Planning</strong>, see the<br />

IBM Cognos 8 <strong>Planning</strong> Installation and Configuration <strong>Guide</strong>.<br />

Create a Detailed Fact Query Subject<br />

The detailed fact query subject is a model query subject based on database query subjects and calculations. The relational folder is where the SAP star schema is imported to. The detailed fact query subject is the logical representation of the fact table, and the query subjects in the relational folder are the physical representation of the SAP fact table. We recommend that you do not modify the contents of the relational folder unless advised by customer support.<br />

Steps<br />

1. In Framework Manager, click the Key Figures dimension.<br />

2. From the Tools menu, click Create Detailed Fact Query Subject.<br />

3. In the metadata wizard, select the data source you want to use.<br />

You can create a new data source by clicking the New button and specifying SAP BW for<br />

<strong>Planning</strong> as the type.<br />

4. Click OK.<br />

Framework Manager creates a model query subject named Detailed_Key_Figures and a separate<br />

folder containing references to the relational objects. The references to the relational objects<br />

are the physical layer.<br />

5. Create the package.<br />

Note: Packages that contain the Detailed_Key_Figures query subject are accessible to and supported by the report authoring tools, such as Query Studio and Report Studio, only if the Detailed_Key_Figures and Relational_Objects are hidden. To hide them, do the following:<br />

● In the Define Objects screen, click the down arrow and choose Hide Component and Children.<br />

● Click Detailed_Key_Figures and Relational_Objects.<br />

6. Publish the package.<br />

Recommendation - Query Items<br />

It is a common requirement to concatenate two or more fields from a data source when creating<br />

D-Lists in Analyst. When importing D-Lists from an IBM Cognos Package, you perform the con-<br />

catenation in Framework Manager by creating a new query item. The query item can then be<br />

included in the published package and imported into D-Lists and used in D-Links.<br />

When working with SAP BW, you can use a concatenated query item to build a D-List in Analyst.<br />

However, when you create a link, either in Analyst or <strong>Contributor</strong>, then the concatenated query<br />

item cannot be used. Instead, use one of the underlying query items for the source and use a substring<br />

on the target dimension.<br />

When applying a filter in Framework Manager, you specify how it is used by selecting a usage<br />

value. To see the filtered data when publishing a package in <strong>Planning</strong>, select Always or Optional.<br />

See the Framework Manager User <strong>Guide</strong> for more information.<br />

Recommendation - Hierarchy<br />


These recommendations will help improve performance when working with the SAP BW import<br />

process.


● Use manageably sized dimensions when importing SAP BW data. <strong>Planning</strong> relies on lookups against the SAP BW hierarchies during the import process, so larger hierarchies slow down the import. This may require modelling in SAP BW if the source data is at a higher level of detail than the <strong>Planning</strong> process requires.<br />

● Where possible, take data from the lowest level in the BW hierarchies. Data is taken from the fact table level and aggregated to the level selected in the <strong>Planning</strong> link, so the further up the hierarchy that members are mapped into <strong>Planning</strong>, the more aggregations need to be recreated during the import process.<br />

Recommendation - Hiding the Dimension Key Field<br />

When working with SAP BW data, hide the Dimension Key field for any dimension in the model (not the package), for both OLAP and detailed fact query subject access, before the package is published. It is not intended for direct use from within IBM Cognos <strong>Planning</strong>.<br />

Working with Packages<br />

To avoid a high number of query subjects and query items when creating packages for <strong>Planning</strong>, make each package as specific as possible so that it contains only objects that are useful to a <strong>Planning</strong> user.<br />

Using a naming convention may also help, such as using <strong>Planning</strong> as a prefix for your packages. Advanced users could instead create a single package that holds all of the source objects.<br />

Troubleshooting Detailed Fact Query Subject Memory Usage<br />

When executing administration links that use the Detailed Fact Query Subject, one of the internal IBM Cognos components builds temporary files in the Temp folder under the IBM Cognos installation directory. The temporary files are deleted after the query completes, but these files can be large, depending on how much data is being retrieved. If the drive that contains the Temp folder does not have enough space to contain the temporary files, the query will fail and you will receive the following error:<br />

Error Message: DM-DBM-0402 COGQF driver reported the following:~~~~COGQF failed to execute query - check logon / credential path~~~~DM-DBM-0402 COGQF driver reported the following:~~~~RQP-DEF-0177 An error occurred while performing operation 'sqlOpenResult' status='-28'.~~UDA-SQL-0114 The cursor supplied to the operation "sqlOpenResult" is inactive.~~UDA-SOR-0005 Unable to write the file.~~~~~~DM-DBM-0402 COGQF driver reported the following:~~~~<br />

Make at least 2 MB of hard drive space available on the installation location’s drive. If you still<br />

receive the error, then make more hard drive space available.<br />
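A quick way to check the available space before running such links is a short script. This is only a sketch: it uses the system temporary directory as a stand-in, so point `temp_dir` at the Temp folder under your IBM Cognos installation directory instead.<br />

```python
import shutil
import tempfile

# Sketch: report free space on the drive that holds a temporary directory.
# tempfile.gettempdir() is a stand-in; for IBM Cognos, use the Temp folder
# under the installation directory instead.
temp_dir = tempfile.gettempdir()
usage = shutil.disk_usage(temp_dir)
free_mb = usage.free / (1024 * 1024)
print(f"{free_mb:.0f} MB free on the drive holding {temp_dir}")
```

If the reported free space is low, clear space on that drive before re-running the administration link.<br />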


Deploying the <strong>Planning</strong> Environment and Viewing the Status<br />

of Deployments<br />

Export a Model<br />

You can export or import complete models, macros, administration links, or Analyst libraries with<br />

or without the associated data from IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong> or Analyst. You deploy<br />

a model by exporting it from one environment and importing it into another.<br />

You can export a model structure, with or without data, to move between development, test, and<br />

production environments or to send a model with or without data to IBM Cognos Customer<br />

Resource Center.<br />

When you export a model, IBM Cognos 8 Reports, Events, or Framework Manager Models associated with the <strong>Planning</strong> Network are not exported.<br />

The model structure and data are exported to the deployment directory location set in IBM Cognos<br />

Configuration.<br />

You can back up an application by exporting it, but we do not recommend this as a substitute for database backup.<br />

We recommend that you sign in to all namespaces before beginning the export.<br />

Steps<br />

1. From the Tools menu, click Deployment and then click one of the following:<br />

● Export Model<br />

● Export Model and Data<br />

2. In the Welcome to the Export Deployment Wizard page, click Next.<br />

3. Select the objects you want to export and click Next.<br />

Selecting a top-level object selects all the children of that object.<br />

4. Type a new name for the export, or choose a name from existing deployments and click Finish.<br />

5. Click OK.<br />

The export request starts on the server.<br />

You can view the progress of the export in the Monitoring Console on the Deployments tab.<br />

To transfer the deployment to a new environment, move the export folder from the source deployment directory location to the deployment directory location for the target environment.<br />

Compress the export folder to transfer your export to IBM Cognos Resource Center.<br />

Import a Model<br />

You can import a model or object to move an application into a test or production environment.<br />

Models for import must be in the deployment directory location set in IBM Cognos Configuration.


You can import macros, administration links, applications, Analyst libraries, and security rights<br />

from the source <strong>Planning</strong> Content Store that were exported during a previous deployment. You<br />

can select exported objects for import or import an entire model. If a model was exported with<br />

data, then the data will be used during the import.<br />

You can import administration links and macros even if they reference an application that is not<br />

in the target destination. If imported with a related application, macros and administration links<br />

are automatically mapped to the target application.<br />

Through the import process, you can change the target datastore and security for your model. The<br />

deployment wizard attempts to map security settings for users, groups, and roles. If you are using<br />

different namespaces or changing user, group, or role mappings, you may have to complete some<br />

of the mapping manually.<br />

The security settings for the source are applied to the user, group, or role to which you map. You<br />

can map source users, groups, and roles together or individually to any single target user, group,<br />

or role. When mapping a number of users, groups, or roles, the target maintains the greatest level<br />

of security privileges. Any unmapped items are mapped to <strong>Planning</strong> Rights Administrator and do<br />

not appear individually as a user, group, or role in the target.<br />

Application IDs and object names must be unique within the <strong>Planning</strong> <strong>Administration</strong> Domain.<br />

During the import process, if duplicate names or IDs are found, you are warned. If you proceed with the import without changing names and IDs, then any existing applications or objects with common names or IDs will be overwritten.<br />

To import <strong>Contributor</strong> applications, you must have at least one configured datastore and the<br />

<strong>Planning</strong> content store must be added to a job server. A datastore is not required to import Analyst<br />

libraries, macros, or administration links.<br />

Steps<br />

1. From the Tools menu, click Deployment and then click Import.<br />

2. In the Welcome to the Import Deployment Wizard page, click Next.<br />

3. In the Deployment Archive Name page, select a deployment to import and click Next.<br />

4. In the Import Object Selection page, select the objects for import and click Next.<br />

Selecting a top level object selects all the children of that object.<br />

5. In the Namespace Mapping page, select the target namespace for each source namespace, and<br />

click Next.<br />

6. The User Group Role Mapping page contains a tab for each namespace mapping. For each<br />

mapping, assign the correct target user, group, or role to each source by clicking the ellipsis<br />

(…) button.<br />

7. On the Select entries (Navigate) page, in the available entries directory, click the namespace<br />

that contains the target user, group, or role.<br />

8. From the selected entries, select the target user, group, or role and click OK.<br />

Chapter 9: Managing Data<br />

<strong>Administration</strong> <strong>Guide</strong> 171


Chapter 9: Managing Data<br />

9. Complete the user, group, or role mapping for each namespace mapping. When you have mapped each source user, group, or role to a target, click Next.<br />

10. For each application or library with a warning next to it in the Object Mapping page, click the<br />

ellipsis (…) button to change the configuration settings. You can also map all target options<br />

by clicking Map All and adding a prefix or suffix to the object names. You can also set the job<br />

server cluster for the application.<br />

11. On the Configuration settings page, type new names, IDs, and file locations, and click OK.<br />

For an Oracle or DB2 datastore, you must identify tablespaces for data, indexes, blobs, and a<br />

temporary tablespace.<br />

12. To avoid overwriting macros or administration links, for each object with a warning next to<br />

it in the Object Mapping page, type a new name for the target object directly into the target<br />

column.<br />

13. Optionally, if you are importing a model without data, select the option to automatically go<br />

to production with all imported applications during the import process.<br />

14. If you are overwriting objects, you are prompted to confirm the import; to continue, click Yes.<br />

15. Click Finish.<br />

16. Click OK.<br />

The import request starts on the server.<br />

You can view the progress of the import in the Monitoring Console on the Deployments tab.<br />

If you did not set the job server cluster in the Map All dialog, refresh the console after the transfer<br />

is complete and add any newly created applications to a job server cluster.<br />

For more information, see "Add Applications and Other Objects to a Job Server Cluster" (p. 57).<br />

Tip: During the import process, some application options are excluded from the transfer because<br />

they do not apply to the new application location, for example, display names, backup location,<br />

and publish options are excluded. If these options are required, you can include them by modifying<br />

the AdminOptions to exclude during Limited transfer or AdminOptions to exclude<br />

during Full transfer resource values in the \bin\epPNHelperResource.xml<br />

file.<br />

View the Status of Existing Deployments<br />


If the export or import deployment request fails, you can view the errors. You can also view the<br />

status of export and import processes currently running on the server through the Monitoring<br />

Console.<br />

Steps<br />

1. From the Tools menu, click Deployment and then click View Status of Previous Exports and<br />

Imports.


2. In the Welcome to the View Existing Deployment Wizard page, click Next.<br />

3. Select the request and click Next.<br />

The log of the deployment request appears. Errors and warnings are shown for failed requests.<br />

Troubleshooting Out of Memory Exception When Exporting During a Deployment<br />

You may receive an "Out of Memory" message when doing an export during a deployment.<br />

On all machines that are running the <strong>Planning</strong>AdminConsole service, modify the resource memory allocation for Java Command Line. From \bin\, open the epPNHelperResources.xml file and lower the memory usage in the following resource line as follows:<br />

Original<br />

- - <br />

Modified<br />

- - <br />

Importing Text Files into Cubes<br />

To import data into cubes, follow this process:<br />

❑ Create the source file (p. 173).<br />

❑ Select the cube and the text file to load into the cube (p. 175).<br />

❑ Load the data into the datastore (p. 175).<br />

❑ Prepare the import data blocks (p. 176).<br />

❑ Run the Go to Production process (p. 243).<br />

Important: If you have more than one <strong>Planning</strong> <strong>Administration</strong> Console service, you cannot use<br />

this method for importing text files. You must load data into Import Tables (im_cubename) or<br />

manually copy the file onto each of your servers prior to running a Prepare Import job.<br />

Creating the Source File<br />

If the local import file name contains any special characters, such as á or %, the <strong>Administration</strong><br />

Console removes or replaces them when determining the remote file name on the administration<br />

server. You can avoid this by specifically configuring the Import Options setting in the Admin<br />

Options window (p. 79).<br />
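The exact substitutions the <strong>Administration</strong> Console applies depend on your Import Options setting. As a rough illustration only (not the product's actual rule), a remote-safe name can be derived by replacing special characters:<br />

```python
import re

# Hypothetical illustration of deriving a remote file name: replace any
# character outside A-Z, a-z, 0-9, dot, underscore, and hyphen with "_".
# The Administration Console's real substitution rules are configurable
# in Admin Options and may differ.
def safe_remote_name(local_name: str) -> str:
    return re.sub(r"[^A-Za-z0-9._-]", "_", local_name)

print(safe_remote_name("ventas á 100%.txt"))
```

Naming local files with plain ASCII letters, digits, dots, and underscores avoids the renaming altogether.<br />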

The file must be in the following format:<br />

● tab separated<br />

● dimensions in the same order as in the IBM Cognos 8 <strong>Planning</strong> - Analyst cube<br />

● value comes last<br />

● no double quotation marks<br />


One million rows per e.List item per cube is a good size limit.<br />

Import Data Source File Sample<br />

The following sample source file is an extract from a tab separated text file that can be used to import data into the Corporate Expenses cube in the sample go_expenses_contributor library. Each line is one record: the dimension items appear in cube order, separated by tabs, with the value last.<br />

Budget Version 1	Asia Pacific	Sales	Nov	613300	Communications: mobile phone	670<br />

Budget Version 1	Asia Pacific	Sales	Nov	613500	Communications: telephone equipment	370<br />

Budget Version 1	Asia Pacific	Sales	Nov	615100	Supplies: computer supplies	680<br />

Budget Version 1	Asia Pacific	Sales	Nov	615300	Supplies: office supplies	300<br />

Budget Version 1	Asia Pacific	Sales	Nov	615400	Supplies: fax & photocopier	350<br />

Budget Version 1	Asia Pacific	Sales	Nov	615500	Supplies: catering	1280<br />

Budget Version 1	Asia Pacific	Sales	Nov	618200	Services: legal	14000<br />

Budget Version 1	Asia Pacific	Sales	Nov	619500	Services: recruitment	8000<br />

Budget Version 1	Asia Pacific	Sales	Dec	613100	Communications: line charges	340<br />

Budget Version 1	Asia Pacific	Sales	Dec	613200	Communications: long distance	450<br />

Budget Version 1	Asia Pacific	Sales	Dec	613300	Communications: mobile phones	670<br />

Budget Version 1	Asia Pacific	Sales	Dec	613500	Communications: telephone equipment	370<br />

Budget Version 1	Asia Pacific	Sales	Dec	615100	Supplies: computer supplies	680<br />

Budget Version 1	Asia Pacific	Sales	Dec	615300	Supplies: office supplies	300<br />

Budget Version 1	Asia Pacific	Sales	Dec	615400	Supplies: fax & photocopier	350<br />

Budget Version 1	Asia Pacific	Sales	Dec	615500	Supplies: catering	1280<br />

Budget Version 1	Asia Pacific	Sales	Dec	618200	Services: legal	14000<br />


Use the preview in the Import Data Copy tab to check that you have the source data in the correct<br />

format.<br />
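As an offline complement to the preview, the format rules above (tab separated, dimensions in cube order, value last, no double quotation marks) can be checked with a short script. The file name, rows, and column count here are illustrative, not taken from a real application:<br />

```python
import csv

# Illustrative rows for a hypothetical 5-dimension cube: dimension items in
# cube order, numeric value in the last column.
rows = [
    ["Budget Version 1", "Asia Pacific", "Sales", "Nov", "613300", 670],
    ["Budget Version 1", "Asia Pacific", "Sales", "Nov", "613500", 370],
]

def write_import_file(path, rows):
    # Tab separated, no quoting, as the import format requires.
    with open(path, "w", newline="", encoding="ascii") as f:
        writer = csv.writer(f, delimiter="\t", quoting=csv.QUOTE_NONE)
        writer.writerows(rows)

def check_import_file(path, expected_columns):
    """Verify tab separation, a consistent column count, no double quotation
    marks, and a numeric value in the last column."""
    with open(path, encoding="ascii") as f:
        for line in f:
            assert '"' not in line, "double quotation marks are not allowed"
            fields = line.rstrip("\n").split("\t")
            assert len(fields) == expected_columns, "wrong number of columns"
            float(fields[-1])  # the value must come last and be numeric

write_import_file("corporate_expenses.txt", rows)
check_import_file("corporate_expenses.txt", expected_columns=6)
```

If a check fails, rearrange the columns in the text file before loading, as described in the steps above.<br />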

Select the Cube and Text File to Load into the Cube<br />

The copy process copies the import data file to a file location on the administration server and<br />

specifies the cube that the data is to be loaded into. You can also check that your source file is in<br />

the format expected by the datastore. The administration server destination is specified in Admin<br />

Options (p. 79), but should be modified only by a database administrator.<br />

If your import files are large, it is quicker to manually copy the files to the administration server<br />

destination. If you do this, you must follow the steps described below, but do not click Copy. As<br />

soon as you have specified a valid file and location, the <strong>Administration</strong> Console registers which file<br />

is to be loaded into a particular cube. You can only process one import file at a time.<br />

Steps<br />

1. Click Development, Import Data for the appropriate application.<br />

2. On the Copy tab, in the Select cube box, click the cube to import into.<br />

3. In the Select text file to load box, enter the text file and its location.<br />

4. In the Preview pane, check that the order of columns in the text file matches the order expected<br />

by the datastore.<br />

The header row gives the names of the dimensions taken from the cube, and the final column<br />

(importvalue) contains the value. The rows below the heading contain the data from the text<br />

file.<br />

5. If the data appears to be in the wrong columns, you should rearrange the column order in the<br />

text file and repeat steps 1-5.<br />

6. Unless you want to manually copy the files, click Copy and then repeat steps 2 to 5 until you<br />

have selected all the required cube and text file pairings.<br />

The next step is to load the data.<br />

Load the Data into the Datastore<br />

Load the data into staging tables in the datastore, one table per cube. An import table is created<br />

for each cube during datastore creation. There is a column for each dimension, plus a value column.<br />

If new cubes or new dimensions are added to the Analyst model after an application is created, new import tables or columns are created when a synchronize is run and saved.<br />

The cube name associated with the import table is stored in the applicationobject table. The tables<br />

are named im_cubename. Errors are stored in ie_cubename.<br />

This process is not multi-threaded.<br />
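The staging-table layout can be sketched as follows. This uses sqlite3 purely as a stand-in for the real datastore (SQL Server, Oracle, or DB2), and the table name im_CorporateExpenses and its dimension columns are hypothetical examples of the im_cubename pattern:<br />

```python
import sqlite3

# Sketch of an import staging table: one table per cube (im_cubename),
# one column per dimension plus a value column; errors go to ie_cubename.
# sqlite3 is a stand-in for the real SQL Server, Oracle, or DB2 datastore.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE im_CorporateExpenses (
    versions TEXT, regions TEXT, departments TEXT,
    months TEXT, accounts TEXT, importvalue REAL)""")
con.execute("INSERT INTO im_CorporateExpenses VALUES "
            "('Budget Version 1','Asia Pacific','Sales','Nov','613300',670)")

# 'Delete Existing Rows' on the Load tab corresponds to clearing the
# staging table before the new load:
# con.execute("DELETE FROM im_CorporateExpenses")

row_count = con.execute(
    "SELECT COUNT(*) FROM im_CorporateExpenses").fetchone()[0]
print(row_count)  # what the Load tab reports as Row Count
```

The Row Count shown on the Load tab reflects exactly this kind of count over the rows currently in the staging table.<br />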


You can also automate the loading of import files (p. 205).<br />

Steps<br />

1. In the Import Data window for the appropriate application, click the Load tab.<br />

2. Select the Load check box for each cube that you want to load data for.<br />

Row Count indicates how many rows are currently in the import table from previously loaded<br />

data.<br />

3. Select Delete Existing Rows if you want previously loaded data to be removed. Otherwise, previously loaded rows whose names match the newly loaded data are replaced by the new data, and unmatched previously loaded rows remain in the staging table.<br />

4. Click Load.<br />

The next step is to prepare the data (p. 176).<br />

Prepare the Import Data Blocks<br />


To prepare the import data blocks, the data is taken from the import staging tables per cube, per<br />

e.List item. The import staging table is cleared. The calculation engine validates the data and converts<br />

it into import blocks. Errors are written to ie_cubename.<br />

The import data block contains just the data required for an individual e.List item. Data for other<br />

e.List items, data targeting No Data cells or formula items, and data not matching any items are<br />

removed.<br />

The process of converting data into import blocks uses the job architecture to run on multiple<br />

computers and processors. It does not conflict with other online jobs for the application. Progress<br />

logs are shown.<br />

If you are importing a large file, you can test the import file to check that it is valid, and to avoid<br />

time consuming problems. When you test the import file, a prepare job is created for the first e.List<br />

item for the selected cube in the import table. Any errors are listed, such as extra dimensions,<br />

columns in the wrong order, and invalid e.List items.<br />

If you have more than one <strong>Planning</strong> <strong>Administration</strong> Console service, you must load data into Import<br />

Tables (im_cubename) prior to running a Prepare Import job.<br />

You can also automate the preparation of import files (p. 205).<br />

Prepared Data Blocks<br />

The Prepared Data Blocks tab displays the e.List items that have import data blocks prepared. You<br />

must wait for the prepare import job to run before there are any prepared data blocks. The number<br />

of data cells prepared per cube is listed for each e.List item.<br />

Deleting the Import Queue<br />

If you decide that you do not want to proceed with the import of data, you can click the Delete<br />

import queue button.


Go to Production Process<br />

If the Prepare Import process was not run, no data is imported when Go to Production is run. To<br />

prepare import data blocks, you must cancel Go to Production and return to the Import Data<br />

window.<br />

Steps to Test the Import File<br />

1. In the Import Data window for the appropriate application, click the Prepare tab.<br />

2. In the Prepare column, select the check boxes next to the cubes that you want to test.<br />

3. Click Test.<br />

Any errors are listed in the Import Errors pane and a prepare import job is created. You can<br />

view the progress of this job in the Job Management window, or in the Monitoring Console<br />

(p. 52). If the test is successful, you can run prepare for all the import data. This overwrites<br />

test data.<br />

Steps to Prepare the Import File<br />

1. In the Import Data window for the appropriate application, click the Prepare tab.<br />

2. In the Prepare column, select the check boxes next to the cubes you are importing data into.<br />

3. If you want import blocks created for all e.List items, and not just the e.List items that you are<br />

importing data into, select the Zero Data option.<br />

This zeros existing data in the cube before import takes place.<br />

4. Click Display Row Counts to show the number of rows in the text file being imported.<br />

5. Click Prepare.<br />

If the Admin Option Display warning message on Zero Data is set to yes (p. 79), a warning<br />

message is displayed if the Zero Data option is selected. This is to prevent accidental setting of<br />

this option.<br />

A prepare import job is created. You can view the progress of this job in the Job Management<br />

window.<br />

When you perform Go To Production, the prepared data will be imported.<br />



Chapter 10: Synchronizing an Application<br />

Synchronize updates all cubes in an application when the underlying objects in IBM Cognos 8<br />

<strong>Planning</strong> - Analyst change. Changes include renaming dimensions, adding, deleting, or renaming<br />

dimension items. Synchronize an application to re-import the definition of the cubes in the<br />

application from Analyst. Synchronize also brings in new data for assumption cubes (that is, those cubes<br />

without the e.List).<br />

Before making changes to an Analyst model, you should back up the library.<br />

Synchronizing an application means that all e.List items will be reconciled (see "Reconciliation"<br />

(p. 54)) after the Go to Production process is run. We recommend that you back up the<br />

datastore before synchronizing.<br />

Changes that Result in Loss of Data<br />

The structure of the library is very important to a <strong>Contributor</strong> application. If changes are made to<br />

the library in Analyst, such as deleting a dimension item, and the application is synchronized, all<br />

data associated with that dimension item is lost.<br />

If there is a possibility that data may be lost, you may get a warning message similar to the following:<br />

"Destructive Synchronize detected. If you save the changes, this may result in the loss of data."<br />

If you receive this message, you should back up your application datastore before proceeding.<br />

A synchronize is destructive (that is, results in loss of data) in the following circumstances:<br />

● cube dimensions were added<br />

● cube dimensions were deleted<br />

● cube dimensions were reordered<br />

● detail items from a dimension were deleted<br />

● detail items from a dimension were changed to calculated<br />

See "How to Avoid Loss of Data" (p. 180) for more information.<br />

If you are automating the synchronization process, you can choose to stop the process or continue<br />

when a destructive synchronize is detected.<br />


Synchronizing an Application<br />

Synchronize an application with Analyst to ensure that all cubes that are shared with Analyst are<br />

updated.<br />

Steps<br />


1. Select Development, Synchronize with Analyst for the appropriate application. A check is made<br />

to see if you are logged on with appropriate rights. If you are not, you are prompted to log on<br />

via the IBM Cognos Common Logon.<br />

2. If the library or library name has changed, enter or browse for the new name in Library.<br />

The <strong>Administration</strong> Console checks to see if the e.List selected when creating the application<br />

still exists in the library. If it does not, a warning message appears.<br />

3. Click Synchronize to begin the synchronization process.<br />

A list of objects that could change is displayed, for example: which cubes were added, which<br />

cubes were removed, which cubes had their dimensions changed, and whether dimensions were<br />

added, deleted, or substituted.<br />

Click Advanced to see more detailed information.<br />

This displays detailed model information about what has changed. For more information, see<br />

"Model Changes Window" (p. 250).<br />

4. To save the synchronization changes, click Save.<br />

The synchronization does not take place until Go to Production is run. If you decide not to go<br />

ahead with the synchronize, click another part of the <strong>Administration</strong> Console without saving.<br />

If you save, then subsequently decide to cancel the synchronization, you can click the Reset<br />

Development to Production button on the toolbar. This discards all changes made to the<br />

application since the last time the Go to Production process was run.<br />

You can automate this process. For more information, see "Synchronize" (p. 205).<br />

Generate Scripts<br />

If the Generate Scripts option is set to Yes in Admin Options (p. 180), a check is made to see if the<br />

datastore needs to be restructured, for example, whether columns in tables must be added or deleted.<br />

If restructuring is needed, synchronize generates a script, Synchronize script.sql, when synchronize is run. This script<br />

must be run by a database administrator. Doing so changes the columns in the ie_cubename table<br />

just as synchronize does.<br />

How to Avoid Loss of Data<br />


There are two ways of avoiding loss of data when you add, delete, reorder, or substitute dimensions.<br />

The method you choose depends on the size of the e.List.<br />

The first method is to publish the production data using the View Publish Layout publish option.<br />

The procedure for this is listed in the steps below.


An alternative method is to run a <strong>Contributor</strong> to <strong>Contributor</strong> link using Analyst. This is a simpler<br />

process, but should only be used on applications with a small e.List.<br />

Data is moved from the production version of a <strong>Contributor</strong> source into the development version<br />

of the <strong>Contributor</strong> target via Analyst. Once this is complete, you must run the Go to Production<br />

process.<br />

Steps to Publish the Production Data<br />

1. Make the Analyst model change.<br />

2. Synchronize the application.<br />

3. Click the Set offline button to take the system offline.<br />

This takes the application offline in the Web client and prevents data from changing after publish begins.<br />

4. Publish the data with No Data Dimension selected (p. 279).<br />

5. Transfer the data from the ev_cubename view, using a tool such as DTS, depending on your<br />

datastore. Remember to reorder/change the columns as required.<br />

6. Run Prepare Import (p. 176).<br />

7. Run Go to Production (p. 243).<br />

8. Click the Set online button.<br />
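Step 5 varies with your datastore and tooling. The following sketch is not from the product; it illustrates the column-reorder copy using Python's sqlite3 module as a stand-in, with hypothetical table and column names.<br />

```python
import sqlite3

# In-memory stand-in for the Contributor datastore; the real transfer
# reads the ev_cubename view and is usually done with a tool such as
# DTS. Table, view, and column names here are hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ev_sales (elist TEXT, products TEXT, months TEXT, value REAL)")
con.execute("INSERT INTO ev_sales VALUES ('Region A', 'Saw', 'Jan', 100.0)")

# The import expects a different column order, so reorder the columns
# explicitly in the SELECT list rather than using SELECT *.
con.execute("CREATE TABLE import_sales (elist TEXT, months TEXT, products TEXT, value REAL)")
con.execute(
    "INSERT INTO import_sales (elist, months, products, value) "
    "SELECT elist, months, products, value FROM ev_sales"
)
row = con.execute("SELECT * FROM import_sales").fetchone()
print(row)  # ('Region A', 'Jan', 'Saw', 100.0)
```

With a real datastore the same idea applies: select from the ev_cubename view with the columns listed in the order that the import expects.<br />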

Example Synchronization<br />

In the example shown below, in the first item under Restructured Cubes, the order of the dimensions<br />

has changed. The dimension e.List has moved from fourth to second. The preview shows the new<br />

order of the dimensions and the old order.<br />


If you look at the expanded Product Price and Cost cube, you can see that the dimension e.List was<br />

added to it, and in Compensation Assumptions, the dimension e.List was removed.<br />

Under Dimensions, you can see that a dimension item (18V Trim Saw Drill/Driver Kit) was deleted<br />

from Indoor and Outdoor Products.<br />

Click Advanced to see more detailed information. This provides a detailed description of the<br />

differences between the previous Analyst model and the current model. It lists the cubes and dimensions<br />

that have changed. When you click an item, a detailed breakdown of the changes is provided.<br />

Typically, this information is used for debugging purposes.<br />

Advanced - Model Changes<br />

You can display the Model Changes window to see detailed information that is typically used by<br />

technical support and development in problem solving. This report may take some time to generate.<br />

If you have problems, you may be asked to send a file containing this information.<br />

Steps<br />

1. Click the Advanced button in the Synchronize window, or, during Go to Production, click the<br />

Advanced button in the Model Changes window.<br />

2. In the empty box at the bottom of the window, enter a file location and name and click Save.<br />

If you compare the information that you see in Synchronize, Preview with the information that<br />

is displayed by clicking Advanced, you will see, in the Model Changes window, one extra cube<br />

is listed (Expenses) and an extra section named Common Links is listed. Common links contains<br />

details of a D-Link that was changed as a result of the changes to the Compensation Assumptions<br />

cube. The Expenses cube is listed under common cubes because it is the target of the changed<br />

link.<br />

Synchronize Preview<br />



Advanced - Model Changes<br />



Chapter 11: Translating Applications into Different<br />

Languages<br />

The Translations branch enables you to translate, from an existing IBM Cognos 8 <strong>Planning</strong> -<br />

<strong>Contributor</strong> application, the strings that will appear in the Web client. The translated strings are held<br />

in the <strong>Contributor</strong> datastore along with the rest of the application and are streamed to the Web<br />

client when the users connect to the application. In addition to creating new language translations,<br />

you can create custom text translations.<br />

There are three roles involved in the translation cycle:<br />

● The model builder who creates the IBM Cognos 8 <strong>Planning</strong> - Analyst model using Analyst.<br />

● The administrator who uses the <strong>Administration</strong> Console to create and manage the <strong>Contributor</strong><br />

application.<br />

● The translator who translates the <strong>Contributor</strong> application.<br />

If a translation is upgraded from IBM Cognos <strong>Planning</strong> version 7.3 or earlier, there may<br />

be additional product strings or incompatible strings that require translation. New product strings<br />

and incompatible strings introduced during an upgrade are not automatically filled with the base<br />

language.<br />

Tip: For Classic <strong>Contributor</strong> Web client users, client extensions must be configured and tested<br />

before starting a translation.<br />

When changes, including renaming dimensions or adding dimension links, are made to the Analyst<br />

model, the <strong>Contributor</strong> application must be synchronized and Go to Production must be run to<br />

incorporate the changes into the application. Any changes to the Analyst model that the <strong>Contributor</strong><br />

application is based on may require changes to the translation.<br />

When the translation is opened in the Translations branch, changes to the base strings are displayed;<br />

however, you cannot see which strings have changed. In order to see which base strings have<br />

changed, you should export the translation from the Content Language tab both before and after<br />

synchronizing the application, giving the translations different names. You can then compare the<br />

translations to see what has changed.<br />
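One way to compare the two exports is a small script keyed on the String ID column. The sketch below assumes tab-delimited exports without a header row, in the String ID, Context, Base, Translation, Hint layout described later in this chapter; all IDs and strings are hypothetical.<br />

```python
import csv
import io

# Two hypothetical tab-delimited exports of the same translation,
# taken before and after synchronizing the application.
# Columns: String ID, Context, Base, Translation, Hint.
before = "101\tD-Cube\tSales\tVentes\t\n102\tD-List\tMonths\tMois\t\n"
after = "101\tD-Cube\tRevenue\tVentes\t\n102\tD-List\tMonths\tMois\t\n"

def base_strings(text):
    # Map String ID -> Base string for one export.
    return {row[0]: row[2] for row in csv.reader(io.StringIO(text), delimiter="\t")}

old, new = base_strings(before), base_strings(after)
changed = sorted(sid for sid in new if old.get(sid) != new[sid])
print(changed)  # ['101'] -- base strings whose translation should be revisited
```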

Assigning a Language Version to a User<br />

There are two ways of assigning a language version to a user.<br />

● By user properties defined in IBM Cognos Configuration<br />

If a translation exists in the language specified in the user’s product language preference<br />

properties, the user sees the application in this language.<br />

Use this method for a straightforward translation of the model text strings and application<br />

strings from the base language into another language.<br />


If a user selects a preferred product language that is not an IBM Cognos 8 <strong>Planning</strong> tier 1<br />

language and no translation exists, the <strong>Contributor</strong> Web client will use the model base language<br />

for content strings and the application base language for product strings.<br />

The application base language is configured in the <strong>Contributor</strong> <strong>Administration</strong> Console, Admin<br />

Options, for each application.<br />

● By user, group, or role<br />

When creating a translation, the base language of the translation will default to the base language<br />

of the application. The base language of the application defaults to the installation language<br />

of the <strong>Contributor</strong> <strong>Administration</strong> Console.<br />

You can assign a translation to a user, group, or role in the Translations branch. This method<br />

is necessary for languages that are not supported by IBM Cognos Configuration, or if you want<br />

to create a translation that takes account of local differences. For example, you may have<br />

European French and French Canadian versions, or US English and UK English versions.<br />

Translate the Application<br />


You can create a new translation in the <strong>Contributor</strong> <strong>Administration</strong> Console.<br />

Before you can translate the application, you must have translation access rights for the application.<br />

Steps<br />

1. In the <strong>Contributor</strong> <strong>Administration</strong> Console, expand the application to be translated so that<br />

you can see the Translations branch.<br />

2. Right-click Translations and click Create New Translation.<br />

The Create New Translation dialog box appears. The system locale tells you which bitmap<br />

fonts and code pages are defaults for the system that the <strong>Administration</strong> Console is running<br />

on. This should be the same as the Translation Locale; otherwise, the translation may not display<br />

properly.<br />

3. Type a translation name.<br />

4. Select By user specified preference for Product Language, or By User, Group or Role.<br />

Use By user specified preference for Product Language if you are creating a translation that<br />

uses one of the supported locales. Users who have this language specified in their properties<br />

get the translated version of the application, unless they are members of a group or role that<br />

is assigned to a different translation.<br />

Use By User, Group or Role to create a translation in a language that applies only to a specified<br />

user, group, or role.<br />

5. To select users, groups, or roles, click the ellipsis (…) button.<br />

6. Select the required namespace and then the necessary users, groups, or roles and click the add<br />

button.<br />

7. Click OK.


8. Select a translation locale.<br />

This tells the operating system which code page the <strong>Contributor</strong> application uses when running<br />

in the new language. A code page ensures that the correct character sets and keyboard layouts<br />

are associated with the <strong>Contributor</strong> application so that it will appear properly on the user's<br />

computer. For more information, see "System Locale and Code Pages" (p. 191).<br />

9. Select the translation base language from English, French, German, Japanese, Finnish, and<br />

Swedish.<br />

10. Click Create.<br />

The translation is added to the datastore and the strings from the <strong>Contributor</strong> application are<br />

loaded into the Translations branch.<br />

11. Open the new translation. To do this, click the name of the translation under Translations.<br />

You can now begin translation.<br />

Translate Strings Using the <strong>Administration</strong> Console<br />

There are two parts to the translation: content language and product language.<br />

● Content language relates to information specific to the Analyst model, which includes D-Cube<br />

names, D-List item names, e.List items names, and model name.<br />

Note: When you create a translation with Japanese as the base language, the content strings<br />

are not translated. Analyst does not support Japanese characters. To use Japanese in the<br />

<strong>Contributor</strong> Web client, you must translate the content strings.<br />

● Product language relates to generic strings such as button names, status bar text, error messages,<br />

menu and menu item names. Base strings for a language will be the same for all models.<br />

The Content Language and Product Language tabs separate the translation items into categories.<br />

The total category on the Product Language tab is used when a multi-e.List view is displayed for<br />

all contributions. There are a number of categories that appear on the Product Language tab if client<br />

extensions are installed. These allow you to translate the buttons, wizards and any messages that<br />

the user might see.<br />

The strings are color coded to indicate the status of the string. The colored squares in the Category<br />

column have the following meanings:<br />

● Blue - the translated string cell has not changed in this session.<br />

● Red - the translated string cell has changed in this session.<br />

● White - the translated string cell is empty.<br />

If strings on the Content Language tab are left blank, they will default to the base string in the Web<br />

application. If strings on the Product Language tab are left blank, they will appear blank in the<br />

Web application.<br />

Chapter 11: Translating Applications into Different Languages<br />

<strong>Administration</strong> <strong>Guide</strong> 187


Chapter 11: Translating Applications into Different Languages<br />

188 <strong>Contributor</strong><br />

If you do not have the correct system locale set on your computer, we recommend that you export<br />

the file in .xls format, and use Excel to translate the strings. This ensures that the fonts appear<br />

correctly when you are translating. For more information, see "Export Files for Translation" (p. 190).<br />

In Product Language, some of the translatable strings contain parameters that must not be changed,<br />

for example:<br />

%1:currently annotating user% has been annotating %2:e.List<br />

item% since %3:edit start time%.\r\nIf you continue and annotate<br />

then %1% will be unable to save any changes.\r\nDo you still wish<br />

to annotate %2%?<br />

The following are parameters:<br />

● %1:currently annotating user%<br />

● %2:e.List item%<br />

● %3:edit start time%<br />

You cannot add, remove, or edit parameters. However, they can be moved or repeated within the<br />

translation string.<br />
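Because parameters must survive translation unchanged, you may want to check translated strings before importing them. The following sketch is not part of the product; it uses a regular expression to compare the parameter numbers used in a base string and its translation, with hypothetical strings.<br />

```python
import re

# Matches parameter references such as %1:currently annotating user% or %1%.
PARAM = re.compile(r"%(\d+)(?::[^%]*)?%")

def params(s):
    # Collect the set of parameter numbers used in a string.
    return set(PARAM.findall(s))

base = ("%1:currently annotating user% has been annotating %2:e.List item% "
        "since %3:edit start time%. If you continue and annotate then %1% "
        "will be unable to save any changes. Do you still wish to annotate %2%?")
# A hypothetical translation that accidentally dropped parameter %3%.
translated = ("%1% annote %2% actuellement. Si vous continuez, %1% ne pourra "
              "pas enregistrer ses modifications. Voulez-vous annoter %2% ?")

missing = sorted(params(base) - params(translated))
print(missing)  # ['3'] -- a dropped parameter that must be restored
```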

These are some of the formatting codes that are used:<br />

● \r - carriage return<br />

● \n - new line<br />

● \t - tab<br />

For example:<br />

Unable to create email :-\r\n\tTo: %1:to%\r\n\tCc: %2:cc%\r\n\tSubject:<br />

%3:subject%\r\n\tBody:<br />

%4:body%<br />

Text in message boxes wraps automatically so it is not always necessary to use formatting codes.<br />

Steps<br />

1. In the Translations branch, click the name of the translation.<br />

2. Click the Content Language tab or Product Language tab.<br />

3. Enter the new strings directly in the Translated string column or into the Edit window. You<br />

can choose to:<br />

● Populate the empty column with the text in the Base string column and then edit the text.<br />

To do this, right-click in the column and click Copy base to translated.<br />

● Click on the first base string that you are going to translate. This will appear in the left<br />

pane of the Edit window. Enter or edit the translated text in the right pane. Populate any<br />

remaining blank cells with base string text if this is needed. To do this, right-click in the<br />

column and click Fill empty with base.


You can clear strings from the Translated string column by right-clicking and selecting Clear<br />

translated.<br />

4. Save the translation.<br />

A warning about empty translation strings only applies to the active tab. The translation will<br />

not appear correctly in the Web application if there are any empty content or product strings.<br />

Exporting and Importing Files for Translation<br />

If you want to translate the <strong>Contributor</strong> application outside of the <strong>Administration</strong> Console, you<br />

can use the import and export functions.<br />

You can import and export files in the following formats:<br />

● tab delimited text<br />

● xls<br />

Define an Excel worksheet (import only).<br />

● xml<br />

● other<br />

Define a custom format.<br />

Files that you import must match the format expected by the <strong>Administration</strong> Console. The best<br />

way to ensure this is to first export a file in the format that you will be editing in. After you have<br />

completed your translation, import the changed file.<br />

Import File Format<br />

The format needed for import is:<br />

String ID Context Base Translation Hint<br />

Where:<br />

● String ID - A unique identifier given to an object. This should not be changed.<br />

● Context - Groups the parts of the application into sections, for example, Model, D-Cube,<br />

Hierarchy. This should not be changed.<br />

● Base - The string to be translated, in the language in which the model was first created. This<br />

should not be changed.<br />

● Translation - The translated string. This can be changed.<br />

● Hint - A hint to help the translator.<br />

Export Files for Translation<br />

To translate the <strong>Contributor</strong> application outside of the <strong>Administration</strong> Console, you can export a<br />

file for translation.<br />

Steps<br />

1. Open the translation and click either the Content Language tab or the Product Language tab.<br />

Only one tab can be exported at a time.<br />

2. Click Export.<br />

3. Enter a file name and location.<br />

4. Select the required file format. If you click Other, you must enter a custom delimiter, for example<br />

";".<br />

5. Select Include column headers if you want to include a header row. This is useful for files that<br />

you may be opening in a tool such as Excel. It lets you see which column contains the translated<br />

strings.<br />

If you are modifying the export file to import it into the <strong>Administration</strong> Console, you should<br />

only edit the text in the Translation column. If you edit the text in the String ID, Category, or<br />

Base columns, you will have problems importing the file.<br />

6. Click OK.<br />

Import Translated Files<br />


After you have completed your translation, you can import a translated file.<br />

Steps<br />

1. Open the translation into which you are going to import the translated file.<br />

2. Click either the Content Language or the Product Language tab.<br />

Select the same tab that you originally exported the file from.<br />

3. Click Import.<br />

4. Select the appropriate file format.<br />

If you select xls, you should enter the worksheet name in the Excel worksheet box. If you select<br />

Other, enter a delimiter in the Delimiter box.<br />

5. Click the First row contains field names option if applicable.<br />

6. Enter the file name and location in the File name box and click OK.<br />

7. Click Save.
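As an illustration of the expected layout, a tab-delimited file with a header row might look like the hypothetical example below, which also shows how the columns line up when parsed:<br />

```python
import csv
import io

# A hypothetical tab-delimited translation file with a header row; when
# importing it, select the First row contains field names option.
data = (
    "String ID\tContext\tBase\tTranslation\tHint\n"
    "101\tD-Cube\tSales\tVentes\t\n"
    "102\tD-List\tMonths\tMois\tCalendar months\n"
)
rows = list(csv.DictReader(io.StringIO(data), delimiter="\t"))

# Only the Translation column should ever be edited before import.
for row in rows:
    print(row["String ID"], "->", row["Translation"])
```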


Search for Strings in the Content Language or Product Language<br />

Tab<br />

You can search for strings in either the Content Language or Product Language tab using search<br />

criteria.<br />

Steps<br />

1. Click Find.<br />

2. Select the search criteria:<br />

● Find what - Enter the text string. Previous searches are saved and you can select them from<br />

the list.<br />

● Search (All Categories, Selected Rows) - Select whether you want to search all categories, or<br />

selected rows.<br />

● Direction - Select the direction for the search, either Up or Down (default).<br />

● Search in - Choose whether you want to search in translated string or base string.<br />

3. To begin the search, click Find next.<br />

Translating Help<br />

You can translate <strong>Contributor</strong> help text in the same way that you translate other strings.<br />

Cube help is located on the Content Language tab under Cube help for . The first row<br />

has simple cube help, the second row is detailed cube help that may contain HTML tags. These<br />

tags can be modified.<br />

Instructions are also located on the Content Language tab under Application Headline and<br />

Instructions. The top row is the headline and the bottom row is the instructions.<br />

It is not possible to translate the default help supplied with the <strong>Contributor</strong> Web site from the<br />

<strong>Contributor</strong> <strong>Administration</strong> Console.<br />

System Locale and Code Pages<br />


Before you begin translation, there are some issues you must consider. When you create a new<br />

translation, you select a translation locale to be associated with the translation. The translation<br />

locale tells the system which code page the <strong>Contributor</strong> application uses when running in the Web<br />

client. The code page is a table that relates the binary character codes used by a program to keys<br />

on the keyboard or to characters on the display and is used to ensure that text appears in the<br />

appropriate language, as determined by the system locale of the computer. The available system<br />

locales are determined by the installed language groups, which enable programs to show menus<br />

and dialogs in the user’s native language.<br />

When you create the translation, a check is run to ensure that the translation locale ID matches<br />

that of the system locale and the code page. If it does not match, a warning is issued. This is because<br />

the strings are not displayed in the Translations branch in the same way as they are in the Web<br />

client. Any double-byte strings that contain characters that are not contained in the code page will<br />

not be shown in that code page (typically they will appear as "?"). Even though the characters do<br />

not appear correctly, this is only a display problem. The characters are stored correctly internally.<br />
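The "?" substitution described above is standard code-page behavior and is easy to reproduce. This sketch encodes Japanese text to the Windows-1252 (Western) code page:<br />

```python
# Characters outside a code page cannot be represented; with a
# replacement policy they become "?", which is what the Translations
# branch shows when the system locale's code page lacks a character.
text = "日本語"  # Japanese, not representable in Windows-1252
encoded = text.encode("cp1252", errors="replace")
print(encoded)  # b'???'

# The underlying string itself is intact: only the display form is lossy.
assert text == "日本語"
```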

If you do not want to change the system locale to that of the language you are translating to, you<br />

can export a text file containing the client strings and translate the strings using Excel. Excel handles<br />

languages very well and you will see them appear correctly. You should then save the file in .xls<br />

format, import it to the Translations branch, and click Save.<br />

About Fonts<br />

A font has both a name and a set of charsets that it supports. Some fonts support a variety of<br />

charsets from the Unicode range and some do not. Some, such as MS Sans Serif, are not Unicode<br />

fonts and only support the Western code pages.<br />

Japanese has limited support from the set of fonts that are installed by default on a US/English<br />

version of Windows 2000/XP. None of the standard fonts supplied with Windows 2000 support<br />

the Japanese charset.<br />

Once you install the Japanese language pack, however, standard Japanese fonts are installed on<br />

your system (Japanese fonts all have unique names).<br />

For Korean or Japanese translations to appear correctly in the <strong>Contributor</strong> Web client, you must<br />

set them as the default character set for the operating system.<br />

Tip: You can install fonts for East Asian languages through Regional and Language Options in the<br />

Windows Control Panel.


Chapter 12: Automating Tasks Using Macros<br />

You can automate common tasks that are performed in <strong>Contributor</strong> <strong>Administration</strong> Console by<br />

using macros. Formerly known as Automation Scripts and created by the <strong>Contributor</strong> Automation<br />

tool, the automation of tasks is now integrated with <strong>Contributor</strong> <strong>Administration</strong> Console using<br />

macros and macro steps. This makes it easier to maintain and use automated tasks.<br />

Macros, comprised of macro steps, can be<br />

● run interactively within <strong>Contributor</strong> <strong>Administration</strong> Console<br />

● triggered by events, such as the submitting of data<br />

● scheduled to run in IBM Cognos Connection<br />

● run from the Windows command line interface<br />

● run from Windows scripts and batch files via a batch scheduling tool<br />

Macros are stored in the <strong>Planning</strong> content store. Macros and macro steps can be exported and<br />

imported again as an XML file.<br />

❑ Create a new macro (p. 194).<br />

❑ Run a macro from the <strong>Administration</strong> Console (p. 225), from IBM Cognos Connection (p. 225),<br />

as an IBM Cognos 8 Event (p. 226), from the command line (p. 228), or as a batch file (p. 228).<br />

❑ Set access rights for macros, (p. 42).<br />

❑ Troubleshoot macro errors, (p. 229).<br />

Common Tasks to Automate<br />

Using the <strong>Contributor</strong> <strong>Administration</strong> Console, you can group related tasks into a single macro to<br />

execute them in sequence. Macros can be run in the <strong>Administration</strong> Console, or by using external<br />

scheduling tools.<br />

For example, you can automatically<br />

● import an e.List and rights from files<br />

● run administration links<br />

● import simple (non-rule based) access tables<br />

● synchronize with Analyst<br />

● publish only changed data<br />

The following are examples of tasks in the <strong>Administration</strong> Console that are performed frequently<br />

and are often automated using either Macros or a batch scheduler tool.<br />


● Import and Process Data - macro steps: File (repeated for each Cube), Prepare Import, Go To<br />

Production<br />

● Update Application with External Changes - macro steps: Synchronize, Import e.List, Import<br />

Rights, Import Access Table, Go to Production<br />

● Month End (Move data to other systems) - macro steps: Set an Application Offline, Execute<br />

<strong>Administration</strong> Link, Publish<br />

Creating a Macro<br />

Create macros using the Macros tool in <strong>Contributor</strong> <strong>Administration</strong> Console. Macros are used to<br />

automatically perform tasks. Macros are stored in the <strong>Planning</strong> content store.<br />

❑ Create a new macro (p. 194)<br />

❑ Create a macro step (p. 195) or transfer macro steps from another macro (p. 198)<br />

Create a New Macro<br />

Create a new macro to create an automated task. Macros published to IBM Cognos Connection<br />

can be run or scheduled from the Content <strong>Administration</strong> area of IBM Cognos <strong>Administration</strong> or<br />

used to create an event or job.<br />

Steps<br />

1. In the <strong>Contributor</strong> <strong>Administration</strong> tree, click the Macros icon.<br />

2. Click New.<br />

3. In the Macro Name box, type the name of the new macro. Select the Publish to IBM Cognos<br />

Connection check box to make the macro accessible in IBM Cognos Connection, and click OK.<br />

The new macro appears in the Macros box. The Edit State is set to Incomplete because no steps<br />

are added yet, and the Run State is set to Ready.<br />

Tip: Rename or delete a macro by selecting the macro in the Macros list and clicking Rename<br />

or Delete.


Create a Macro Step<br />

Use macros to group and run a number of macro steps in the specified sequence. A macro step holds<br />

the parameters for the macro. For example, to import some data you might run the following macro<br />

steps: Wait for Any Jobs, Load Import Data, Prepare Import, and Go to Production.<br />

Automation scripts used in the <strong>Contributor</strong> 7.2 Automation tool are matched to the macros used<br />

in the current <strong>Contributor</strong> <strong>Administration</strong> Console in the following tables.<br />

Note: Automation scripts created in version 7.2 and earlier cannot be used as macro steps.<br />

Job Servers<br />

Function to automate | Type of macro step | 7.2 automation script name<br />

Add a container to a job server | Add Monitored Job Object (p. 199) | AddMonitoredApplications.xml<br />

Stop a job server at a scheduled time | Disable Job Processing (p. 199) | StopApplicationServer.xml<br />

Start a job server at a scheduled time | Enable Job Processing (p. 199) | StartApplicationServer.xml<br />

Generate a report on jobs in a container | Job Doctor (p. 200) | JobDoctor.xml<br />

Remove a container from a job server | Remove Monitored Job Object (p. 200) | RemoveMonitoredApplications.xml<br />

Control the maximum number of jobs that can run on a job server | Set Max Concurrent Job Tasks (p. 201) | SetMaxConcurrentJobTasksForApplicationServer.xml<br />

Set how often a job server checks for jobs | Set Polling Interval for Job Server (p. 201) | SetPollingIntervalForApplicationServer.xml<br />

Schedule other macro steps to allow any jobs to finish before other jobs start | Wait for Any Jobs (p. 201) | WaitForAnyJobs.xml<br />

Development<br />

Function to automate | Type of macro step | 7.2 automation script name<br />

Take the development <strong>Contributor</strong> application and create a production application | Go to Production (p. 202) | AutomatedGoToProduction.xml<br />

Move data from import staging tables | Prepare Import (p. 205) | AutomatedPrepareImport.xml<br />

Load data into the datastore staging table | File (p. 204) | UploadImportFile.xml<br />

Synchronize Analyst and <strong>Contributor</strong> | Synchronize (p. 205) | N/A<br />

Run an Analyst macro | Execute Analyst Macro (p. 206) | N/A<br />

Import access tables into a <strong>Contributor</strong> application | Import Access Table (p. 206) | N/A<br />

Import an e.List into a <strong>Contributor</strong> application | Import e.List (p. 208) | N/A<br />

Import rights into a <strong>Contributor</strong> application | Import Rights (p. 209) | N/A<br />

Run an existing Deployment Import or Deployment Export | Run Deployment (p. 210) | N/A<br />

Load the development model XML into a development application | Upload a Development Model (p. 210) | UploadDevelopmentModel.xml<br />

Production<br />

Function to automate | Type of macro step | 7.2 automation script name<br />

Publish data collected by <strong>Contributor</strong> to a datastore (using default parameters) | Publish - View Layout (p. 212) | N/A<br />

Publish data collected by <strong>Contributor</strong> to a datastore (configuring all parameters) | Publish - View Layout - Advanced (p. 212) | AutomatedPublish.xml<br />

Publish data collected by <strong>Contributor</strong> to a datastore for reporting purposes | Publish - Table Only Layout (p. 214) | N/A<br />

Publish only changed data for e.List items | Publish - Incremental Publish (p. 216) | N/A<br />

Delete user or audit Commentary | Delete Commentary (p. 217) | DeleteAnnotations.xml<br />

Run an Admin Extension in a <strong>Contributor</strong> application | Execute an Admin Extension (p. 218) | ExecuteAdminExtension.xml<br />

Set a <strong>Contributor</strong> application online or offline | Set an Application Online or Offline (p. 219) | UpdateWebClientBarrier.xml<br />

Administrator Links<br />

Function to automate | Type of macro step | 7.2 automation script name<br />

Run an <strong>Administration</strong> Link in a <strong>Contributor</strong> application | Execute an <strong>Administration</strong> Link (p. 219) | N/A<br />

Macros<br />

Function to automate | Type of macro step | 7.2 automation script name<br />

Generate debugging report for macros | Macro Doctor (p. 221) | N/A<br />

Test a macro | Macro Test (p. 221) | Test.xml<br />

Run a macro | Execute Macro (p. 221) | N/A<br />

Run a program from the command line | Execute Command Line (p. 222) | N/A<br />

Import a macro | Import Macros from Folder (p. 222) | N/A<br />

Export a macro | Export Macros to Folder (p. 223) | N/A<br />

Session<br />

Function to automate | Type of macro step | 7.2 automation script name<br />

Remove an application lock | Remove Application Lock (p. 223) | N/A<br />

Steps<br />

1. In the Macro Steps area, click New.<br />

The Select new macro step type dialog box appears.<br />

2. Click the type of macro step you want to add.<br />

3. Click OK.<br />

A dialog box appears with the parameters relevant to the type of macro step you selected.<br />

4. Review each parameter and change it as required. For information about the parameters, see<br />

the topic for the type of macro step you selected in step 2.<br />

5. Click Validate to check the validity of the parameters.<br />

6. If you want to add other steps to the macro, click New and repeat steps 2 to 5 for each macro<br />

step.<br />

7. When you are done, click OK.<br />

The macro steps are added to the list of macro steps contained in the macro.<br />

Tips: To edit or delete a macro step, click the macro step and click Edit or Delete. To reorder<br />

macro steps, click the macro step and then click the Move Up or Move Down button.<br />


Transferring Macros and Macro Steps<br />

You can copy steps from one macro to another, create a backup copy of your macro, add steps to<br />

another macro, and make a copy of an existing macro.<br />

Steps<br />

1. Click the Macros icon in the tree.<br />

2. In the Macros list, select the macro and click Transfer.<br />

3. Configure the following properties:<br />

Property | Description<br />

Direction: From | Exports a macro step from the existing macro.<br />

Direction: To | Imports a macro step into the existing macro.<br />

Delete Source | Deletes the macro step from the original macro.<br />

Copy From: Other Macro | Copies the selected macro step to another macro.<br />

Copy From: New Macro | Creates a new macro that includes the selected macro step.<br />

Copy From: (self) | Makes a copy of the selected macro step in the existing macro.<br />

Copy From: Files in Folder | Copies the selected macro step to a folder, which is used for backup purposes.<br />

Publish to IBM Cognos Connection | Publishes the macro to IBM Cognos Connection for use in Event Studio.<br />

Select Steps: All Steps | Selects all macro steps in the macro.<br />

Select Steps: Specify Steps | Specifies which macro steps to include in the transfer.<br />

4. Click OK.<br />


Job Servers (Macro Steps)<br />

Job servers can be managed using macro steps. Macro steps can do things such as enable or disable<br />

job processing and set polling intervals.<br />

For more information about managing job servers, see "Manage a Job Server Cluster" (p. 56).<br />

Add Monitored Job Object<br />

Use this macro step to automate the addition of a container to a job server. It works with the<br />

Remove Monitored Job Object macro step. You can schedule the addition and removal of job<br />

objects from job servers at a time that is appropriate for your business.<br />

In <strong>Contributor</strong>, an application can run on more than one job server. The administrator adds a job<br />

server to <strong>Contributor</strong> <strong>Administration</strong> Console, and then adds applications to the job server. The<br />

administrator can start and stop job servers from <strong>Administration</strong> Console, but cannot start and<br />

stop individual applications.<br />

The following table describes the relevant parameters.<br />

Parameter | Description<br />

Macro Step Name | The name of the macro step.<br />

Job Server or Cluster | Browse to the job server or cluster that you want to add the container to by clicking the Browse button and selecting the correct server name.<br />

Job Container | The container you want to monitor.<br />

Disable Job Processing<br />

Use this macro step to stop a job server at a scheduled time.<br />

The following table describes the relevant parameters.<br />

Parameter | Description<br />

Macro Step Name | The name of the macro step.<br />

Job Server or Cluster | The job server or cluster that you want to stop.<br />

Enable Job Processing<br />

Use this macro step to start a job server at a scheduled time.<br />

The following table describes the relevant parameters.<br />

Parameter | Description<br />

Macro Step Name | The name of the macro step.<br />

Job Server or Cluster | The job server or cluster that you want to start.<br />

Job Doctor<br />

Use this macro step to generate a report on jobs in a container. The report is in XHTML format.<br />

It is typically used on the advice of Technical Support and can be used to help debug problems with<br />

<strong>Contributor</strong> jobs.<br />

Tip: Adding a Wait for Any Jobs macro step before the Job Doctor macro step ensures that all jobs<br />

are complete before moving on to this macro step. For more information on the Wait for Any Jobs<br />

macro step, see "Wait for Any Jobs" (p. 201).<br />

The following table describes the relevant parameters.<br />

Parameter | Description<br />

Macro Step Name | The name of the macro step.<br />

Job Container | The container you want to report on.<br />

Include contents of Admin History | Whether to include the adminhistory table information in the report. Although this is useful information, it can slow down report generation and make the output quite large.<br />

Report file name (Enter Local Application Server Path) | A path and file name for the XHTML report.<br />

Remove Monitored Job Object<br />

Use this macro step to automate the removal of a container from a job server. It is the converse<br />

macro step to the Add Monitored Job Object macro step. You can schedule the addition and removal<br />

of applications from job servers at a time that is appropriate for your business.<br />

In <strong>Contributor</strong>, an application can run on more than one job server. The administrator adds a job<br />

server to <strong>Contributor</strong> <strong>Administration</strong> Console, and then adds applications to the job server. The<br />

administrator can start and stop job servers from <strong>Administration</strong> Console, but cannot start and<br />

stop individual applications.<br />

The following table describes the relevant parameters.<br />

Parameter | Description<br />

Macro Step Name | The name of the macro step.<br />

Job Server or Cluster | Browse to the job server or cluster that you want to remove the container from by clicking the Browse button and selecting the correct server name.<br />

Job Container | The container you want to stop monitoring.<br />

Set Max Concurrent Job Tasks<br />

Use this macro step to control the maximum number of jobs that can run on a job server. This is<br />

useful if the computer is slow or you must run other non-<strong>Contributor</strong> applications on the server at<br />

the same time.<br />
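As a rough aid for choosing that value, the sketch below derives a cap from the processor count. This helper is illustrative, not part of the product, and Python's os.cpu_count() reports logical rather than physical processors, so treat its result as an upper bound:

```python
import os

def suggested_max_job_tasks(reserve=0):
    # os.cpu_count() reports logical processors, which over-counts
    # physical cores on hyperthreaded hardware, so treat the result
    # as an upper bound on a sensible job-task cap.
    logical = os.cpu_count() or 1
    # Optionally reserve capacity for non-Contributor workloads
    # sharing the same server.
    return max(1, logical - reserve)

print(suggested_max_job_tasks(reserve=1))
```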

The following table describes the relevant parameters.<br />

Parameter | Description<br />

Macro Step Name | The name of the macro step.<br />

Job Server or Cluster | The job server or cluster that you want to limit job tasks on.<br />

Maximum number of Job Tasks | The maximum number of job tasks allowed. This should be no more than the number of physical processors on the computer.<br />

Set Polling Interval for Job Server<br />

Use this macro step to set how often a job server checks for jobs. The default is 15 seconds. Use<br />

this macro step to control the amount of resources used by the job service.<br />

The following table describes the relevant parameters.<br />

Parameter | Description<br />

Macro Step Name | The name of the macro step.<br />

Job Server or Cluster | The job server or cluster that you want to set the polling interval for.<br />

Polling Interval | How frequently the job server checks whether there are any jobs to run, measured in seconds. The default is 15 seconds.<br />

Wait for Any Jobs<br />

Use this macro step to ensure jobs are completed before allowing processing to continue. This macro<br />

step is especially useful when combining it with other macro steps that may not wait for all jobs to<br />

run before beginning, such as the Go To Production macro step.<br />

You can monitor the jobs in a container.<br />
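The waiting behavior described above can be sketched as a simple polling loop. This is a minimal illustration, not product code: get_running_jobs is a hypothetical callable standing in for the job-container query, and the defaults mirror the 15-second polling interval and 1440-minute timeout mentioned in this chapter.

```python
import time

def wait_for_any_jobs(get_running_jobs, poll_seconds=15, timeout_minutes=1440):
    """Block until no jobs remain in the container, or raise on timeout.

    get_running_jobs is a stand-in callable returning the jobs still
    running; the real check is performed by the macro engine.
    """
    deadline = time.monotonic() + timeout_minutes * 60
    while True:
        jobs = get_running_jobs()
        if not jobs:
            return True  # all jobs complete; processing may continue
        if time.monotonic() >= deadline:
            raise TimeoutError("%d job(s) still running at timeout" % len(jobs))
        time.sleep(poll_seconds)

# Simulated container that empties after two polls.
queue = [["reconcile"], ["reconcile"], []]
print(wait_for_any_jobs(lambda: queue.pop(0), poll_seconds=0))
```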

The following table describes the relevant parameters.<br />

Parameter | Description<br />

Macro Step Name | The name of the macro step.<br />

Job Container | The container you want to monitor.<br />

Job Timeout (in minutes) | A time period after which the macro terminates with an error if the jobs are not completed. The default is 1440 minutes, which is one day.<br />

Development (Macro Steps)<br />

Development tasks can be managed using macro steps. Macro steps can do things such as run the<br />

Go to Production process, import access tables, e.Lists, and rights into a <strong>Contributor</strong> application,<br />

and move data from or load data into staging tables.<br />

Go to Production<br />

The Go to Production process takes the development <strong>Contributor</strong> application and creates the production<br />

application, making it available to users on the Web client. A new development application<br />

is established. Use this macro step to automate the Go to Production process.<br />

Before you can run Go to Production, the application must at least have an e.List and users defined.<br />

You can run Go to Production without setting any rights, but no one can view the application on<br />

the Web client. However, you can preview the application by selecting Production, Preview in the<br />

<strong>Administration</strong> Console.<br />

When you start Go to Production, job status is checked. If jobs are running or queued, the Go To<br />

Production macro step will wait for them to complete. During the automated Go to Production<br />

process, the following checks are completed.<br />

● A check is made to see if there are any jobs running.<br />

● If any e.List items are not already reconciled, a job is created to reconcile them.<br />

● A check is made to see if a Cut-down models job is required. If it is required, the job is created<br />

and run.<br />

For more information on Go To Production, see "The Go to Production Process" (p. 243).<br />
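The sequence of checks above can be sketched as follows. This is an illustrative simulation only; the argument names (jobs_running, unreconciled_items, cut_down_needed) are stand-ins for state that the real job architecture tracks.

```python
def go_to_production(jobs_running, unreconciled_items, cut_down_needed):
    """Sketch of the order of checks in the automated Go to Production
    process, per the list above. Returns the ordered actions taken."""
    actions = []
    if jobs_running:
        # Running or queued jobs are waited on before anything else.
        actions.append("wait for running/queued jobs")
    if unreconciled_items:
        # A job is created to reconcile any unreconciled e.List items.
        actions.append("create reconcile job for %d e.List item(s)"
                       % len(unreconciled_items))
    if cut_down_needed:
        # A Cut-down models job is created and run only if required.
        actions.append("create and run cut-down models job")
    actions.append("promote development application to production")
    return actions

print(go_to_production([], ["Item1"], True))
```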

The following table describes the relevant parameters.<br />

Parameter | Description<br />

Macro Step Name | The name of the macro step.<br />

Application | The name of the <strong>Contributor</strong> application datastore that the macro step is being run against.<br />

Create <strong>Planning</strong> package | Publish the <strong>Planning</strong> package to IBM Cognos Connection.<br />

Reset e.List Item States: Minimum e.List Item State | One of the following minimum workflow states: Not Started, Work in Progress, or Locked. If you set the minimum workflow state to Work in Progress, any e.List items that have a workflow state of Not Started are reset to Work in Progress. The default is Not Started, which means no change takes place.<br />

Reset e.List Item States: Maximum e.List Item State | One of the following maximum workflow states: Not Started, Work in Progress, or Locked. The state must be greater than or the same as the minimum e.List item state. If you set Work in Progress and e.List items are Locked, the e.List items are reset from Locked to Work in Progress. The default is Locked, which means no change takes place.<br />

Reset e.List Item States: Skip Top Level e.List Items | Does not reset top-level e.List items.<br />

Jobs: Job Timeout (in minutes) | A time period after which the macro step terminates if the preexisting jobs are not completed. If jobs are running, Go to Production waits for them to finish. The default is 1440 minutes, which is one day.<br />

Jobs: Wait for jobs after Go to Production | After the Go to Production process, complete all jobs before moving on to the next macro step.<br />

Validation Report | Indicates problems with parameters set in the Go to Production process. Click Validate to recheck the validation report status.<br />

Import Data<br />

Importing data into cubes requires the following process.<br />

● Create the source file.<br />


● Select the cube and text file to load into the cube.<br />

● Load data from the text files into the datastore staging tables.<br />

● Prepare the import data blocks.<br />

● Run the Go to Production process.<br />

For more information on importing data, see "Managing Data" (p. 143).<br />

Two macro steps are provided for the import process: Upload Import File and Prepare Import.<br />

Upload an Import File<br />

Use this macro step to load data from a text file into the datastore staging table. You can load only<br />

one file at a time using this macro step.<br />

An import table is created for each cube during datastore creation. There is a column for each<br />

dimension, plus a value column. If new cubes or new dimensions are added to the Analyst model<br />

after an application is created, new import tables or columns in the tables are added after a<br />

synchronize is run and saved.<br />

The cube name associated with the import table is stored in the application object table. The tables<br />

are named im_cubename and ie_cubename. Errors are stored in ie_cubename.<br />

Files can be loaded from any location that your bulk load engine supports.<br />
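As an illustration of that staging shape, the sketch below writes a tab-delimited source file with one column per dimension plus a value column. The cube, dimension items, and file name are invented for the example:

```python
import csv

# Hypothetical rows for a three-dimension cube: one column per
# dimension (month, product, region), plus a value column, matching
# the im_cubename staging-table shape described above.
rows = [
    ("Jan", "Widgets", "North", 1200.0),
    ("Jan", "Widgets", "South", 950.5),
    ("Feb", "Gadgets", "North", 800.0),
]

with open("im_sales_load.txt", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerows(rows)

# Show the first line of the generated file.
print(open("im_sales_load.txt").read().splitlines()[0])
```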

The following table describes the relevant parameters.<br />

Parameter | Description<br />

Macro Step Name | The name of the macro step.<br />

Application | The name of the <strong>Contributor</strong> application datastore that the macro step is being run against.<br />

File to Upload (Enter Local Application Server Path) | The name and location of the file to be loaded. This can be a UNC location such as \\server\share\file.txt.<br />

Target Cube Name | The name of the cube that the data is to be imported into.<br />

Remove existing data in import table | Whether existing rows are to be deleted. Selecting this option removes previously loaded data. If the option is cleared, newly loaded data replaces previously loaded data whose names match, and previously loaded data that is not matched remains in the staging table.<br />


Prepare Import<br />

Use this macro step to take the data from the import staging tables per cube, per e.List item. The<br />

calculation engine validates the data and converts it into import blocks. Errors are written to<br />

ie_cubename.<br />

The import data block contains only the data required for an individual e.List item. Data targeting<br />

No Data cells or formula items and data not matching any items is removed.<br />

The process of converting data into import blocks uses the Job architecture to run on multiple<br />

computers and processes. It will not conflict with other online jobs for the application.<br />
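As a rough illustration of that validation step, the sketch below drops staged rows that target non-writable cells or unmatched items and collects the rejects, mirroring how rejected rows end up in ie_cubename. The data and the writable-cell set are invented for the example.

```python
# Toy model metadata: writable cells keyed by (dimension item, e.List item).
writable = {("Jan", "Boston"), ("Feb", "Boston")}

staged = [
    ("Jan", "Boston", 100.0),    # kept: targets a writable cell
    ("Total", "Boston", 999.0),  # dropped: targets a calculated item
    ("Jan", "Nowhere", 50.0),    # dropped: unmatched e.List item
]

import_block, errors = [], []
for item, elist, value in staged:
    if (item, elist) in writable:
        import_block.append((item, elist, value))
    else:
        # In the product, rejected rows are written to ie_cubename.
        errors.append((item, elist, value))

print(import_block)
print(errors)
```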

The following table describes the relevant parameters.<br />

Parameter | Description<br />

Macro Step Name | The name of the macro step.<br />

Application | The name of the <strong>Contributor</strong> application datastore that the macro step is being run against.<br />

Cube Details: Cubes to Prepare | The names of the cubes that you are going to prepare import data for.<br />

Cube Details: Cubes to Zero | The names of the cubes that you want to zero.<br />

Job Timeout (in minutes) | A time period after which the macro step terminates if the job is not completed. If the job does not succeed, an error appears. The default is 1440 minutes.<br />

Synchronize<br />

Use this macro step to automate the synchronize function for the application.<br />

For more information on synchronizing an application, see "Synchronizing an Application" (p. 179).<br />

The following table describes the relevant parameters.<br />

Parameter | Description<br />

Macro Step Name | The name of the macro step.<br />

Application | The name of the <strong>Contributor</strong> application datastore that the macro step is being run against.<br />

Analyst Library Name | The name of the Analyst library to synchronize. Either use the library name already specified for the application or select another library name.<br />

Save Changes if Destructive | A destructive synchronize removes dimensional items or cubes and results in data loss. When you run the synchronize process from the <strong>Contributor</strong> <strong>Administration</strong> Console, you can preview the changes and decide whether to save the synchronization. When running synchronize using macros, it is not possible to preview the changes before saving; instead, use this option to choose whether a synchronization that loses data is saved. A synchronize is considered destructive in the following circumstances: cube dimensions added, deleted, substituted, or reordered; detail items deleted from a dimension; detail items changed to calculated. For more information, see "Changes that Result in Loss of Data" (p. 179).<br />

Execute Analyst Macro<br />

Use this macro step to execute an Analyst Macro. For more information on Analyst macros, see<br />

the Analyst User <strong>Guide</strong>.<br />

The following table describes the relevant parameters.<br />

Parameter | Description<br />

Macro Step Name | The name of the macro step.<br />

Analyst Macro | The Analyst library containing the macro you want to run, and the specific macro.<br />

Import Access Table<br />

Use this macro step to import Access Tables into your application.<br />

For more information on Access Tables, see "Access Tables" (p. 119).<br />
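Because the import honors a delimiter, quoted strings, and an optional heading row, a file in that shape can be checked before importing with Python's standard csv module. The column names here are illustrative only, not the product's required layout:

```python
import csv
import io

# Sample access-table text: first row is a heading, comma delimiter,
# quoted strings (so an embedded comma in an item name survives).
text = ('Dimension,Item,Access\n'
        '"Products","Widgets, Large",WRITE\n'
        '"Products","Gadgets",READ\n')

reader = csv.reader(io.StringIO(text), delimiter=",", quotechar='"')
header = next(reader)                     # "First row is heading"
rows = [dict(zip(header, r)) for r in reader]
print(rows[0]["Item"])
```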

The following table describes the relevant parameters.<br />

Parameter | Description<br />

Macro Step Name | The name of the macro step.<br />

Application | The name of the <strong>Contributor</strong> application datastore that the macro step is being run against.<br />

Base access level | Choose from: No Data, Hidden, Read, or Write (default).<br />

Access Table Name (import only) | The access table name.<br />

Access Table File Path (Enter Local Application Server Path) | The path for the access table file.<br />

Trim leading and trailing whitespace | Whether you want to remove leading and trailing whitespace in the file.<br />

First row is heading | Whether the first row is used as the header row.<br />

Delete Undefined items | If an access table file was previously imported for the access table, and you are importing a new one, existing settings are updated with the new specified settings. Select this check box to delete settings that do not exist in the new file. If the check box is cleared, previous settings are retained.<br />

File Type: Text File | Whether you are importing a text file.<br />

File Type: Quoted Strings | Whether the file has quoted strings.<br />

File Type: Delimiter | The type of delimiter the file uses.<br />

File Type: Excel File | Whether you are importing an Excel file. Enter the worksheet location.<br />

Errors and Warnings: Stop if warnings or errors returned from import | Whether you want the macro to stop if a warning or error is returned from the import.<br />

Log File Path: Add this extension to the import file path | Specify the extension you want for the log file name. For example: Log<br />

Log File Path: Specify Log File Path (Enter Local Application Server Path) | Specify a local application server path where you want the log file to be saved.<br />

Import e.List and Rights<br />

For more information on the e.List, see "Managing User Access to Applications" (p. 93).<br />
Import e.List<br />

Use this macro step to import an e.List into your application.<br />
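A delimited e.List source file with a heading row (matching the First row is heading and Delimiter options below) can be generated with a short script. The column names and hierarchy here are purely illustrative; the columns your application requires are defined by the application itself:

```python
import csv

# Illustrative e.List rows (a simple parent/child hierarchy).
elist = [
    {"ItemName": "Total Company", "Parent": ""},
    {"ItemName": "North Region", "Parent": "Total Company"},
    {"ItemName": "South Region", "Parent": "Total Company"},
]

with open("elist.txt", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["ItemName", "Parent"],
                            delimiter="\t", quoting=csv.QUOTE_MINIMAL)
    writer.writeheader()          # first row is the heading
    writer.writerows(elist)

print(sum(1 for _ in open("elist.txt")))
```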

The following table describes the relevant parameters.<br />

Parameter | Description<br />

Macro Step Name | The name of the macro step.<br />

Application | The name of the <strong>Contributor</strong> application datastore that the macro step is being run against.<br />

e.List File Path (Enter Local Application Server Path) | The location of the e.List file.<br />

Trim leading and trailing whitespace | Whether you want to remove leading and trailing whitespace in the file.<br />

First row is heading | Whether the first row is used as the header row.<br />

Delete Undefined items | If an e.List was previously imported and you are importing a new e.List, existing settings are updated with the new specified settings. Select this check box to delete settings that do not exist in the new file. If the check box is cleared, previous settings are retained.<br />

File Type: Text File | Whether you are importing a text file.<br />

File Type: Quoted Strings | Whether the file has quoted strings.<br />

File Type: Delimiter | The type of delimiter the file uses.<br />

File Type: Excel File | Whether you are importing an Excel file. Enter the worksheet location.<br />

Errors and Warnings: Stop if warnings or errors returned from import | Whether you want the macro to stop if a warning or error is returned from the import.<br />

Log File Path: Add this extension to the import file path | Specify the extension you want for the log file name. For example: Log<br />

Log File Path: Specify Log File Path (Enter Local Application Server Path) | Specify a local application server path where you want the log file to be saved.<br />

Import Rights<br />

Use this macro step to import rights into your application.<br />

The following table describes the relevant parameters.<br />

Parameter | Description<br />

Macro Step Name | The name of the macro step.<br />

Application | The name of the <strong>Contributor</strong> application datastore that the macro step is being run against.<br />

Rights File Path (Enter Local Application Server Path) | The location of the rights file.<br />

Trim leading and trailing whitespace | Whether you want to remove leading and trailing whitespace in the file.<br />

First row is heading | Whether the first row is used as the header row.<br />

File Type: Text File | Whether you are importing a text file.<br />

File Type: Quoted Strings | Whether the file has quoted strings.<br />

File Type: Delimiter | The type of delimiter the file uses.<br />

File Type: Excel File | Whether you are importing an Excel file. Enter the worksheet location.<br />

Errors and Warnings: Stop if warnings or errors returned from import | Whether you want the macro to stop if a warning or error is returned from the import.<br />

Log File Path: Add this extension to the import file path | Specify the extension you want for the log file name. For example: Log<br />

Log File Path: Specify Log File Path (Enter Local Application Server Path) | Specify a local application server path where you want the log file to be saved.<br />

Run Deployment<br />

Use this macro step to run a Deployment Import or Deployment Export.<br />

The following table describes the relevant parameters.<br />

Parameter | Description<br />

Macro Step Name | The name of the macro step.<br />

Select the Request to run | Navigate to the deployment that you want to run.<br />

Upload a Development Model<br />

Use this macro step to load the development model XML into a development application. For information about saving the Development XML, see "Save Application XML for Support" (p. 79).<br />

Remove write access in the <strong>Contributor</strong> <strong>Administration</strong> Console from the current user before running the macro step. This macro step is typically used if you are running parallel test and live servers and you need to upload the XML from the test server to the live server. On the target server, there must be an existing application. After the development model has been uploaded, Go to Production must be run for the model to appear to Web users.<br />

This should be used only on the advice of Technical Support.<br />

The following table describes the relevant parameters.<br />

<strong>Macro Step Name</strong>: The name of the macro step.<br />

<strong>Application</strong>: Enter the name of the <strong>Contributor</strong> application datastore that the macro step is being run against.<br />

<strong>Model Definition File Path (Enter Local Application Server Path)</strong>: The name and location of the model definition file. The model definition file is a description of the entire <strong>Contributor</strong> application and is in XML format.<br />

<strong>Save any generated datastore scripts to file</strong>: A location for generated datastore scripts to be saved. When Generate Scripts is set to Yes in Admin Options, a check is made to see whether the datastore must be restructured; for example, if tables must be added or deleted, a script is generated. This datastore update script typically must be run by a database administrator (DBA).<br />

Production (Macro Steps)<br />

Production can be managed using macro steps. Macro steps can do things such as enable automated publishing or modify publish layouts.<br />

Publish<br />

Use the Automated Publish process when you need to perform a <strong>Contributor</strong> publish as a scheduled task or from the command line as part of a script. A publish can no longer target the <strong>Contributor</strong> transactional datastore.<br />

During the publish process, the published data is exported to a temp directory on the job servers. A file is created for each e.List item for each cube. After the files are created, they are typically loaded to the datastore using a bulk load utility (BCP or SQLLDR) and then the temp files are deleted.<br />

Using the Publish - View Layout - Advanced macro step, you can do an interruptible publish if you want to use different mechanisms to bulk load data into the target datastore or an external application. Interruptible publish prevents the temp files from being loaded into the datastore and deleted. They remain in the temp directory, or you can collate them into one large file per cube. For collation, each job server that may be involved in the publish job must expose a share that the computer running the macro step can access. That share must expose the TEMP folder for the user context of the <strong>Planning</strong> Service.<br />

Tip: To use interruptible publish, you must select the User-managed option from the How should the data be managed option group.<br />

For more information on publishing, see "Publishing Data" (p. 259).<br />
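The collation step described above amounts to appending the per-e.List-item files for each cube into one file. The sketch below illustrates the idea only; the share and file names are assumptions, not the product's documented naming scheme.<br />

```python
import glob
import os

# Hypothetical share layout -- the share and file names here are
# illustrative, not the product's documented naming scheme. Each job
# server exposes its Planning Service TEMP folder as a share holding
# one file per e.List item per cube.
SHARES = [r"\\jobserver1\plantemp", r"\\jobserver2\plantemp"]

def collate(cube, out_dir="collated"):
    """Append every e.List-item file for one cube into a single file."""
    os.makedirs(out_dir, exist_ok=True)
    out_path = os.path.join(out_dir, cube + ".txt")
    with open(out_path, "wb") as out:
        for share in SHARES:
            # one file per e.List item, e.g. Revenue_A1.txt
            for part in sorted(glob.glob(os.path.join(share, cube + "_*.txt"))):
                with open(part, "rb") as src:
                    out.write(src.read())
    return out_path
```

A script along these lines would run on the computer that executed the macro step, which must be able to reach the same share name on every job server involved in the publish job.<br />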


Publish - View Layout<br />

Use this macro step instead of the Publish - View Layout - Advanced macro step when you do not need to change any of the default parameters set for the Publish - View Layout - Advanced macro step.<br />

The following table describes the relevant parameters.<br />

<strong>Macro Step Name</strong>: The name of the macro step.<br />

<strong>Select Publish Container</strong>: Select a container to publish the data to.<br />

<strong>Suppress zeros</strong>: Whether to publish zeros. Selecting this option suppresses zeros. This can speed up the process of publishing data substantially, depending on the number of blank cells.<br />

<strong>Cubes to Publish</strong>: Whether to publish all or some Contribution cubes.<br />

<strong>e.List items to Publish</strong>: Whether to publish all e.List items, use the selection from the <strong>Contributor</strong> <strong>Administration</strong> Console, or select individual e.List items.<br />

Publish - View Layout - Advanced<br />

This macro step automates the view layout publish process. This layout is for historical purposes.<br />

The following table describes the relevant parameters.<br />

<strong>Macro Step Name</strong>: The name of the macro step.<br />

<strong>Select Publish Container</strong>: Select a container to publish the data to.<br />

<strong>Use Plain Number Formats</strong>: Whether numeric formatting is removed or retained. Selecting this option removes any numeric formatting. It publishes data to as many decimal places as needed, up to the limit stored on the computer. Negative numbers are prefixed by a minus sign. No thousand separators, percent signs, currency symbols, or other numeric formats set on the dimension or D-Cube are applied. Plain Number Format uses the decimal point (.) as the decimal separator.<br />

<strong>Data Filters</strong><br />

<strong>Suppress zeros</strong>: Whether to publish zeros. Selecting this option suppresses zeros. This can speed up the process of publishing data substantially, depending on the number of blank cells.<br />

<strong>Data access level to publish</strong>: The access level of data to publish, one of the following:<br />

● No data<br />

● Hidden (default)<br />

● Read<br />

● Write<br />

This option is additive, so if you select Hidden, data set to Read and Write is also published, and if you select Read, data set to Write is also published.<br />

<strong>Annotation Filters</strong>: The type of annotations to be published.<br />

<strong>How should the data be managed?</strong><br />

<strong>Automatically upload data to datastore</strong>: Whether to load data automatically into the datastore.<br />

<strong>Remove existing data</strong>: Selecting this option ensures that a consistent set of data is published. It publishes data for all the selected cubes, and removes all other published data in the datastore. Clear this option if you want to leave existing data. If an e.List item is being republished, its data is replaced with the new data.<br />

<strong>Where should the data be published to?</strong>: Whether to publish to either a default container or an alternate publish container.<br />

<strong>User Managed</strong>: Whether to take control of the published data.<br />

<strong>Publish GUIDs not Names (to upload to export tables)</strong>: Select this item if you are doing a standard publish because the GUIDs are used to load the data into the publish tables. You may want to use names rather than GUIDs if the data is to be exported to Analyst or external systems.<br />

<strong>Should files be collated</strong>: If Yes, enter the location for the Local Application Server and enter the share name to retrieve files from. This share must exist on all machines that process the job. The same share name is used for all machines.<br />

<strong>Cubes to Publish</strong>: Whether to publish all or specific Contribution cubes.<br />

<strong>Stop execution if specified Cubes not found</strong>: Selecting this option ensures that execution and processing will be halted when there are no Cubes that can be identified from the results of the selected settings.<br />

<strong>Select e.List items</strong><br />

<strong>Stop execution if no e.List items result from settings</strong>: Selecting this option ensures that execution and processing will be halted when there are no e.List items that can be identified from the results of the selected settings.<br />

<strong>e.List items to Publish</strong>: Whether to publish all e.List items, use the selection from the <strong>Contributor</strong> <strong>Administration</strong> Console, or select individual e.List items.<br />

<strong>Stop execution if specified e.List items not found</strong>: Selecting this option ensures that execution and processing will be halted when there are no e.List items that can be identified from the results of the selected settings.<br />

<strong>Apply Filters to e.List items</strong><br />

<strong>Publish e.List items changed since</strong>: Enter or select a date and time for filtering those e.List items that have since changed.<br />

<strong>e.List item type</strong>: One of the following:<br />

● <strong>Contributor</strong><br />

● Reviewer<br />

● Reviewer or <strong>Contributor</strong> (default)<br />

<strong>e.List item state</strong>: Whether to publish e.List items at any state or specify a particular state to publish.<br />

<strong>Job Timeout (in minutes)</strong>: A time period after which the macro step terminates if the job is not completed. If the job does not succeed, an error appears. The default is 1440 minutes, or one day.<br />

Publish - Table Only Layout<br />

This macro step automates the table-only layout publish process. This layout is required for the Generate Framework Manager Model extensions and the Generate Transformer Model extension (Analyst and <strong>Contributor</strong> versions for both).<br />

The following table describes the relevant parameters.<br />


<strong>Macro Step Name</strong>: The name of the macro step.<br />

<strong>Select Publish Container</strong>: Select a container to publish the data to.<br />

<strong>Use persisted parameters for all settings</strong>: Whether to use the same parameters for all settings each time the macro step is run.<br />

<strong>Data options / Column Options</strong><br />

<strong>Create columns with data types based on the 'dimension for publish'</strong>: Determines the column data types from the model using the selected 'dimension for publish'. This option can minimize the default number of columns, although non-uniform data will not be published. For example, row data will be filtered when a value is inconsistent with the model and column data type.<br />

<strong>Only create the following columns</strong>: This option will always publish the selected columns from the 'dimension for publish'. Use this option to publish only the required data of the selected types. Selecting the numeric, date, and text options will ensure all data (uniform and non-uniform) is published.<br />

<strong>Include rollups</strong>: Selecting this check box includes all items, including calculated items. Clearing this option only publishes leaf items, and therefore fewer rows. You can recreate the calculation in your reporting tools by linking the et and sy tables.<br />

<strong>Include zero or blank values</strong>: Whether to include zero or blank values in the publish. This option suppresses rows containing all zeros or blanks. This can speed up the process of publishing data substantially, depending on the number of zero or blank cells.<br />

<strong>Prefix column names with data type</strong>: Whether to prefix column names with the data type. Select this option if you wish the column name to be prefixed with the data type to avoid reserved name conflicts.<br />

<strong>Table options</strong><br />

<strong>Include user annotations</strong>: Whether to include these annotations in the publish.<br />

<strong>Include audit annotations</strong>: Whether to include these annotations in the publish.<br />

<strong>Include attached documents</strong>: Whether to include attached documents in the publish.<br />

<strong>Cubes to publish</strong>: Whether to publish all or specific Contribution cubes.<br />

<strong>Dimensions for publish</strong>: Whether to use the default dimension for publish or specify a particular dimension.<br />

<strong>Select e.List items</strong> / <strong>e.List items to Publish</strong>: Whether to publish all e.List items, use the selection from the <strong>Contributor</strong> <strong>Administration</strong> Console, or select individual e.List items.<br />

<strong>Job Monitoring</strong><br />

<strong>Timeout (in minutes)</strong>: A time period after which the macro step terminates if the job is not completed. If the job does not succeed, an error appears. The default is 1440 minutes.<br />

<strong>Description</strong>: A description for this reporting job.<br />

Publish - Incremental Publish<br />

Use this macro step to publish data so that only the e.List items that contain changed data are published. If your publish selection contains more than one cube, but values change in only one cube, the changed e.List items for all the cubes are republished. Before an incremental publish can be run, the publish schema must be created, either by doing a full publish, selecting the cubes and e.List items that you want to publish, or by generating and running publish scripts. When the Go to Production process is run, changes-only publishes are suspended. Model changes that result in changes to the publish schema may mean that you need to do a full publish of all the selected cubes and e.List items.<br />
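The changed-only selection rule can be sketched as follows. This is a simplified illustration of the behaviour just described, not the product's implementation, and the data structures are assumptions.<br />

```python
# Simplified illustration of the incremental selection rule (not the
# product's implementation): an e.List item that changed in ANY selected
# cube is republished for ALL selected cubes.
def items_to_republish(changed, selection):
    """changed maps (cube, elist_item) -> True when data changed since the
    last publish; selection holds the chosen cubes and e.List items."""
    dirty_items = {item for (cube, item), is_dirty in changed.items()
                   if is_dirty
                   and cube in selection["cubes"]
                   and item in selection["elist_items"]}
    # Each changed item is republished for every cube in the selection.
    return sorted((cube, item)
                  for cube in selection["cubes"]
                  for item in dirty_items)
```
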

<strong>Macro Step Name</strong>: The default name of this macro step.<br />

<strong>Where should the data be published to?</strong>: Choose either to publish data to the default container or specify an alternate publish container.<br />

<strong>Reporting Publish Container</strong><br />

<strong>e.List Item Filter</strong>: Select the check box if you wish to publish only submitted e.List items.<br />

<strong>Job Monitoring</strong><br />

<strong>Timeout (in minutes)</strong>: A time period after which the macro step terminates if the job is not completed. If the job does not succeed, an error appears. The default is 1440 minutes.<br />

Delete Commentary<br />

Use this macro step to delete user or audit annotations and attached documents in a <strong>Contributor</strong> application using date and time, character string, and e.List item name filters.<br />

<strong>Contributor</strong> applications can be annotated by users in the Web application. There are user and audit annotations.<br />

User annotations consist of comments per cell, cube (tab in the Web client), and model.<br />

Audit annotations are records of user actions in the Web client, such as typing data, importing files, and copying and pasting data. They can be enabled or disabled. For more information, see "Delete Commentary" (p. 289).<br />

When you delete commentary, the following process occurs:<br />

❑ The macro step fetches and unpacks the model definition (this is a description of the entire <strong>Contributor</strong> application).<br />

❑ It processes the commentary for each e.List item in turn, deleting the specified comments.<br />

Tip: Adding a Wait for Any Jobs macro step before the Delete Commentary macro step ensures that all jobs are complete before moving on to this macro step. For more information on the Wait for Any Jobs macro step, see "Wait for Any Jobs" (p. 201).<br />
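The deletion filters this step applies (a date cutoff, a plain substring content match, and a semicolon-separated, case-sensitive e.List item list) behave roughly as sketched below. The function names are illustrative, not part of the product.<br />

```python
from datetime import datetime

# Illustrative sketch of the deletion filters (names are not from the
# product). A comment is deleted only if it passes every active filter.
def parse_elist_filter(text):
    """Split a semicolon-separated list such as "A1;A2;B"; names are
    case sensitive, so no normalization is applied."""
    return text.split(";")

def should_delete(comment, created, cutoff=None, content=None,
                  items=None, item=None):
    if cutoff is not None and created >= cutoff:
        return False  # keep comments created on or after the cutoff date
    if content is not None and content not in comment:
        return False  # plain substring match: "pen" also hits "open"
    if items is not None and item not in items:
        return False  # e.List item not in the filter list
    return True

cutoff = datetime.fromisoformat("2008-06-01T00:00:00")  # dates are ISO 8601
```

Note how the substring match reproduces the guide's warning: a content filter of pen also deletes annotations containing pencil or open.<br />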

The following table describes the relevant parameters.<br />

<strong>Macro Step Name</strong>: The name of the macro step.<br />

<strong>Application</strong>: The name of the <strong>Contributor</strong> application datastore that the macro step is being run against.<br />

<strong>Annotation Filters</strong><br />

<strong>Include user annotations in the operation</strong>: Whether user annotations are deleted. It is selected by default.<br />

<strong>Include audit annotations in the operation</strong>: Whether records of user actions are deleted.<br />

<strong>Apply date filter</strong>: Whether to delete commentary by a date filter. If selected, you must enter the date and time before which all annotations are deleted. This is the date when annotations were created, not saved. Dates are in ISO8601 format.<br />

<strong>Apply content filter</strong>: Whether to delete commentary using a content filter. This is off by default. If selected, you must enter a character string as a filter. Important: All commentary containing the character pattern specified is deleted. For example, if you specify the string pen, annotations containing the words pencil and open are deleted.<br />

<strong>e.List items to process</strong>: You can specify which e.List items to process. The e.List item name is in the elistitemname column in the e.List import file or e.List Item Id in the <strong>Contributor</strong> <strong>Administration</strong> Console. The names are case sensitive. The following example contains a mixture of contributor e.List items (A1 and A2) and a reviewer e.List item (B): A1;A2;B. Alternatively, you can choose to process all e.List items.<br />

<strong>Job Timeout (in minutes)</strong>: A time period after which the macro step terminates if the job is not completed. The default is 1440 minutes, or one day.<br />

Execute an Admin Extension<br />

Use this macro step to automate the running of the Generate Transformer Model extension.<br />

To automate the Generate Transformer Model extension, you must first run the extension using the <strong>Contributor</strong> <strong>Administration</strong> Console. This creates valid settings in the application datastore. The macro then uses these settings.<br />

The following table describes the relevant parameters.<br />

<strong>Macro Step Name</strong>: The name of the macro step.<br />

<strong>Application</strong>: The name of the <strong>Contributor</strong> application datastore that the macro step is being run against.<br />

<strong>Admin Extension</strong>: The Admin Extension that you want to run.<br />


Set an Application Online or Offline<br />

Use this macro step to automate setting the Web application online or offline, preventing people from accessing the Web site.<br />

For more information about accessing the Web site, see "Working Offline" (p. 89).<br />

The following table describes the relevant parameters.<br />

<strong>Macro Step Name</strong>: The name of the macro step.<br />

<strong>Application</strong>: The name of the <strong>Contributor</strong> application datastore that the macro step is being run against.<br />

<strong>Enable Web Client Barrier</strong>: Whether to prevent users from accessing the <strong>Contributor</strong> application on the Web. Select this option to take the application offline. Clear it to bring the application back online.<br />

Administrator Links (Macro Steps)<br />

Execute <strong>Administration</strong> Link<br />

Use this macro step to run an <strong>Administration</strong> Link automatically. <strong>Administration</strong> Links copy data between <strong>Contributor</strong> applications in the same <strong>Planning</strong> Store without having to publish data first.<br />

Note: You must first create a valid <strong>Administration</strong> Link for the application. For more information, see "<strong>Administration</strong> Links" (p. 147).<br />

Tip: Adding a Wait for Any Jobs macro step before and after the Execute <strong>Administration</strong> Link macro step ensures that all jobs are complete before and after this macro step runs. For more information on the Wait for Any Jobs macro step, see "Wait for Any Jobs" (p. 201).<br />

The following table describes the relevant parameters.<br />

<strong>Macro Step Name</strong>: The name of the macro step.<br />

<strong>Administration Link</strong>: The <strong>Administration</strong> Link to run.<br />

<strong>Job Timeout (in minutes)</strong>: A time period after which the macro step terminates if the job is not completed. If the job does not succeed, an error appears. The default is 1440 minutes, or one day.<br />

Validate <strong>Administration</strong> Links<br />

Use this macro step to validate an administration link automatically. <strong>Administration</strong> links copy data between <strong>Contributor</strong> applications in the same <strong>Planning</strong> Store without having to publish data first. If you have an administration link based on an Analyst A-Table, you should validate the link if a synchronize with Analyst has been done, or if there have been any changes to the A-Table in Analyst.<br />

Note: You must first create a valid administration link using an A-Table for the application. For more information, see "<strong>Administration</strong> Links" (p. 147).<br />

The following table describes the relevant parameters.<br />

<strong>Macro Step Name</strong>: The name of the macro step.<br />

<strong>Administration Link</strong>: The administration link or links to run.<br />

<strong>Job Timeout (in minutes)</strong>: A time period after which the macro step terminates if the job is not completed. If the job does not succeed, an error appears. The default is 1440 minutes, or one day.<br />

Synchronize <strong>Administration</strong> Links<br />

Use this macro step to synchronize an administration link automatically. <strong>Administration</strong> links copy data between <strong>Contributor</strong> applications in the same <strong>Planning</strong> Store without having to publish data first. When you create an administration link based on an Analyst A-Table, there is always the possibility that the underlying A-Table in Analyst can change over time. You can synchronize to ensure the A-Table you have used in an administration link is up to date.<br />

Note: You must first create a valid administration link using an A-Table for the application. For more information, see "<strong>Administration</strong> Links" (p. 147).<br />

The following table describes the relevant parameters.<br />

<strong>Macro Step Name</strong>: The name of the macro step.<br />

<strong>Administration Links</strong>: The administration link or links to run.<br />

<strong>Job Timeout (in minutes)</strong>: A time period after which the macro step terminates if the job is not completed. If the job does not succeed, an error appears. The default is 1440 minutes, or one day.<br />

Macros (Macro Steps)<br />

Macros are automated tasks such as those performed in the <strong>Contributor</strong> <strong>Administration</strong> Console. You can automate many macro-specific functions, such as running and importing macros.<br />

Macro Doctor<br />

Use this macro step to generate a report on macros for debugging purposes. In the event of problems with macros, you may be asked by customer support to create and run the Macro Doctor macro. The Macro Doctor captures information about the macros. It also allows you to see more detail about the execution steps, and write them to files so that they can be inspected. Those macro step definitions may be imported into another system, if required.<br />

The following table describes the relevant parameters.<br />

<strong>Macro Step Name</strong>: The name of the macro step.<br />

<strong>Folder for report and Macro Steps (Enter Local Application Server Path)</strong>: The location where the report and macro step information is created.<br />

<strong>Include detailed progress for Macro Steps</strong>: Whether to include detailed progress information for each macro step. This extra information can often aid the debugging process.<br />

Macro Test<br />

Use this macro step to test if the macro components are running correctly. When successfully run, it logs a user-specified Windows Application Event Log message that can be modified.<br />

The following table describes the relevant parameters.<br />

<strong>Macro Step Name</strong>: The name of the macro step.<br />

<strong>Message</strong>: A message to be logged.<br />

Execute Macro<br />

Use this macro step to run another macro automatically. This macro step is useful because you can nest many macros inside one macro. For example, if you have weekly or monthly processes that share macro steps, such as import and publish, you can use the Execute Macro step to run them both.<br />

The following table describes the relevant parameters.<br />

<strong>Macro Step Name</strong>: The name of the macro step.<br />

<strong>Macro</strong>: The macro to run. Do not select the same macro that you are adding this macro step to.<br />

<strong>Number of times</strong>: How many times the macro should run. The default is 1.<br />

Execute Command Line<br />

Use this macro step to run any program from the command line.<br />

Important: Appropriate Access Rights need to be granted in order to use this macro step. For more information, see "Access Rights for Macros" (p. 42).<br />
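The Check for Return Code parameter described below amounts to comparing the launched program's exit code with an expected success code. A minimal sketch of that behaviour (the helper name is illustrative, not part of the product):<br />

```python
import subprocess

# Minimal sketch of the Success Return Code behaviour: run the command
# and fail the step unless the exit code matches the expected one
# (typically zero). Ignore Return Code corresponds to check=False.
def run_command_step(command, success_code=0, check=True):
    result = subprocess.run(command, shell=True)
    if check and result.returncode != success_code:
        raise RuntimeError("macro step failed: exit code %d" % result.returncode)
    return result.returncode
```
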

The following table describes the relevant parameters.<br />

<strong>Macro Step Name</strong>: The name of the macro step.<br />

<strong>Command to Execute</strong>: The Command to run.<br />

<strong>Check for Return Code</strong>: One of the following:<br />

● Ignore Return Code: ignores the return code from the program.<br />

● Success Return Code: the macro step fails if the program returns any code other than the one specified. Typically, success is represented by zero.<br />

Import Macros from Folder<br />
Use this macro step to import a macro that has previously been exported.<br />

Tip: It is possible to modify an exported macro step’s XML file with other editors and then import it again using this macro step.<br />

The following table describes the relevant parameters.<br />

<strong>Macro Step Name</strong>: The name of the macro step.<br />

<strong>Select Source folder</strong>: The location of the macro to be imported.<br />

<strong>Process sub folders</strong>: One of the following:<br />

● Only direct sub folders<br />

● All sub folders<br />

<strong>How should duplicate names be handled</strong>: One of the following:<br />

● Replace<br />

● Append<br />

● Create different name<br />

Export Macros to Folder<br />

Use this macro step to export a macro to a file location.<br />

The following table describes the relevant parameters.<br />

<strong>Macro Step Name</strong>: The name of the macro step.<br />

<strong>Root folder (Enter Local Application Server Path)</strong>: The location where the macro is exported to.<br />

<strong>Manage Existing Files</strong>: One of the following:<br />

● Archive: use this to retain a history of your macros.<br />

● Remove<br />

● Leave<br />

Session (Macro Steps)<br />

Remove Application Lock<br />

Use this macro step to remove an Application Lock. For more information, see "Managing Sessions" (p. 61).<br />

The following table describes the relevant parameters.<br />

<strong>Macro Step Name</strong>: The name of the macro step.<br />

<strong>Application</strong>: The name of the <strong>Contributor</strong> application that is locked.<br />

Running a Macro<br />

There are a number of scheduled and ad hoc methods that can be used to run a macro. You can run a macro in the following ways:<br />

● "Run a Macro from <strong>Administration</strong> Console" (p. 225)<br />

● "Run a Macro from IBM Cognos Connection" (p. 225)<br />

● "Run a Macro from an IBM Cognos 8 Event" (p. 226)<br />

● "Run a Macro using Macro Executor" (p. 227)<br />

● "Run a Macro using Command Line" (p. 228)<br />

● "Run a Macro using Batch File" (p. 228)<br />

For more information about the execution location and credentials for <strong>Contributor</strong> macros, see below.<br />

<strong>Administration Console</strong><br />

Execution location: <strong>Planning</strong> Server with Dispatcher <strong>Planning</strong> Service enabled. Credentials: the user logged on to the <strong>Contributor</strong> <strong>Administration</strong> Console.<br />

<strong>IBM Cognos Connection</strong><br />

Execution location: <strong>Planning</strong> Server with Dispatcher <strong>Planning</strong> Service enabled. Credentials: IBM Cognos Connection credentials.<br />

<strong>IBM Cognos 8 Event</strong><br />

Execution location: <strong>Planning</strong> Server with Dispatcher <strong>Planning</strong> Service enabled. Credentials: the IBM Cognos Connection credentials used to create the event.<br />

<strong>Macro Executor</strong><br />

Execution location: the local machine where the Macro Executor is running. The machine must have the <strong>Planning</strong> Server installed. Credentials: the Scheduler Credentials in the System Settings of the <strong>Contributor</strong> <strong>Administration</strong> Console.<br />

<strong>Command line</strong><br />

Execution location: the local machine where the command line execution is running. The machine must have the <strong>Planning</strong> Server installed. Credentials: the Scheduler Credentials in the System Settings of the <strong>Contributor</strong> <strong>Administration</strong> Console.<br />

<strong>Batch file</strong><br />

Execution location: the local machine where the batch file is running. The machine must have the <strong>Planning</strong> Server installed. Credentials: the Scheduler Credentials in the System Settings of the <strong>Contributor</strong> <strong>Administration</strong> Console.<br />
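For the command line and batch file methods, the pattern is simply to shell out to the macro execution command on a machine that has the <strong>Planning</strong> Server installed, then act on its exit code. In the sketch below, the executable path is a placeholder, not documented syntax; substitute the actual Macro Executor command from your own installation.<br />

```python
import subprocess

# PLACEHOLDER path, not the documented command syntax; substitute the
# Macro Executor command from your own Planning Server installation.
MACRO_EXECUTOR = r"C:\Program Files\cognos\c8\bin\MacroExecutor.exe"

def run_macro(macro_name, executor=MACRO_EXECUTOR):
    """Launch the executor and hand back its exit code, so a scheduled
    task or batch file can tell success from failure."""
    return subprocess.run([executor, macro_name]).returncode
```

Because execution happens under the Scheduler Credentials configured in the System Settings of the <strong>Contributor</strong> <strong>Administration</strong> Console, make sure those credentials are set before scheduling such a script.<br />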


Run a Macro from <strong>Administration</strong> Console<br />

You can run a macro using the Macro tool in the <strong>Administration</strong> Console.<br />

Steps<br />

1. In the <strong>Contributor</strong> <strong>Administration</strong> tree, click the Macros icon.<br />

2. In the Macros list, select the macro you want to run and click Execute.<br />

A dialog box appears informing you that the macro is running.<br />

Tips: You can monitor the progress of the macro in the Macro Steps list. You can view any error messages by clicking Error Details. You can stop a macro by clicking Stop. The macro will stop before the next macro step begins.<br />

Run a Macro from IBM Cognos Connection<br />

After a macro is published to IBM Cognos Connection, you can run the macro or create a job, and use the macro or job in an event created in Event Studio.<br />

You must first create a new macro or transfer an existing macro and publish it to IBM Cognos Connection; see "Creating a Macro" (p. 194).<br />

To secure access to <strong>Contributor</strong> macros in IBM Cognos Connection, see "Set Access Rights for<br />

<strong>Contributor</strong> Macros in IBM Cognos Connection" (p. 43).<br />

Steps<br />

1. In IBM Cognos Connection, in the upper-right corner, click IBM Cognos <strong>Administration</strong>.<br />

2. Click the Configuration tab and then click Content <strong>Administration</strong>.<br />

3. Click <strong>Planning</strong> and then click Macros.<br />

4. Set the general properties and permissions, see the IBM Cognos 8 <strong>Administration</strong> and Security<br />

<strong>Guide</strong>.<br />

5. To run the macro immediately or schedule it to run at a specified time, click Run with options<br />

and select to run now or later. If you select later, choose a day and time to execute the<br />

macro and click OK.<br />

6. To create a recurring schedule to run the macro, click Schedule.<br />

7. Under Frequency, select how often you want the schedule to run.<br />

The Frequency section is dynamic and changes with your selection. Wait until the page is<br />

updated before selecting the frequency.<br />

8. Under Start, select the date and time when you want the schedule to start.<br />

9. Under End, select when you want the schedule to end.<br />

Tip: If you want to create the schedule but not apply it right away, select the Disable the<br />

schedule check box. To later enable the schedule, clear the check box.<br />

Chapter 12: Automating Tasks Using Macros<br />

<strong>Administration</strong> <strong>Guide</strong> 225



10. Click OK.<br />

The macro schedule is created and the macro runs at the next scheduled time.<br />

Run a Macro from an IBM Cognos 8 Event<br />

226 <strong>Contributor</strong><br />

You can create events that run <strong>Contributor</strong> macros when specified conditions are met. For example,<br />

you can move data between applications using an Event Studio Agent to trigger a <strong>Contributor</strong><br />

macro that uses an administration link.<br />

When you specify an event condition, you describe specific occurrences that an agent must detect<br />

before it performs its tasks. The event condition is a query expression that you create using items<br />

from the package.<br />

Task execution rules specify when a task is performed. By default, a task is performed for new<br />

instances of events and all ongoing instances of events, but you can change this.<br />

You specify the task execution rules separately for each task in the agent.<br />

For more information about creating an event and agent, see the IBM Cognos 8 Business Intelligence<br />

Event Studio User <strong>Guide</strong>.<br />

Steps<br />

1. In Event Studio, click the Actions menu and then click Specify Event Condition.<br />

2. Create a detail expression, a summary expression, or both by doing the following:<br />

● If you want part of the event condition to apply to values of individual source items, click<br />

the Detail tab and follow step 3.<br />

● If you want part of the event condition to apply to aggregate values, click the Summary<br />

tab and follow step 3.<br />

3. In the Expression box, create a query expression by doing the following:<br />

● Type text or drag items from the source tab.<br />

● Type text or drag operators, summaries, and other mathematical functions from the functions tab.<br />

Tip: To see the meaning of an icon on the functions tab, click the icon and read the<br />

description in the Information box.<br />

4. If you want to check the event list to ensure that you specified the event condition correctly,<br />

from the Actions menu, click Preview.<br />

5. If you want to know how many event instances there are, from the Actions menu, click Count<br />

Events.<br />

6. From the File menu, click Save As.<br />

7. Specify a name and location for the agent and click OK.<br />

8. In the I want to area, click Add a task.


9. Click Advanced.<br />

10. Click Run a planning macro task.<br />

11. In the Select the planning macro dialog box, specify the task to include in the agent by searching<br />

the folders to find the task you want and clicking the entry.<br />

12. Under Run the planning macro task for the events, review the event status that will cause the<br />

task to be run.<br />

13. From the File menu, click Save.<br />

If you want to add other events, see IBM Cognos 8 Business Intelligence Event Studio User<br />

<strong>Guide</strong>.<br />

14. In the I want to area, click Manage the task execution rules.<br />

15. On the source tab, click one or more data items that uniquely define an event and drag<br />

them to the Specify the event key box.<br />

16. Click Next.<br />

17. On the Select when to perform each task page, do the following:<br />

● In the Tasks box, click the task that the agent will perform for the event statuses you specify.<br />

● Under Perform the selected task for, select one or more event status values.<br />

18. If you want to manage the execution rules for another task, repeat step 4.<br />

19. Click Finish.<br />

The execution rules for each task you selected are set.<br />

Tip: If you want to reset the execution rules for every task in the agent to the default values,<br />

from the Actions menu, click Remove Task Execution Rules. Each task is reset to be performed<br />

for new instances of events and all ongoing instances of events.<br />

20. Save the agent.<br />

Run a Macro using Macro Executor<br />

The Macro Executor is a program that takes the macro step and creates a component that performs the action. After the component is created, the macro step is passed to it and then run.<br />

The Macro Executor is typically installed to C:\Program Files\Cognos\C8\bin\epMacroExecutor.exe.<br />

Note: Double-clicking epMacroExecutor.exe displays the available command line options.<br />

Important: The Macro Executor must be installed on the computer that you are trying to run it from. You cannot execute it remotely.<br />

The Macro Step Plug-in<br />

Each macro step is associated with a macro step plug-in. In addition to running the specified functionality, the plug-in validates the structure of the macro step.<br />


Run a Macro using Command Line<br />

You can run the macro by typing the following from the command line, substituting the appropriate<br />

macro name:<br />

"C:\Program Files\Cognos\C8\bin\epMacroExecutor.exe" MacroName<br />

Important: If the macro name has spaces in it, you must enclose it in quotes.<br />
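The same invocation can be scripted outside of a batch file. The following is a minimal sketch of a Python wrapper; the installation path and the exit-code meanings (0 = success, 1 = exception while setting up, 2 = exception while executing) are taken from the batch example in this chapter, while the wrapper functions themselves are hypothetical.

```python
# Sketch: invoking the Macro Executor from a script instead of a batch file.
# The path and exit-code meanings follow the batch example in this chapter;
# the wrapper itself is a hypothetical illustration.
import subprocess

EXECUTOR = r"C:\Program Files\Cognos\C8\bin\epMacroExecutor.exe"

def run_macro(name, exe=EXECUTOR):
    # Passing the macro name as a separate argument is equivalent to quoting
    # it on the command line, so names with embedded spaces are handled.
    return subprocess.run([exe, name]).returncode

def describe_exit(code):
    # Maps the Macro Executor exit codes to the messages used in the
    # batch example.
    return {
        0: "Succeeded",
        1: "Exception detected in setting up macro",
        2: "Exception detected in executing macro",
    }.get(code, "Unknown exit code")

print(describe_exit(0))  # Succeeded
```

Remember that the macro name is case sensitive, so pass it exactly as it appears in the Administration Console.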

Run a Macro using Batch File<br />

You can use the Windows built-in Scheduler (Control Panel, Scheduled Tasks) to run a batch file on a schedule.<br />

Important: If a job is scheduled to start while Go to Production is running for an application, the job fails.<br />

The following is an example of a batch file (.bat) that can be used to run the macro:<br />

"C:\Program Files\Cognos\C8\bin\epMacroExecutor.exe"<br />

MacroName<br />

IF ERRORLEVEL 1 GOTO ExceptionDetectedSettingUpLabel<br />

IF ERRORLEVEL 2 GOTO ExceptionDetectedExecutingLabel<br />

ECHO Succeeded<br />

GOTO EndLabel<br />

:ExceptionDetectedSettingUpLabel<br />

ECHO Exception detected in setting up macro<br />

GOTO EndLabel<br />

:ExceptionDetectedExecutingLabel<br />

ECHO Exception detected in executing macro<br />

GOTO EndLabel<br />

:EndLabel<br />

PAUSE<br />

Microsoft Calendar Control<br />

The Macros tool uses the Microsoft Calendar Control for date selection.<br />

If you want to enable the use of the calendar graphic for date selection, this calendar control (MSCAL.OCX) must be installed and registered on the computer on which the <strong>Administration</strong> Console runs. This control is available in the Microsoft Office suite or from Microsoft Visual Studio.<br />

Date Formats<br />

The format of dates used in the Macro step is as follows:<br />

yyyy-mm-ddThh:nn:ss.ttt+00:00<br />

The format is ISO 8601, where:<br />

● yyyy = year<br />

● mm = month<br />

● dd = day of the month<br />


● T = the separator that signifies the start of the time portion<br />

● hh = hour<br />

● nn = minutes<br />

● ss = seconds<br />

● ttt = milliseconds<br />

● +00:00 = the time zone offset in hours and minutes, relative to UTC (Coordinated Universal Time), also known as GMT (Greenwich Mean Time)<br />

Here is an example:<br />

2002-11-13T15:10:31.663+00:00<br />
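Producing a timestamp in this format is straightforward in most languages. A minimal Python sketch (the helper name is ours, not part of the product) that formats a UTC datetime with millisecond precision and an explicit +00:00 offset:

```python
# Sketch: format a datetime in the ISO 8601 form used by macro steps,
# yyyy-mm-ddThh:nn:ss.ttt+00:00. The helper name is hypothetical.
from datetime import datetime, timezone

def macro_timestamp(dt):
    """Return dt as yyyy-mm-ddThh:nn:ss.ttt+00:00 (milliseconds, UTC)."""
    dt = dt.astimezone(timezone.utc)
    # strftime's %f gives microseconds; keep only the first three digits
    # (milliseconds) and append the fixed UTC offset.
    return dt.strftime("%Y-%m-%dT%H:%M:%S.") + f"{dt.microsecond // 1000:03d}" + "+00:00"

example = datetime(2002, 11, 13, 15, 10, 31, 663000, tzinfo=timezone.utc)
print(macro_timestamp(example))  # 2002-11-13T15:10:31.663+00:00
```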

Troubleshooting Macros<br />

You can monitor the status of macros within the <strong>Contributor</strong> <strong>Administration</strong> Console. If a macro<br />

fails, the Error Details button is enabled. Click this button to display information about why the<br />

macro failed. You can also find error messages in the error log.<br />

Unable to Run <strong>Contributor</strong> Macros Using a Batch File<br />

If you are running a <strong>Contributor</strong> macro using a batch file and it does not work, check the following:<br />

❑ Can you run the macro using the <strong>Contributor</strong> <strong>Administration</strong> Console?<br />

If not, check that the parameters are correct.<br />

❑ Is the batch file syntax correct?<br />

"C:\Program Files\Cognos\c8\bin\epMacroExecutor.exe" MacroName<br />

Check that the correct filename is used in the actual macro step. If the file is open when the<br />

batch command is run, the command fails and returns an error code of 1.<br />

Check that the correct case is used for the macro name. The macro name is case sensitive.<br />

Look in the TEMP folder of the <strong>Planning</strong> Service user context on the machine running the <strong>Planning</strong> Service where the macro was executed. If you see the message "Permission denied" in the Error Description column, your security may not be set up correctly. For more information about security, see "Security" (p. 29).<br />


Note: You can only run macros from the command line on a computer with a server install.<br />



Chapter 13: Data Validations<br />

Data validation is the process of aligning plans with targets by enforcing business rules and policies.<br />

Use the data validation feature in IBM Cognos 8 <strong>Planning</strong> to define rules that ensure that incoming<br />

data in a <strong>Contributor</strong> application is in the right format and conforms to existing business rules.<br />

Building data validations involves defining a business rule that specifies the criteria that user input<br />

must meet for the entry to be accepted.<br />

A validation rule represents a single data entry requirement imposed on a range of cells in a single<br />

cube of a model. This requirement is expressed as a rule or boolean formula (true or false) that<br />

identifies invalid data entries when contributors or reviewers attempt to save or submit a plan. A<br />

rule set is a collection of rules that can be associated with e.Lists and fail actions.<br />

Data validation in IBM Cognos 8 <strong>Planning</strong> has the following benefits:<br />

● You can apply different rules to different e.List items.<br />

● It reduces the number of conditional IF-THEN-ELSE formula flags in an Analyst model.<br />

● It centralizes the definitions of data validation rules.<br />

Validation Methods<br />

IBM Cognos 8 <strong>Planning</strong> provides these methods for validating data:<br />

● presence check<br />

Validates input into empty numeric or text cells. This method checks that critical data is present<br />

and was not omitted from the plan. For example, contributors must enter forecast data for<br />

product sales or provide an explanatory note for variances.<br />

● dependencies<br />

Validates a cell based on the values in other cells, using single or compound conditions. For example, contributors must enter an explanation into a text cell for any capital request that exceeds $25,000, or for a capital request in the Other category greater than $25,000.<br />

● business rule compliance<br />

Ensures that the data entered conforms to the business rules. For example, the pay range for new hires must not exceed the top of the pay scale.<br />

● single numeric validations, with variation by e.List item<br />

For example, all Division A profit centers must have a forecast profit margin that is equal to<br />

or exceeds 20% and a profit that exceeds $250,000. All Division B profit centers must have a<br />

forecast profit margin that is equal to or exceeds 15% and a profit that exceeds $0.<br />

Validation Triggers<br />

Validation rules are run on the <strong>Contributor</strong> Web client or on <strong>Contributor</strong> for Excel. A rule is<br />

evaluated under one or more of the following conditions:<br />


● automatically, when a contributor saves a plan or when a reviewer submits a plan<br />

● manually, when a contributor or reviewer selects the Validate Data option from the File menu<br />

or the Validate Data toolbar button in the Web client<br />

● manually, when a contributor or reviewer selects the Validate Data option from the <strong>Contributor</strong><br />

menu in Excel<br />

When one of these triggers occurs, the rule formula is evaluated to either pass or fail. If any of the<br />

rules in the rule set detects a failure during the evaluation, the rule set is considered to have failed<br />

and the fail action specified in the rule set is performed. The contributor or reviewer may be prevented<br />

from saving or submitting the plan.<br />
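The pass/fail logic described above can be sketched in a few lines. The following is a simplified illustration only; the rule formulas, cell values, and the evaluation function are hypothetical, not the actual Contributor engine:

```python
# Sketch: a rule set fails if any of its rules fails, and the rule set's
# fail action is then applied. Rules, values, and names are hypothetical.

MESSAGE_ONLY = "Message Only"
RESTRICT_SUBMIT = "Restrict Submit"

def evaluate_rule_set(rules, cells):
    """rules: list of (message, predicate). Returns the messages of failed
    rules. Hidden or empty cells (None) are not validated."""
    values = [v for v in cells if v is not None]
    return [msg for msg, pred in rules if not all(pred(v) for v in values)]

rules = [
    ("Forecast margin must be at least 15%", lambda v: v >= 0.15),
]
failed = evaluate_rule_set(rules, [0.12, 0.30, None])

# If any rule failed, the whole rule set fails and its fail action applies.
action = RESTRICT_SUBMIT if failed else None
print(failed, action)
```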

Setting Up Data Validation<br />


You administer and maintain data validation in the <strong>Contributor</strong> <strong>Administration</strong> Console. The Data<br />

Validations branch under the Applications node includes specific components that are required at<br />

each stage of the data validation workflow.<br />

Important: Rules that were built in versions prior to IBM Cognos <strong>Planning</strong> 8.2 are incompatible with the current product version and must be redefined.<br />

The process flow for defining validations is as follows:<br />

❑ Plan how the rule applies to a reviewer, contributor, or both a reviewer and contributor (p. 233)<br />

For reviewers, define validation rules against post-aggregate D-Lists. Because reviewers are<br />

managing the aggregate of their contributors, the post-aggregate calculations, which are results<br />

from all e.List items, are applicable only to the reviewer.<br />

For contributors, define validation rules against pre-aggregation D-Lists (before the e.List).<br />

For contributors and reviewers, define validation rules against post-aggregate calculations.<br />

❑ Synchronize the <strong>Contributor</strong> applications with the Analyst models<br />

Ensure that all cubes in an application are updated when the underlying objects in Analyst<br />

change. Changes to the model may include renaming dimensions or adding, deleting, or<br />

renaming dimension items. By synchronizing, you can import from Analyst the updated cube<br />

definition in the application.<br />

❑ Define one or more rules (p. 237)<br />

In the Data Validations, Rules folder, use the Validation Rule wizard to define the validation rules. Specify the rule message that appears when data fails validation, so that contributors or reviewers can react to the failed entry. You also specify the D-Cube to which the validation applies, the measures dimension whose items are used to define the rule formula, and the scope or target range for validation.<br />

❑ Define one or more rule sets (p. 239)<br />

After you create a rule set, you must add at least one rule to the rule set.<br />

In the Data Validations, Rule Sets folder, you can create rule sets by adding one or more rules,<br />

and assigning fail actions. A rule set applies to a single data validation process.


❑ Associate the rule sets with groups of e.List items (p. 240)<br />

In the Data Validations, Rule Set e.List Items folder, associate the rule sets with the groups of e.List items. Specify the roles, such as contributors, reviewers, or contributors and reviewers and their subordinates, to which the rule set applies.<br />

❑ Run the Go to Production process<br />

Perform this process to make the application, including its new business rules and data format<br />

constraints, available to users on the Web client and <strong>Contributor</strong> for Excel.<br />

The Impact of Aggregation on Validation Rules<br />

Before creating a validation rule, it is important to understand how the aggregation of totals works<br />

for a <strong>Contributor</strong> application. For <strong>Contributor</strong> models, data is stored according to the e.List item.<br />

For a contributor, only one e.List item exists. That means calculations that occur after the e.List<br />

item must include only their own e.List item.<br />

For a reviewer, the data is an aggregation of all the e.List items that roll up into that reviewer.<br />

Therefore, the model may contain sums of the values for the e.List items (pre-aggregation), and other calculations that are based on the sum of the values after the e.List items are aggregated (post-aggregation).<br />

Because an e.List item for a reviewer is an aggregation of multiple e.Lists, the calculations that<br />

appear in the D-Lists after the e.List dimension must include data from all the e.List items that roll<br />

up into the e.List item for that reviewer. These calculations change when the lower-level e.List items<br />

save data. We recommend aggregating the values of the calculations before the e.List dimension<br />

(pre-aggregation calculations) because the post-aggregation calculations are recalculated when data<br />

changes for one of the e.List items that is aggregated. The reason is that the pre-aggregation calculations belong within a single e.List item and the data from other e.List items does not affect the<br />

calculation result.<br />
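The difference between the two approaches can be shown with a small numeric sketch. The values are hypothetical; the 50,000 threshold mirrors the RollupTest example that follows:

```python
# Sketch: pre- vs. post-aggregation calculations for a reviewer e.List item.
# Contributor values are hypothetical; the threshold follows the RollupTest
# example in this chapter.

contributors = {"A1 Profit-center": 40000, "A2 Profit-center": 30000}

def flag(value, threshold=50000):
    # Conditional test: 1 if the value exceeds the threshold, else 0.
    return 1 if value > threshold else 0

# Pre-aggregation: the flag is calculated within each e.List item, and the
# reviewer sees the sum of the per-item results.
pre_aggregation = sum(flag(v) for v in contributors.values())

# Post-aggregation: the inputs are summed across e.List items first, and the
# flag is recalculated against the reviewer's aggregate total.
post_aggregation = flag(sum(contributors.values()))

print(pre_aggregation, post_aggregation)  # 0 1
```

Neither item exceeds the threshold on its own, so the pre-aggregation flag is 0, but the aggregate total does exceed it, so the post-aggregation flag is 1. This is why a rule aimed at contributors must use pre-aggregation calculations, while a rule aimed at reviewers uses post-aggregation ones.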

Example - In Analyst, Setting Up the D-Cube<br />

To use pre-aggregation calculations for a dimension, you must ensure that the order of the dimension<br />

list in a D-Cube is correct. When a planner sets up a D-Cube, D-Lists are chosen in the order that<br />

uses the calculation D-List first and the aggregation D-Lists last.<br />

In the following example, the first dimension, RollupTest Slots, holds data values. The second dimension, RollupTest FirstDim, occurs prior to the e.List item, so its calculations are pre-aggregation. The next dimension is the e.List. The final dimension, RollupTest LastDim, is post-aggregation because it occurs after the e.List.<br />


As shown next, the RollupTest FirstDim D-List includes the Conditional item, which is a test for<br />

data input greater than 50,000, with a default calculation option of Force to Zero for the Flag<br />

Value. That means it will not calculate a value for this aggregate.<br />

The next example shows the RollupTest LastDim D-List that includes Conditional1 items that test<br />

for LinkedValue greater than 50,000.


The following graphic shows the RollupTest LastDim D-List that includes Conditional2 items that<br />

test for InputValue greater than 50,000.<br />

The D-Cube that is built with RollupTest FirstDim and RollupTest LastDim shows how the values<br />

for cells are calculated.<br />


When the underlying calculations are defined in Analyst, you can view the outcome in the <strong>Contributor</strong> Web client. Suppose that in the <strong>Contributor</strong> Web client there are three e.List items: A1 Profit-center and A2 Profit-center, which roll up into A Region.<br />

For A1 Profit-center, because the pre-aggregate test is based on a conditional value of 50,000, only one input cell passes the test. The Conditional cell tests against the Input Value item, so the Conditional values are 0 and 1. Conditional1 tests LinkedValue and Conditional2 tests InputValue. The post-aggregate items are Text1 and Text2.<br />

For A2 Profit-center, you can see how the pre-aggregation (LinkedValue and InputValue) and post-<br />

aggregation (Text1 and Text2) tests change.<br />

The Reviewer e.List item shows that the Flag Value is not present. The Force to Zero option in<br />

Analyst suppressed this from the reviewer e.List item because no data was present.<br />

The Conditional values are sums of the values from the <strong>Contributor</strong> e.List items. For A1 Profit-center, the two values are 0 and 1. For A2 Profit-center, the values are 1 and 1. The aggregation adds the values of these flags, giving the reviewer values of 1 and 2.<br />

Conditional1 and Conditional2 do not have their values added because the calculation is recalculated against the aggregation total. Note that the Conditional1 value shows the test failing in the first row for the reviewer. Text1 and Text2 reveal that the formatted D-List items appear based on the recalculated Conditional1 and Conditional2 fields.


Define a Validation Rule<br />

To support your business decisions, your contributors must enter data that is valid and accurate.<br />

Validation rules give you a simple way to build business logic that checks the validity of data before<br />

a plan is saved or submitted.<br />

Rules also include an error message that appears when the rule returns a value of false. The text<br />

strings in error messages, names of the validation rules, and rule set names, can be translated through<br />

the Translation node in the Development branch in the <strong>Administration</strong> Console. Use the Content<br />

Language tab, which handles the translation of model strings, to specify your settings. Use the<br />

Product Language tab to translate all fixed parts of the validations, such as the following message:<br />

Submit is not allowed due to one or more blocking validation errors<br />

When creating a rule, you must also specify the cell range (scope) that is subject to validation.<br />

A validation rule contains a formula or expression that evaluates the data in one or more cells in the grid and returns a value of true or false. If you require complex expressions that are cross-dimensional or deeply nested, we recommend that you first construct them in IBM Cognos 8 <strong>Planning</strong> - Analyst.<br />

● Do not create contradictory rules for the same target range, because they will prevent contributors or reviewers from saving the plan.<br />

● If an entry does not conform to a rule, ensure that you provide explicit instructions in your<br />

message. For example, instead of stating invalid entry, state the message as Capital costs<br />

greater than $25,000 must be pre-approved.<br />

● Consider which items are visible and editable for the contributor. Cells that are readable can<br />

cause a validation error, but hidden or no data cells do not impact validation rules and cannot<br />

cause a validation error.<br />
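The danger of contradictory rules in the first guideline above can be made concrete with a small sketch. The predicates and values are hypothetical; a plan can be saved only if every rule in the set evaluates to true:

```python
# Sketch: contradictory rules on the same target range block saving.
# The rule predicates and cell values are hypothetical illustrations.

rule_a = lambda v: v > 25000   # entry must exceed 25,000
rule_b = lambda v: v < 10000   # entry must be under 10,000

def can_save(value, rules):
    """A cell value is acceptable only if every rule evaluates to True."""
    return all(rule(value) for rule in rules)

# No value can satisfy both rules at once, so every entry fails validation
# and the plan can never be saved.
print(any(can_save(v, [rule_a, rule_b]) for v in (5000, 20000, 30000)))  # False
```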

You can use saved selections to specify the data that you want validated. You can name and save<br />

a collection of items from a dimension using the Access Tables and Selections node under the<br />

Development branch in the <strong>Administration</strong> Console. Saved selections are dynamic in that the items<br />

in the selection change when an application is synchronized following changes to the Analyst model.<br />

Hidden and empty cells are not validated when the rule set is run.<br />

Steps<br />

1. Open the <strong>Administration</strong> Console.<br />

2. In the <strong>Administration</strong> tree, expand Datastores, DatastoreServerName, Applications, ApplicationName, Development, and the Data Validations folder.<br />

3. Click the Rules folder.<br />


4. Click New.<br />

The New Validation Rule wizard appears.<br />

5. On the Welcome page, click Next.<br />

6. On the Validation Rule Options page, do the following:<br />

● In the Rule Name box, type a unique name that distinguishes the validation rule from<br />

others.<br />

No blanks or special characters, such as apostrophe (’), colon (:), question marks (?), and<br />

quotation marks (") are allowed.<br />

● In the Rule Message box, type the error text message that you want the contributor or<br />

reviewer to see if the validation fails.<br />

We recommend that a message is included in the rule to facilitate data entry. The message<br />

should contain meaningful information that helps the contributor or reviewer enter the<br />

correct data.<br />

7. Click Next.<br />

8. On the Validation Rule Cubes page, select the D-Cube against which the rule is applied, and<br />

click Next.<br />

Assumptions cubes do not appear in the list of available D-Cubes.<br />

9. On the Validation Rule Dimension Selection page, select a measures dimension in the D-Cube<br />

whose items are used to create the boolean formula for the rule, and click Next.<br />

A rule expression is defined against a specific dimension in the selected D-Cube. All dimensions<br />

of the cube, except the e.List, are listed.<br />

10. On the Validation Rule Expression page, build the business logic by defining a rule formula<br />

that evaluates to either true or false, and click Next.<br />

11. Under Available components, select items from the specified dimension in the D-Cube that you want to use to define your rule expression, and then click the arrow to move them to the Expression definition box. Use the IF statement, AND/OR boolean operators, or logical comparison operators, such as =, <, >, <=, and >=; for example, (Margin >= 0.15) AND (Margin …).<br />


All dimensions in the selected D-Cube are available with the exception of the measures and<br />

e.List dimensions. Note that because the item includes aggregates as well as details, it<br />

is not an optimal data item to include in a rule.<br />

13. Click Finish. If you want to change or review your settings, click Back.<br />

The rule is automatically saved and associated with the model. It is now available for inclusion in<br />

a rule set.<br />

Define or Edit a Rule Set<br />

You can add rules to rule sets. Each rule set is associated with an action that occurs if the data entry<br />

fails validation. You can use more than one rule set for each validation process on a cube, with<br />

multiple fail actions or messages.<br />

You must create one or more rules before you can define a rule set.<br />

Steps<br />

1. Open the <strong>Administration</strong> Console.<br />

2. In the <strong>Administration</strong> tree, expand Datastores, DatastoreServerName, Applications, ApplicationName, Development, and the Data Validations folder.<br />

3. Click the Rule Sets folder, and choose whether to create a new rule set or edit an existing one:<br />

● To create a new rule set, click New.<br />

● To edit an existing rule set, click the rule set that you want to change, and then click Edit.<br />

4. In the Rule Set Name box, type a unique name for the rule set that distinguishes it from the<br />

others.<br />

5. In the Fail Action box, specify one of the following types of action to be triggered when one<br />

or more rules in the rule set fails validation:<br />

● To show only the rule message and take no action, click Message Only.<br />

● To show the rule message and restrict contributors or reviewers from submitting the plan,<br />

click Restrict Submit.<br />

● To show the rule message and prevent contributors or reviewers from either saving or<br />

submitting the plan, click Restrict Save and Submit.<br />

Important: Use caution when applying this setting; it is not considered a best practice. If one or more rules fail, contributors or reviewers cannot save the plan. To close the plan, the rules must be resolved to a value of true, which may not be possible for the user to achieve.<br />

6. In the upper rule grid, select the rule or rules that you want to include in the rule set, and click<br />

Add.<br />

The selected rule or rules appear in the lower grid.<br />

Tip: You can remove a rule from the rule set by using the Remove button.<br />

7. Click OK.<br />


8. Click the Save button to save the rule set.<br />

You can now associate the rule set with e.List items (p. 240).<br />

Edit a Validation Rule<br />

You can modify a rule to better reflect the constraints placed on data entry for a particular D-Cube.<br />

Steps<br />

1. Open the <strong>Administration</strong> Console.<br />

2. In the <strong>Administration</strong> tree, expand Datastores, DatastoreServerName, Applications, ApplicationName, Development, and the Data Validations folder.<br />

3. Click the Rules folder.<br />

4. Select a rule that you want to change, and click Edit.<br />

5. Choose how you want to modify the rule, and click OK.<br />

6. If you want to rename the rule to something more obvious, in the Name box, type the new<br />

name.<br />

7. If you want to change the message to reflect the new constraints or limitations, in the Message<br />

box, type the new message.<br />

8. If the constraint on data entry changed, do the following:<br />

● Click the ellipsis button next to the Expression box.<br />

● In the Edit Validation Rule Expression dialog box, define the new boolean expression used<br />

to evaluate data entry and click OK.<br />

● Under Available components, select items from the specified dimension in the D-Cube that you want to use to define your rule expression, and then click the arrow to move them to the Expression definition box. Use the IF statement, AND/OR boolean operators, or logical comparison operators, such as =, <, and >.<br />
Associate Rule Sets with e.List Items<br />

Steps<br />

1. Open the <strong>Administration</strong> Console.<br />


2. In the <strong>Administration</strong> tree, expand Datastores, DatastoreServerName, Applications, ApplicationName, Development, and the Data Validations folder.<br />

3. Click the Rule Set e.List Items folder.<br />

The Validation Rule Set grid shows all the available rules sets.<br />

4. Associate a rule set with an e.List item as follows:<br />

● Under Validation Rule Set, click a rule set.<br />

Tip: Press Ctrl + click to select multiple rule sets.<br />

● Under E.List Item Name, click the e.List item to which you want to apply the rules, and<br />

click Add. You can also select ALL, All DETAIL, ALL AGGREGATE, or any e.List saved<br />

selections.<br />

The By Rule Set tab is filtered by rule sets and lists all the rule sets and their associated<br />

e.List items. The By e.List Item tab is filtered by e.List items that are associated with<br />

the current rule sets.<br />

5. Click the Save button to save the associations to the e.List.<br />
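As a rough sketch of the idea behind a validation rule, it can be thought of as a Boolean predicate applied to cell values, with the rule message reported for each failing cell. The item names, values, and the evaluate_rule helper below are illustrative assumptions only, not part of the product:<br />

```python
# Sketch of a validation rule: a Boolean predicate over D-Cube cell values.
# Item names, values, and this helper are illustrative, not the product's API.

def evaluate_rule(cells, predicate, message):
    """Return the rule's message for every cell whose value fails the test."""
    return {item: message for item, value in cells.items() if not predicate(value)}

# Hypothetical constraint: entered values must fall between 0 and 10000.
cells = {"Travel": 12500.0, "Salaries": 98000.0, "Supplies": 900.0}
in_range = lambda v: 0 <= v <= 10000
failures = evaluate_rule(cells, in_range, "Value outside the permitted range")
```

In the Administration Console, the predicate corresponds to the expression built in the Edit Validation Rule Expression dialog box, and the message is shown to the Web client user for failing cells.<br />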

Chapter 13: Data Validations<br />

<strong>Administration</strong> <strong>Guide</strong> 241




Chapter 14: The Go to Production Process<br />

Use Go to Production to formally commit a set of changes. Any issues such as invalid editors, an<br />

invalid e.List, and destructive model changes are reported by the Go to Production process.<br />

The Go to Production process can be automated (p. 202) so that you can schedule it to run during<br />

slow periods.<br />

Go to production consists of the following stages:<br />

● pre-production<br />

● go to production<br />

● post-production<br />

The crucial stage is the go to production stage where the old production application is replaced by<br />

the incoming development application.<br />

Until the go to production stage, all Web client users can use the old production application as<br />

normal. Immediately after the go to production stage, a new production application exists that Web<br />

client users then use. After the go to production stage, Web client users attempting to open a model<br />

have access only to the new production application. However, if users are already viewing or editing<br />

a model from the old production application at the time of the go to production stage, client-side<br />

reconciliation is required.<br />

In the go to production stage, the old production model definition is replaced by a new production<br />

model definition (the incoming development model definition). Many development changes have<br />

no effect on the structure or content of production data blocks and affect only the production model<br />

definitions. An example is everything appearing within the Web-Client Configuration branch in<br />

the <strong>Contributor</strong> <strong>Administration</strong> Console. If these are the only changes made, the production<br />

application is fully updated when the go to production stage is complete.<br />

Other development changes require a new production model definition and require the production<br />

data blocks to be updated. For example, synchronizing with the Analyst model can change the<br />

structure and content of data blocks in many ways. If changes that affect data blocks have been<br />

made, the go to production stage fully updates the production model definitions as normal, but the<br />

data blocks are updated by a subsequent reconciliation process.<br />

If an import is performed, an Analyst to <strong>Contributor</strong> D-Link is run, or if an administration link is<br />

run in the development application, then import data blocks are created. If there are import data<br />

blocks at the point of the go to production stage, these import data blocks are moved into the new<br />

production application. After the go to production stage, the import data blocks are combined with<br />

the production data blocks, a step that is also handled by the reconciliation process. After this, the<br />

import data blocks are removed from the development application.<br />

In summary, the go to production stage replaces the old production model definition with a new<br />

one, and moves any import data blocks into production. If import data blocks or changes that affect<br />

production data blocks are made, the production data blocks are updated by a reconciliation process<br />

that follows Go to Production.<br />
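The summary above can be sketched as a simplified model (not the product's implementation): the production model definition is replaced by the development one, import data blocks move into production, and the e.List items that received import data are queued for the reconciliation that follows. All names are illustrative:<br />

```python
# Simplified sketch of the go to production stage, not the product's internals.

def go_to_production(app):
    # replace the old production model definition with the development one
    app["production_model"] = dict(app["development_model"])
    # move import data blocks into the new production application
    app["pending_import_blocks"] = app.pop("import_blocks", {})
    # reconciliation, which runs after this stage, combines these blocks
    # with the production data blocks
    return sorted(app["pending_import_blocks"])

app = {
    "development_model": {"cubes": ["Revenue Plan", "Expenses"]},
    "production_model": {"cubes": ["Revenue Plan"]},
    "import_blocks": {"Paris": {"Expenses": 120.0}},
}
to_reconcile = go_to_production(app)
```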


During the go to production stage the application is taken offline temporarily to ensure data<br />

integrity. The new e.List item workflow states are determined to correctly process any e.List hierarchy<br />

changes. As soon as those changes are applied the application goes online again and the post-pro-<br />

duction processes are started. This offline period is typically so short that it is transparent to users,<br />

but it can sometimes exceed one minute.<br />

<strong>Planning</strong> Packages<br />


In IBM Cognos 8, a package is a folder in IBM Cognos Connection. You can open the package in<br />

a studio to view its content. A <strong>Planning</strong> package is a light-weight package that contains only the<br />

connection information to the cubes in the <strong>Planning</strong> application. The D-List and D-List item metadata<br />

are extracted from the <strong>Planning</strong> application at run-time.<br />

To access <strong>Contributor</strong> applications, you must select the option to create the package when you run<br />

Go to Production. This option also gives users access to IBM Cognos 8 studios from the <strong>Contributor</strong><br />

application if they have the studios installed and enables users to report against live <strong>Contributor</strong><br />

data using the <strong>Planning</strong> Data Service (p. 302).<br />

You may choose not to create a package if you just want to publish the data and create a PowerCube<br />

or a Framework Manager model using the extensions. This saves time because Go to Production finishes more quickly.<br />

To create a <strong>Planning</strong> Package, you must have the Directory capability. This is not part of the <strong>Planning</strong><br />

Rights Administrator role, but it is part of the Security Administrator role. For more information,<br />

see "Capabilities Needed to Create IBM Cognos 8 <strong>Planning</strong> Packages" (p. 34).<br />

The <strong>Planning</strong> Package is created with the same display name as the <strong>Contributor</strong> application by<br />

default, and a data source connection named IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong> is created in<br />

Framework Manager. You can configure the name of the <strong>Planning</strong> Package, and add a screen tip<br />

and description. For more information, see "Set Go to Production Options" (p. 82).<br />

The security on the <strong>Planning</strong> Package is as follows:<br />

● All users, groups, or roles that have <strong>Planning</strong> <strong>Administration</strong> capability are granted adminis-<br />

trative access to the package.<br />

● All the users who have access to the application are added as users of this package.<br />

● The user who is logged on to the console when performing the Go to Production is the user<br />

who creates the package. Therefore, that user is given administrative access to the package.<br />

This user is not necessarily a planning administrator because they could have been granted only<br />

Go to Production permission by a planning administrator.<br />

If you remove an application from the console, any corresponding planning package in IBM Cognos<br />

Connection is disabled. The package will be hidden from the users and will appear with a locked<br />

icon to administrators. This allows administrators to maintain an application while making it appear<br />

offline to users. When the application is re-added in the console, the corresponding planning<br />

package is re-enabled.


Reconciliation<br />

The reconciliation process ensures that the copy of the application used by users on the Web is up to<br />

date. For example, all data is imported, new cubes are added, and changed cubes are updated. For<br />

more information, see "Reconciliation" (p. 54).<br />

The first time Go to Production is run for an application, all e.List items are reconciled. Subsequently,<br />

only some changes result in e.List items being reconciled. Reconciliation can take some<br />

time, depending on the size of the e.List. If you are making changes that require reconciliation,<br />

check that you made all required changes before running Go to Production.<br />

The Production Application<br />

When you first create a <strong>Contributor</strong> application, there is only a development version. The production version is created only after Go to Production is run. The production version of the application is the version that users see on the Web. Changes made in the development application apply only after Go to Production is run.<br />

The production version of a <strong>Contributor</strong> application consists of one or more model definitions, and one data block for each contribution or review e.List item.<br />

Before you can run Go to Production, the <strong>Contributor</strong> application must contain at least an e.List. If you run the Go to Production process without setting any rights, no one can view the application on the Web. You can preview the application prior to setting any rights by selecting Production, Preview in the <strong>Administration</strong> Console.<br />

When you start Go to Production, job status is checked. If jobs are running or queued, you cannot run Go to Production. A message appears prompting you to go to the Job Management window. For more information, see "Managing Jobs" (p. 52).<br />

If there are discrepancies in information about translations in the datastore table and the model XML, you cannot run Go to Production. For more information, see "Datastore Options" (p. 84).<br />

Model Definition<br />

A model definition is a self-contained definition of the model. It holds definitions of the dimensions, cubes, and D-Links of the model, as set up in IBM Cognos 8 <strong>Planning</strong> - Analyst. It also holds details of modifications applied in the <strong>Contributor</strong> <strong>Administration</strong> Console. This includes configuration details, such as navigation order for cubes, options such as whether reviewer edit or slice and dice are allowed, and <strong>Planning</strong> Instructions. A model definition also includes e.List details and access table definitions. It also contains all assumptions cube data, because this data does not vary by e.List item.<br />

Data Block<br />

The data block for an e.List item contains all data relevant to an individual e.List item, except assumptions cube data (p. 119). It contains one element of data for every cell in the model, except for any cell identified as No Data by <strong>Contributor</strong> access tables (p. 122). No Data cells are generally treated as if they did not exist. This reduces the volume of data that must be downloaded to and uploaded from clients, speeding up client-side recalculation and server-side aggregation.<br />
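The space saving from No Data cells can be sketched as follows. The cube, item, and function names are illustrative assumptions, not the product's storage format:<br />

```python
# Sketch: a data block keeps one element per cell, skipping cells marked
# No Data by access tables, so those cells never travel between client
# and server. All names here are illustrative.

def build_data_block(all_cells, no_data_cells):
    """Keep every cell except those identified as No Data."""
    return {coord: value for coord, value in all_cells.items()
            if coord not in no_data_cells}

cells = {("Travel", "Jan"): 100.0,
         ("Salaries", "Jan"): 5000.0,
         ("Licences", "Jan"): 0.0}
no_data = {("Licences", "Jan")}   # hidden by a Contributor access table
block = build_data_block(cells, no_data)
```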


Production Tasks<br />

When a Web client user opens an e.List item by clicking its name in the status table, a model<br />

definition is opened, and then the appropriate data block is loaded into it. If a multi e.List item<br />

view is opened, more than one data block is loaded. Wherever possible, the model definition and<br />

data block are loaded from the client-side cache if enabled. If the client-side cache does not contain<br />

an up-to-date version of the model definition or the data block, they are downloaded from the<br />

server. Note that data in the data block is not compressed, although compression and decompression<br />

takes place on transmission to and from the client.<br />
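The cache decision described above can be sketched like this; the version check and function names are assumptions for illustration only:<br />

```python
# Sketch of the client-side cache lookup: use the cached copy only when it
# is up to date, otherwise download from the server and refresh the cache.

def load(name, cache, server):
    cached = cache.get(name)
    if cached is not None and cached["version"] == server[name]["version"]:
        return cached, "cache"
    cache[name] = server[name]   # download and refresh the client cache
    return server[name], "server"

server = {"model": {"version": 3}, "block:Paris": {"version": 7}}
cache = {"model": {"version": 3}, "block:Paris": {"version": 6}}
_, model_source = load("model", cache, server)         # cache is current
_, block_source = load("block:Paris", cache, server)   # cache is stale
```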

In addition to a data block, each e.List item also has an annotation block. Various translation tables<br />

exist if multiple languages are used.<br />

You can do the following to the production version of a <strong>Contributor</strong> application:<br />

● publish data<br />

You publish the production version of the application. However, you must set dimensions for<br />

publish in the development application and then run Go to Production to apply the changes to<br />

the production version. This is because setting dimensions for publish requires datastore tables<br />

to be restructured (p. 82).<br />

● delete user and audit annotations<br />

● preview the workflow state<br />

● preview the model and data<br />

● configure extensions for the Classic <strong>Contributor</strong> Web Client<br />

Extensions allow you to extend the functionality of the Classic <strong>Contributor</strong> Web Client in ways<br />

that fulfill business requirements. The <strong>Contributor</strong> Web Client includes the Get Data and Export<br />

for Excel functionality.<br />

● run administration links<br />

Cut-down Models and Multiple Languages<br />


If neither cut-down models nor multiple languages are used, there is just one master model definition<br />

for an application.<br />

When cut-down models are used, separate model definitions are produced for individual e.List<br />

items or groups of e.List items, according to the cut-down model option chosen (p. 249).<br />

When multiple languages are used, a separate master model definition exists for each language.<br />

Each master model definition contains just the relevant translated strings to prevent the master<br />

model definition from becoming excessively large.<br />

An application may use multiple languages and cut-down models.
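The combination can be sketched as a lookup: one master definition per language, plus optional cut-down definitions per e.List item or group. The keys and helper below are illustrative assumptions, not the product's internal structures:<br />

```python
# Sketch of which model definition a Web client user receives when both
# multiple languages and cut-down models are in use.

def pick_definition(definitions, language, elist_item):
    # prefer a cut-down model for this e.List item, else the language master
    return definitions.get((language, elist_item),
                           definitions[(language, None)])

defs = {("en", None): "en master",
        ("fr", None): "fr master",
        ("en", "Paris"): "en cut-down: Paris"}
```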


The Development Application<br />

Web client users interact only with the production version of an application. Most application<br />

configuration or administration is performed in the development version of an application. This<br />

has no impact on the production application or the web client users, until you run Go to Production.<br />

In considering the Go to Production process, the following components are relevant:<br />

● The development version of the master model definition<br />

● A set of import data blocks<br />

Development Model Definition<br />

With the exception of data import, any configuration performed in the development application affects the development model definition. In the <strong>Contributor</strong> <strong>Administration</strong> Console, changes are applied to the development model definition as soon as they are saved. For example, new access table definitions are stored in the development model definition when you click the save button in the main Access Tables window.<br />

You cannot undo individual changes made to the development application. However, you can reset the entire development application so that it matches the current production application, using the Reset Development to Production toolbar button.<br />

Import Data Blocks<br />

The final stage of the <strong>Contributor</strong> import process creates a set of import data blocks. The Prepare<br />

import stage extracts data from the import staging tables and creates one import data block for<br />

each contribution e.List item with import data. Import data blocks contain data for only one e.List<br />

item, and contain only valid data. Invalid data, such as data that does not match any dimension<br />

item or targets formula items or No Data cells, is written to an import errors table (ie_cubename).<br />

No import data block is created for e.List items with no import data.<br />

By removing all irrelevant or invalid data, the import data block for an e.List item is kept as small<br />

as possible. This is crucial for the subsequent reconciliation process, particularly for client-side<br />

reconciliation.<br />

Note that if you use the Prepare zero data option, an empty import data block is created for all<br />

e.List items.<br />
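The Prepare import stage can be sketched as follows: staging rows become one import data block per contribution e.List item, and invalid rows go to the import errors table (ie_cubename). The row layout and names are illustrative assumptions:<br />

```python
# Sketch of the Prepare import stage; not the product's implementation.

def prepare_import(rows, valid_cells):
    blocks, errors = {}, []
    for elist_item, cell, value in rows:
        if cell in valid_cells:
            blocks.setdefault(elist_item, {})[cell] = value
        else:
            errors.append((elist_item, cell, value))  # written to ie_cubename
    # e.List items with no valid rows simply get no import data block
    return blocks, errors

rows = [("Paris", "Travel", 100.0),
        ("Paris", "Total", 999.0),   # formula item: invalid target
        ("Oslo", "Travel", 40.0)]
blocks, errors = prepare_import(rows, valid_cells={"Travel", "Salaries"})
```

Discarding invalid rows up front is what keeps each import data block small, which matters for the subsequent reconciliation, particularly on the client side.<br />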

Because the Prepare process is performed using the job architecture (a Prepare_Import job), it can<br />

be scaled out, and monitored in the normal way. It does not conflict with other jobs for the<br />

application, but it is not possible to run Go to Production until it is complete.<br />

Analyst to <strong>Contributor</strong> D-Links and administration links that target the development application<br />

also create import data blocks that are brought into the production application by the Go to Pro-<br />

duction process. Data from links that target the production application is brought in to the applic-<br />

ation by an activate process that triggers a reconcile job. For more information see "Reconcili-<br />

ation" (p. 54).<br />


Run Go to Production<br />

You must run Go to Production to commit any changes made to the development application, such<br />

as configuration options, importing data, and synchronizing with Analyst.<br />

You must wait for all jobs to stop running before running Go to Production. This includes the<br />

reconcile job.<br />

Before running Go to Production, ensure that<br />

● an e.List was imported and rights were set<br />

● appropriate data dimensions for publish were set<br />

● the Copy Development e.List item publish setting to production application and Prevent client-side reconciliation options are set as required<br />

For information about these options, see "Go to Production Options Window" (p. 248).<br />

Steps<br />

1. Select the <strong>Contributor</strong> application.<br />

2. Click the Go to Production button. If this button is not enabled, check that the application has<br />

an e.List. If it does, you do not have access rights to run Go to Production.<br />

Go to Production Options Window<br />

Use the Go to Production Options window to specify whether to back up the datastore (recommended), whether to reset the Workflow State, and whether to show information about invalid owners and editors. The Invalid owners and editors option is not relevant the first time you run Go to Production.<br />

Back-up Datastore<br />

This option creates a backup of the development and production applications and stores them in<br />

the location specified during application creation or in the Datastore Maintenance window (p. 84).<br />

We recommend that you set this option. If you clear this option, a warning advises you to make a<br />

backup in case of problems. Note that when you automate the Go to Production process, there is<br />

no backup option and you should schedule a backup to be made before running the Go to Production<br />

process.<br />

Create <strong>Planning</strong> Package<br />


In order to be able to access <strong>Contributor</strong> applications from IBM Cognos Connection, you must<br />

select this option when you run Go to Production. This option also gives users access to IBM<br />

Cognos 8 studios from the <strong>Contributor</strong> application if they have the studios installed, and enables<br />

users to report against live <strong>Contributor</strong> data using the <strong>Planning</strong> Data Service (p. 302). For more<br />

information, see "<strong>Planning</strong> Packages" (p. 244).


Display Invalid Owners and Editors<br />

Use this option to choose whether to show owners and editors who become invalid when you run Go to Production. This option can add some time to the Go to Production process.<br />

Workflow States<br />

Reset resets the state of the e.List items in the <strong>Contributor</strong> application.<br />

If required, select one of the following options:<br />

● Not Started<br />

This sets every e.List item back to the state of Not Started.<br />

● Not Started and Work in Progress<br />

This sets all e.List items that are not in a state of Not Started to the state of Work in Progress.<br />

e.List items that are Not Started remain in this state. This option aggregates all data but does<br />

not indicate whether e.List items were modified.<br />

● Work in Progress<br />

This sets all e.List items to a state of Work in Progress, meaning that changes were saved but<br />

not submitted.<br />

● Locked<br />

This locks all e.List items. No changes can be made to locked e.List items, but the data can be<br />

viewed.<br />
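The four reset options above can be sketched as a mapping from an e.List item's current workflow state to its state after Go to Production. Option and state names follow the list above; the function itself is illustrative only:<br />

```python
# Sketch of the workflow-state reset options; not the product's implementation.

def reset_state(current, option):
    if option == "Not Started":
        return "Not Started"
    if option == "Not Started and Work in Progress":
        # Not Started items keep their state; everything else becomes WIP
        return current if current == "Not Started" else "Work in Progress"
    if option in ("Work in Progress", "Locked"):
        return option
    return current  # no reset selected

states = {"Paris": "Submitted", "Oslo": "Not Started"}
after = {item: reset_state(state, "Not Started and Work in Progress")
         for item, state in states.items()}
```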

Skip top e.List items enables you to reset all but the top e.List items.<br />

Show Changes Window<br />

The Show Changes window contains tabs that show the differences between the development<br />

application that you are running the Go to Production process on and the current production<br />

application. By looking at this information, you can see the effect that Go to Production has on the<br />

production application.<br />

The First Time Go to Production is Run on an Application<br />

The first time you run Go to Production, no changes are shown. If you imported data, a tab shows<br />

import data details (p. 254). You can cancel the Go to Production process at this stage.<br />

After you view Import data details, click Next.<br />

If you have set cut-down model options (p. 74), the cut-down models job monitor is shown until<br />

the cut-down models job has finished running. You can cancel while the cut-down models job is<br />

running, but not after it is complete. The final process is started automatically.<br />

Subsequent Go to Productions<br />

When you run Go to Production more than once, depending on the changes you have made, you<br />

may see the following information:<br />

● model changes<br />


● import data details (p. 254)<br />

● invalid owners and editors (p. 254)<br />

● e.List items to be reconciled (p. 256)<br />

Model Changes Window<br />

The Model Changes window shows the changes made to the IBM Cognos 8 <strong>Planning</strong> - Analyst model since the previous production application was created. For example, it shows which cubes were added, removed, or had their dimensions changed, and whether dimensions were added, deleted, or substituted.<br />

Pay particular attention to changes that could affect production data, such as deleted cubes or dimension items.<br />

Click Advanced to view a detailed description of the differences between the previous Analyst model and the current model. It lists the cubes and dimensions that changed. When you click an item, a breakdown of the changes appears. Typically, this information is used for debugging purposes.<br />

Changes to Cubes<br />

When you expand the cubes listed under Common Cubes, the following branches are listed under<br />

each cube: New, Old, and Differences.<br />

New and Old contain the same categories of information and list what was in the old model and<br />

what is in the new model.<br />

Dimensions: The dimensions in the order in which they were in Analyst.<br />

AccessTables: The access tables assigned to the cube.<br />

AccessLevel: The base access level for the cube.<br />

BaseFormat: The D-Cube format as defined in Analyst. If no format is defined, it defaults to the system default.<br />


UpdateLinks: The update links that are associated with the cube.<br />

BreakBackEnabled: 1 = breakback enabled for the cube; -1 = breakback disabled for the cube.<br />

MeasuresDimension: -1 = no dimension for publish set on the cube; n = the position in the dimension order in Analyst of the dimension for publish, excluding the e.List.<br />

AggregationDimension: 1 = the cube has an e.List; -1 = no e.List.<br />
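A small helper can decode these numeric values when reading the New and Old branches in the Advanced view. The value encodings follow the properties above; the function itself is illustrative only:<br />

```python
# Sketch: decode the numeric cube properties shown in the Advanced view.

def describe_cube(props):
    return {
        "breakback": "enabled" if props["BreakBackEnabled"] == 1 else "disabled",
        "e.List": "present" if props["AggregationDimension"] >= 1 else "absent",
        "publish dimension": ("none" if props["MeasuresDimension"] == -1
                              else f"position {props['MeasuresDimension']}"),
    }

old = {"BreakBackEnabled": -1, "AggregationDimension": -1, "MeasuresDimension": -1}
new = {"BreakBackEnabled": 1, "AggregationDimension": 1, "MeasuresDimension": 2}
```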

The following window shows the differences for the D-Cube Revenue Plan that result from adding<br />

a dimension:<br />

[New] Dimensions: Lists the dimensions after the synchronize.<br />

[Old] Dimensions: The dimensions before the synchronize.<br />

[New] AggregationDimension: A positive number indicates that this cube contains the e.List.<br />

[Old] AggregationDimension: -1 indicates that there was no e.List in this cube in the old model.<br />


Changes to Dimensions<br />

When a dimension is changed, three branches are listed under each dimension: New, Old, and Differences. When you click one of these branches, a table lists the following details for each dimension item:<br />

Item Guid: A unique internal reference for items in a model. When you add a dimension item, the item is assigned a GUID.<br />

Item Id: An internal unique numeric reference.<br />

Name: The name of the dimension item.<br />

Parent Index: An internal reference that indicates the parent of each item in the hierarchy.<br />

Changes to Links<br />

When a link changes, three branches are shown: New, Old, and Differences.<br />


The New and Old branches show the following information:<br />

Source: The name of the source D-Cube.<br />

Target: The name of the target D-Cube.<br />

Correspondences: The matched dimensions, if any exist.<br />

Mode: The execution mode selected for the link.<br />

Scale: The scaling factor applied to the D-Link.<br />

UseRounding: The rounding factor used.<br />

RoundingDigit: The rounding digit used.<br />

Match names: Link name: The matched dimensions.<br />

SourceDimension: The source dimension.<br />


SourceSubColumns: The source sub column, if it exists.<br />

TargetSubColumns: The target sub column, if it exists.<br />

CaseSensitive: 1 = on; 0 = off.<br />

TargetCalculations: The target calculation, if it exists.<br />

DumpItem: Indicates that data from unmatched source items is assigned to a dump item in the target D-List.<br />

Selection:Dimension name: The dimension name where there is no match.<br />

Selection: The selection of items.<br />

IsTarget: Indicates the target. 0 indicates a selection on the source dimension rather than the target.<br />

When you click the Difference branch, you see an overview of the changes.<br />

Import Data Details Tab<br />

The Import data details tab appears only if you imported data. This window shows the e.List items<br />

that have import data blocks prepared. Within each e.List item, the number of data cells prepared<br />

per cube are listed.<br />

Invalid Owners and Editors Tab<br />

The Invalid owners and editors tab appears only if you selected the Display invalid owners and editors check box in the Go to Production Options tab.<br />

This tab lists users who are currently editing or annotating, and who will become invalid when the development application becomes the production application. The Editor column lists the editors who are currently editing the e.List item.<br />

Invalid Editors<br />

An invalid editor is a user who was editing or annotating an e.List item when Go to Production was run, and who, due to a change, can no longer edit or annotate the e.List item. These changes can be one of the following:<br />

● The e.List item that was being edited or annotated was deleted.<br />

● The rights of a user were changed to view.


● Reviewer edit (p. 74) is prevented.<br />

● The review depths of an e.List item that is being edited by a reviewer or annotated were changed so that the user no longer has access.<br />

Editor Lagging<br />

Editor lagging lists those Web client users who are editing at the time, either online or offline.<br />

Steps<br />

1. Go to the Production branch of the application, and then click Preview.<br />

2. Right-click on the e.List item and select Properties.<br />

3. Click the Editors tab.<br />

If the e.List item is being worked on off-line, a cross is shown next to Edit session connected.<br />

This is shown because in some circumstances, a user is unable to save their changes. For example,<br />

a reconcile job runs for the e.List item that is being edited or annotated and client side recon-<br />

ciliation is prevented. The user is bounced off the e.List item, losing any changes. They can save<br />

the contents of the grid locally to a text file, and re-import the file. Another example is when<br />

two reconcile jobs run for the e.List item while someone is editing or annotating it (on or offline).<br />

The Prevent client side reconciliation option can be on or off. The user is bounced off the e.List<br />

item, losing any changes, or is unable to bring the data online.<br />

Note that an administration link or an Analyst to <strong>Contributor</strong> link to a production version of<br />

an application causes a reconcile job to be run to update the e.List items.<br />

Sometimes the <strong>Administration</strong> Console cannot detect whether a user is still editing or annotating,<br />

although the <strong>Administration</strong> Console is always aware of the start of an editing session. An<br />

editing session is ended when the user closes the grid, and submits the e.List item. Normally<br />

these actions are detected by the <strong>Administration</strong> Console. However, if the user loses their net-<br />

work connection while editing, the <strong>Administration</strong> Console thinks that the user is still editing.<br />

If the user reconnects to the network and ends the session by closing the grid or submitting the<br />

e.List item, the <strong>Administration</strong> Console detects that the edit session is ended.<br />
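The session bookkeeping described above can be sketched as follows: the console records the start of every session, but a lost network connection leaves a session looking open until the user reconnects and closes the grid or submits. The function names are illustrative assumptions:<br />

```python
# Sketch of edit-session tracking; not the Administration Console's internals.

sessions = set()

def start_edit(user, item):
    sessions.add((user, item))       # session start is always detected

def end_edit(user, item):
    sessions.discard((user, item))   # detected on grid close or submit

start_edit("jsmith", "Paris")
# Network drops here: no event reaches the console, so the session
# still appears to be in progress.
still_editing = ("jsmith", "Paris") in sessions
end_edit("jsmith", "Paris")          # user reconnects and closes the grid
```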

How a User Can Lose Access to an e.List Item<br />

The following changes result in an end user losing access to an e.List item they are currently editing<br />

or annotating:<br />

● A user who was editing or annotating an e.List item when Go to Production was run was<br />

deleted.<br />

● An e.List item that was being edited or annotated when Go to Production was run was deleted.<br />

● The rights of a user who was editing or annotating an e.List item when Go to Production was<br />

run have been changed to View.<br />

● Reviewer edit was prevented and the reviewer was editing an e.List item when Go to Production<br />

was run.<br />


● The review depths of an e.List item that is being edited by a reviewer (with reviewer edit allowed)<br />

or annotated have been changed so that the user no longer has access.<br />

● A reconcile job (p. 54) is run for the e.List item that is being edited or annotated and Client<br />

side reconciliation is prevented.<br />

● Two reconcile jobs have been run for the e.List item while someone is editing or annotating it<br />

(online or offline). Prevent client side reconciliation can be on or off. Note that running an<br />

administration link or an Analyst to <strong>Contributor</strong> link causes a reconcile job to run.<br />

● Another user takes ownership of the e.List item while the current user is editing or annotating it.<br />

In these cases, users receive a warning message and the buttons in the grid disappear. Users can<br />

still right-click in the data and save to file.<br />

e.List Items to be Reconciled Tab<br />

The Owner column lists the current owners of the e.List items to be reconciled. By default, the<br />

current owner is the first person in the Rights table to be assigned to the e.List item with rights<br />

higher than View. If another owner takes ownership of the e.List item, they become the current<br />

owner (p. 95). If the e.List item has no current owner, it shows System.<br />

The Editor column lists the names of users who are currently editing or annotating the e.List item<br />

to be reconciled. If the e.List item is currently being edited, this is the same name as in the Owner<br />

column. If the e.List item is not being edited, the cell says No Editor.<br />

The Import Data Block column indicates whether there are import data blocks for the e.List item.<br />

Cut-down Models Window<br />

If cut-down models (p. 138) are required, they are generated at this stage so that they are available<br />

to Web client users as soon as the new application is available.<br />

Finish Window<br />

After the cut-down model job runs, the Finish window is displayed and the final step in the Go to<br />

Production process is started automatically.<br />

During the final stage of Go to Production, the following processes occur.<br />

Datastore Backup<br />

A datastore backup is made if this option was selected. It happens after the cut-down models process.<br />

Preproduction Processes<br />

● A master model definition per language is produced.<br />
● New e.List items are added to datastore tables.<br />
● Import data blocks are associated with the production application.<br />
● Error trapping takes place; for example, if there are no e.List items, Go to Production does not take place.<br />
● The development and production model definitions are unpacked and loaded into memory.<br />
● Two e.List items are reconciled as a test. Most errors in reconciliation occur when the first e.List items are reconciled.<br />

Go to Production<br />

● The switch-over from the development to the production application is performed. During this stage, the system takes the application offline temporarily to ensure data integrity. The new e.List item workflow states are determined during this time to correctly process any e.List hierarchy changes. As soon as those changes are applied, the application comes back online and the post-production processes are started. This offline period is typically so short that it is transparent to users, but it can sometimes exceed one minute.<br />
● The workflow state is refreshed to include new items.<br />
● The new application is put into production.<br />
● The Web application is restarted and users can now submit.<br />

Post Production Tasks<br />

● Any e.List items that were removed are deleted from the datastore.<br />
● Completed jobs that are no longer relevant are removed from the job list, such as a publish of a previous production application.<br />

When the process completes, a message tells you that you successfully put the development<br />

application into production.<br />



Then the following operations are performed:<br />

● Obsolete cut-down models are removed by the cut-down tidy job.<br />

● Old import data blocks are removed.<br />

● A validate_users job is run to check that the current owner or<br />

editor of an e.List item can still access the e.List item.<br />

● Redundant copies of translations from the previous production<br />

application are removed by the language_tidy job.<br />

● If reconciliation is required, it is queued and run as soon as job<br />

servers are started and set up to monitor the application.<br />

Note: If you set the production application offline before running the Go to Production process, it<br />

is offline when Go to Production finishes running. If the production application is online before<br />

running Go to Production, it is online when Go to Production finishes.<br />


Chapter 15: Publishing Data<br />

You can publish the data collected by IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong> to a datastore, either<br />

from the <strong>Administration</strong> Console, or using the publish macros (p. 211). The data can then be used<br />

either as a source for a data mart or warehouse, or with IBM Cognos 8 studios. The publish process<br />

creates a datastore containing tables and views based on the publish layout and options that you<br />

select.<br />

Publish Layouts<br />

Choose from these types of publish layouts: table-only, incremental, and view.<br />

● The table-only layout gives users greater flexibility in reporting on <strong>Planning</strong> data. The table-<br />

only layout can also be used as a data source for other applications. This layout is required by<br />

the Generate Framework Manager Model Admin extension (p. 306) and the Generate Transformer<br />

Model Admin extension (p. 309).<br />

● The incremental publish layout publishes only the e.List items that contain changed data. Users<br />

can schedule an incremental publish using a macro (p. 216) or through IBM Cognos Connection<br />

and Event Studio. You can achieve near real-time publishing by closely scheduling incremental<br />

publishes.<br />

● The view layout generates views in addition to the export tables. This layout is for historical<br />

purposes.<br />

The Publish Process<br />

Publishing data is a production task that can only be performed after you run Go to Production<br />

and all e.List items that you are publishing are reconciled. You can monitor the progress of the<br />

e.List items being reconciled in the Preview window (p. 293). If you are setting data dimensions for<br />

publish for the view layout, you must do this before you run Go to Production. This is not necessary<br />

for the table-only layout.<br />

When you publish data, the following things happen:<br />

● A publish container is created, if one does not already exist.<br />

● An accurate snapshot is taken of the data at the time a publish is run to ensure a consistent<br />

read.<br />

● A publish job runs. This creates tables in the datastore, depending on the layout and options<br />

selected. For more information, see "Jobs" (p. 49).<br />

You can run a publish at the same time as an administration link (p. 147).<br />


The Publish Data Store Container<br />

You must publish to a separate publish container. This is because the main application container<br />

holds the transactional planning data in compact XML binary large object (blob) format, and must<br />

be backed up on a regular schedule based on the lifecycle of the transactional application.<br />

The publish container contains a snapshot of the planning data in relational form, which has a different<br />

lifecycle and a significantly different storage and performance profile. By having separate<br />

publish containers, you can make use of dedicated job servers for publish datastores. The two<br />

available publish layouts cannot coexist in a single datastore container.<br />

Access Rights Needed for Publishing<br />

You must be granted rights to publish data for the application (p. 39). If this is the first time you<br />

run publish, you can either publish to the default publish container or create a new publish container.<br />

The default publish container resides on the same datastore server as the application, with the same<br />

tablespace settings (for Oracle and DB2 UDB).<br />

For publish to be processed, the publish container must be added to a job server cluster or job server<br />

(p. 58). Therefore, you must either have the right to create a publish container and assign objects<br />

to a job server, or the publish container must already be created and added to the job server (p. 39).<br />

Publish Scripts<br />

You may need to create publish scripts before you can publish data if you do not have DBA rights<br />

to your datastore server.<br />

To generate publish scripts, the Generate Scripts option must be set to Yes in the Admin Options<br />

table (p. 180).<br />

If you attempt to publish but a publish container does not exist, a script is generated. A DBA must<br />

then run the script to create the container. A message indicating the location of the script is shown.<br />

If the publish container does exist, a check is run to see if there are any datastore incompatibilities.<br />

If there are incompatibilities, another script is generated. Incompatibilities occur if you republish<br />

a datastore, and the format of the metadata has changed between publishes. For example, a cube<br />

was added, data dimensions changed, items were added to the Analyst model. There are always<br />

incompatibilities on the first publish, since the metadata tables are not present. You cannot publish<br />

until this script is run to update the datastore.<br />

You can generate a synchronization script manually by clicking the Generate synchronization script<br />

for datastore button.<br />

Warnings<br />

You may receive a warning similar to the following when running a script generated by Table-only<br />

layout publish, when the Generate Scripts option is selected:<br />

"Warning: The table 'annotationobject' has been created but its maximum row size (8658) exceeds<br />

the maximum number of bytes per row (8060). INSERT or UPDATE of a row in this table will fail<br />

if the resulting row length exceeds 8060 bytes."


The table definition allows for a large amount of data to be stored per row. SQL Server generates<br />

a warning to let you know that there is a limit on how much data you can have on a row. If your<br />

annotation data exceeds this limit then your publish will fail. You can reduce the amount of data<br />

by selecting a smaller data dimension or by reducing the amount of data in the system, for example<br />

by using Delete Commentary.<br />

Selecting e.List Items to Be Published<br />

When you publish data, select the e.List items that you want to publish.<br />

● For a View layout, you can import the e.List item publish settings in the e.List import file. If<br />

you do this, select the Copy development e.List item publish settings to production application<br />

option. When you run Go to Production, the publish settings are copied to the e.List items tab.<br />

If you have not imported the publish e.List item settings, you can also set them on the e.List<br />

items tab.<br />

● For a Table-only layout, you must set the publish e.List item settings on the e.List items tab.<br />

Reporting Directly From Publish Tables<br />

IBM Cognos <strong>Planning</strong> published data is stored in standard datastore tables. You can report directly<br />

from these tables, using the Generate Framework Manager Model functionality to help in the process.<br />

You must not run reports during the publish process, as you may get inconsistent results. Also, if<br />

destructive changes have been made to the <strong>Planning</strong> environment, the publish tables may no longer<br />

match the ones defined in your reporting metadata model.<br />

It is best practice to isolate the business intelligence reports from the source data environments by<br />

creating a reporting datastore. This allows you to add value by bringing in data from other applic-<br />

ations. For example, perhaps there is some data which was optimized out of the IBM Cognos<br />

<strong>Planning</strong> application but would be useful for your reports.<br />

At its simplest, this datastore could be a straight copy of the publish tables as produced by IBM<br />

Cognos <strong>Planning</strong>. It could also be a traditional data mart or an extension to your existing data<br />

warehouse. Dimensionally-aware ETL tools such as IBM Cognos Data Manager can also be used<br />

to ensure that a single version of data runs through all your IBM Cognos <strong>Planning</strong> and business<br />

intelligence applications.<br />

If you report directly from IBM Cognos <strong>Planning</strong> tables, be aware of the following:<br />

Scenario and Version Dimensions<br />

IBM Cognos 8 is usually set up to automatically aggregate data around grouped items. If your<br />

report does not contain all the dimensions from a fact table, the data for the unspecified<br />

dimensions is aggregated.<br />

Normally, this is desirable behavior, but the Scenario and Version dimensions that are often used<br />

in planning applications are not suited for aggregation. One technique to handle this is to set up a<br />

mandatory filter on your cube tables in Framework Manager, forcing the reporting environment<br />

to either prompt for values whenever the fact table is used, or to have separate filtered query subjects<br />

for each version.<br />
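One common way to see why such a filter matters: summing a fact table that contains several versions silently adds Budget and Forecast together. The sketch below, using invented table and member names (et_expenses, versionid, Budget), illustrates the effect of a mandatory version filter; it is an illustration of the technique, not output from a real publish.<br />

```python
import sqlite3

# Invented publish-style fact table; all names and values are illustrative only.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE et_expenses (versionid TEXT, costcenter TEXT, float_amount REAL)")
con.executemany(
    "INSERT INTO et_expenses VALUES (?, ?, ?)",
    [("Budget", "CC100", 100.0), ("Forecast", "CC100", 120.0),
     ("Budget", "CC200", 50.0), ("Forecast", "CC200", 55.0)],
)

# Without a version filter, the aggregation adds Budget and Forecast together.
unfiltered = con.execute("SELECT SUM(float_amount) FROM et_expenses").fetchone()[0]

# A mandatory filter (the equivalent of a Framework Manager mandatory filter
# or prompt) keeps each scenario separate.
budget_only = con.execute(
    "SELECT SUM(float_amount) FROM et_expenses WHERE versionid = ?", ("Budget",)
).fetchone()[0]
print(unfiltered, budget_only)  # 325.0 150.0
```

The same pattern applies whether the filter is a report prompt or a separate filtered query subject per version.<br />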


Precalculated Summaries<br />

Be aware of precalculated summary levels in the published tables when using the Table-only publish<br />

layout. You may find that they complicate your data model. You can disable them by clearing the<br />

Include Roll Ups publish option.<br />

If you do not do this, then the data for precalculated summary levels is published into the same<br />

tables as the detail items. If you are using item tables (named it_D-List_name and containing an<br />

unstructured flat list of all items in the hierarchy) this is acceptable. If not, you may have reporting<br />

issues as your queries need dimensional context in order to avoid double counting.<br />

Note also that the publish takes longer to run (more data points to write). If you are not using the<br />

item tables then the reporting environment could confuse users because there are separate hierarchy<br />

table aliases for each level in Framework Manager.<br />

Model Changes that Impact the Publish Tables<br />


IBM Cognos <strong>Planning</strong> models are flexible and change to map the changes in the business. Changes<br />

can affect reports that are run off planning data. The following tables describe the impact that<br />

various changes to the <strong>Planning</strong> model have on the publish tables in the datastore.<br />

Changes to D-Lists that are used as D-List formatted items result in a fact table name change.<br />

Change to dimension: Add items<br />

● D-List (not dimension for publish): None, as long as the number of levels in the hierarchy remains the same.<br />
● Dimension for publish: New columns are added. Existing SQL still works.<br />

Change to dimension: Delete items<br />

● D-List (not dimension for publish): None, as long as the number of levels in the hierarchy remains the same.<br />
● Dimension for publish: Columns are deleted. Processing referring to these columns must be modified.<br />

Change to dimension: Rename items<br />

● D-List (not dimension for publish): None, unless name filters are used in the BI application.<br />
● Dimension for publish: D-List formatted items are stored in the fact columns as text rather than as a foreign key. As a result, text exported from previously published data may not match this text. A full publish resets the text in the publish tables, but review external datastores where these items have not been normalized.<br />


Change to dimension: Add hierarchy levels<br />

● D-List (not dimension for publish): New columns are created in dimension tables. Existing reports do not fail, but level naming may no longer be correct.<br />
● Dimension for publish: None.<br />

Change to dimension: Delete hierarchy levels<br />

● D-List (not dimension for publish): Columns are deleted in dimension tables.<br />
● Dimension for publish: None.<br />

Change to dimension: Reorder items<br />

● D-List (not dimension for publish): None.<br />
● Dimension for publish: Datastore columns are reordered, but SQL still works.<br />

Change to dimension: Refresh items from the datastore<br />

● D-List (not dimension for publish): None.<br />
● Dimension for publish: SQL still works.<br />

Change to dimension: Rename the D-List<br />

● D-List (not dimension for publish): The dimension table name changes.<br />
● Dimension for publish: None, as long as the D-List is not used in a D-Cube where it is not the dimension for publish.<br />

Change to D-Cube: Reorder dimension<br />

● None. The column sequence in the datastore may change, but this does not impact reports.<br />

Change to D-Cube: Add dimension<br />

● Assuming that the new dimension is not the dimension for publish, data for all items in the new D-List is automatically summed if no action is taken. For most lists this is desirable, but care needs to be taken if the dimension contains scenarios or versions.<br />

Change to D-Cube: Delete dimension<br />

● Links to the dimension table are removed from the fact table. Reports referring to items in that dimension are affected.<br />

Data Dimensions for Publish<br />

Setting data dimensions for publish can reduce the volume of published data considerably.<br />

In both layouts, for each selected D-Cube, you can choose a D-List that is designated as the<br />

dimension for publish. A separate column is created for each data item in the dimension. Selecting<br />

a dimension speeds the publishing process because fewer rows are written to the target database.<br />

Candidate dimensions for publish typically contain formatted D-List items or items by which the<br />

business tracks or measures performance. Such items are often numeric.<br />


For a View layout, set dimensions for publish in Development, Application Maintenance, Dimensions<br />

for Publish. Setting a data dimension is optional only for View layout, and changes to dimensions<br />

for publish apply only after you have run Go to Production. If the dimension that is used as a data<br />

dimension is removed in IBM Cognos 8 <strong>Planning</strong> - Analyst and the application is synchronized, the<br />

synchronization process handles this.<br />

It is mandatory to select a data dimension for publish for the Table-only layout. If you do not select<br />

one, a default dimension is used. This is a dimension that has formats defined. If you have more<br />

than one dimension with formats, ensure you select the one you require. If you plan to use<br />

<strong>Contributor</strong> data as a source for IBM Cognos 8 Business Intelligence Studios, the dimension you select for<br />

a cube is the one used as the measure dimension in IBM Cognos 8. The PPDS driver also uses a<br />

dimension for publish. If one is set on the cube, it is used as the measure dimension in IBM Cognos 8.<br />

We recommend that each item of the selected dimension contain uniform data. This means that for<br />

every row, the data is of the same type: numeric, text, or date/time.<br />

Handling Nonuniform Data in Table-only Publish<br />

Slicing the D-Cube along the dimension for publish may result in nonuniform data. For example,<br />

cell data along any item of the dimension for publish may be of mixed types. Because of this, rows<br />

of data for an item may be of different types.<br />

In the table-only layout, where nonuniform data exists and must be preserved, selecting Create<br />

columns with data types based on the "dimensions for publish" automatically creates enough<br />

columns so that no data is excluded. However, if you manually choose the columns to create, only<br />

the data in the format selected is published. For example, selecting the numeric and date/time options<br />

guarantees that only numeric and date/time data are written to the corresponding numeric and<br />

date/time columns; text is excluded. As a result, if the first row of an item is a numeric value, it is<br />

stored in the corresponding numeric column. The remaining data type columns for that item are<br />

populated with null values.<br />

In the view layout, data type uniformity is handled by storing all values in text columns. An<br />

associated fact view (fv) is created using the sum hierarchy to view only numerical information.<br />
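A minimal sketch of what such typed columns can look like in practice, assuming an invented export table with one float, one text, and one date column per item: each cell is written to the column matching its type, and the remaining typed columns for that row hold nulls.<br />

```python
import sqlite3

# Invented table-only export table with one column per selected data type.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE et_cube (elistitem TEXT, float_value REAL, text_value TEXT, date_value TEXT)"
)
# Each row stores its cell value in the column that matches the cell's type;
# the other typed columns for that row are left NULL.
con.executemany(
    "INSERT INTO et_cube VALUES (?, ?, ?, ?)",
    [("North", 42.5, None, None),
     ("South", None, "on hold", None),
     ("East", None, None, "2008-06-30")],
)
# Reporting on the numeric column alone simply skips the non-numeric rows.
numeric_rows = con.execute(
    "SELECT elistitem FROM et_cube WHERE float_value IS NOT NULL"
).fetchall()
print(numeric_rows)  # [('North',)]
```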

Selecting a Dimension for Publish for Reporting<br />


It is important to carefully consider which dimension for publish you select when publishing data<br />

to be reported on. The dimension for publish is the D-List whose items become columns in the fact<br />

table. That is, instead of becoming an ID which links to a hierarchy table, the items in the selected<br />

D-List are converted to actual fact table columns.<br />

Why the Choice of Dimension for Publish is Important<br />

If you are reporting using a non-OLAP reporting tool, reporting is performed using SQL behind<br />

the scenes. Data is reported in columns, which are sorted, grouped, and summarized in the rows.<br />

This means that you can perform actions such as inter-column calculations and independent<br />

formatting in the columns, but the rows can only be summarized. You can potentially build SQL<br />

reports with intra-row calculations, but they take longer to build and cost more to maintain.<br />

Note that rows and columns here are SQL terms. Columns and rows can be switched around within<br />

a report, and indeed this is often the case in financial reports.


Selecting Your Dimensions for Publish<br />

The primary calculation D-List is often used as the dimension for publish. However, there are<br />

situations where other D-Lists (such as Time or Versions) are more suitable.<br />

Your choice of dimension for publish is driven by a number of factors, and the <strong>Planning</strong> and BI<br />

designers should work together to select the appropriate one for reporting. The <strong>Planning</strong> designer<br />

may find it easier to build another cube than to report from an existing cube.<br />

Carefully consider which dimension for publish to use, because if you change it after you have<br />

started to build reports, you may need to rework existing reports.<br />

The following D-List attributes identify a likely dimension for publishing:<br />

● It contains a combination of text, numeric and date items (mixed data types).<br />

● It contains numeric items with different display formats such as ##% and #,###.##.<br />

● Your reports need to do additional calculations between items in the D-List.<br />

● You need to treat some of the D-List fields separately for reporting purposes.<br />

● The dimension for publish impacts the time the Publish takes to run. Even though there are<br />

fewer columns to create, more rows are written to the datastore, and this takes time to write.<br />

Tip: In some circumstances you may not want a dimension for publish. In this case your publish<br />

table has one row for every combination of dimension and you would leave all the processing<br />

and formatting intelligence to the reporting tool. Using the Table-only layout, you must select<br />

a dimension for publish, so to achieve equivalent functionality, add a D-List containing one<br />

item to the cube, and use this D-List as the dimension for publish.<br />

The Table-Only Publish Layout<br />

Using the table-only publish layout, you can<br />

● generate data columns in their native order, which preserves the original order when reporting,<br />

as when you publish to a view layout<br />

● publish detail plan data<br />

● select whether to prefix the dimension for publish column names with their data type to avoid<br />

reserved name conflicts<br />

When using the Generate Framework Manager Model Admin extension (p. 306), the table-only<br />

publish layout must be used.<br />

The following types of tables are created when you publish using the table-only layout.<br />

● Attached Documents: contains metadata about the attached documents. Prefix or name: Ad_ for cell attached documents; documentobject for tab (cube) and model attached documents.<br />

● Items (p. 267): describes the D-List items. Prefix: it_<br />

● Hierarchy (p. 267): contains the hierarchy information derived from the D-List, which is published to two associated tables. Prefixes: sy_ for the simple hierarchy; cy_ for the calculated hierarchy.<br />

● Export (p. 270): contains published D-Cube data. Prefix: et_<br />

● Annotation (p. 271): contains annotations, if the option to publish annotations is selected. Prefix or name: an_ for cell and audit annotations; annotationobject for tab (cube) and model annotations.<br />

● Metadata (p. 274): contains metadata about the publish tables. Names: P_APPLICATIONCOLUMN, P_APPCOLUMNTYPE, P_APPOBJECTTYPE, P_APPLICATIONOBJECT<br />

● Common (p. 276): contains tables used to track when major events occurred in the publish container. Names: P_ADMINEVENT, P_ADMINHISTORY, P_CONTAINEROPTION<br />

● Job (p. 276): contains tables with information relating to jobs. Names: P_JOB, P_JOBITEM, P_JOBITEMSTATETYPE, P_JOBSTATETYPE, P_JOBTASK<br />

● Object locking (p. 277): a table used to lock objects in the system when they are being processed. Name: P_OBJECTLOCK<br />

● Publish parameter: contains state information related to table-only publish. Name: publishparameters<br />

Database Object Names<br />

Database object names are derived from the <strong>Planning</strong> object names. The maximum lengths of table and column names<br />

are as follows.<br />


● Column names: 128 (MS SQL Server); 30 (IBM DB2 UDB); 30 (Oracle)<br />
● Table names: 128 (MS SQL Server); 128 (IBM DB2 UDB); 30 (Oracle)<br />

Names cannot begin with a number or underscore (_), and can include the following characters:<br />

● a through z<br />

● 0 through 9<br />

● _ (underscore)<br />
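These rules can be sketched as a small naming helper. This is an illustration of the constraints listed above, not the actual algorithm the <strong>Administration</strong> Console uses to derive names.<br />

```python
import re

def to_datastore_name(planning_name: str, max_len: int = 30) -> str:
    """Illustrative sketch of the naming rules described above; not the
    actual Administration Console algorithm."""
    # Lowercase, and replace disallowed characters with underscores.
    name = re.sub(r"[^a-z0-9_]", "_", planning_name.lower())
    # Names cannot begin with a number or underscore.
    name = name.lstrip("_0123456789") or "t"
    # Respect the tightest platform limit (30 characters, as on Oracle).
    return name[:max_len]

print(to_datastore_name("2009 Profit & Loss"))  # profit___loss
```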

Items Tables for the Table-only Layout<br />

One items table is created for each D-List. It contains one row per item. The name of the table is<br />

generated from that of the D-List and the prefix it_.<br />

The items tables have the following columns.<br />

● itemid: unique identifier for the item<br />
● itemname: name of the item<br />
● displayname: display name of the item<br />
● disporder: display order specified in Analyst, which is zero-based<br />
● itemiid: D-List integer identifier for the item, which is used as the primary key<br />
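As a sketch, a reporting query against such an items table can sort on disporder to reproduce the Analyst order. The table name and rows below are invented for the example.<br />

```python
import sqlite3

# A minimal items table for a hypothetical "Products" D-List, following the
# column layout described above; the data is invented.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE it_products (itemid TEXT, itemname TEXT, displayname TEXT,"
    " disporder INTEGER, itemiid INTEGER PRIMARY KEY)"
)
con.executemany(
    "INSERT INTO it_products VALUES (?, ?, ?, ?, ?)",
    [("g1", "Widgets", "Widgets", 1, 101),
     ("g2", "Gadgets", "Gadgets", 0, 102)],
)
# disporder is zero-based, so sorting on it reproduces the Analyst order.
names = [r[0] for r in con.execute(
    "SELECT displayname FROM it_products ORDER BY disporder"
)]
print(names)  # ['Gadgets', 'Widgets']
```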

Hierarchy Tables for the Table-only Layout<br />

The complete hierarchies are published to the cy_ tables, while the simple summary hierarchies are<br />

available in the sy_ tables.<br />

These tables all have the same format. They contain the following columns for each level of the<br />

hierarchy.<br />

● levelLevelNumber_guid: globally unique identifier of the item<br />
● levelLevelNumber_iid: D-List integer that identifies the item<br />
● levelLevelNumber_name: item name<br />
● levelLevelNumber_displayname: item display name<br />
● levelLevelNumber_order: item order in the hierarchy<br />

Simple hierarchy tables are created by the publish table-only layout. They are intended to be used<br />

when there are simple parent-child relationships between D-List items that can be summed. The<br />

purpose of this is to allow a reporting tool to automatically generate summaries for each hierarchy<br />

level, or for use with applications that do not require precalculated data, such as a staging source<br />

for a data warehouse.<br />
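The intended use can be sketched with plain SQL: join leaf-level fact rows to a simple hierarchy table and let GROUP BY produce the summaries. All table names, levels, and values below are invented for illustration.<br />

```python
import sqlite3

# Sketch: rolling up leaf-level fact data through a simple hierarchy (sy_)
# table with plain SQL; names and values are invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sy_region (level1_name TEXT, level2_name TEXT)")
con.execute("CREATE TABLE et_sales (region TEXT, float_amount REAL)")
con.executemany("INSERT INTO sy_region VALUES (?, ?)",
                [("Europe", "UK"), ("Europe", "France"), ("Americas", "Canada")])
con.executemany("INSERT INTO et_sales VALUES (?, ?)",
                [("UK", 10.0), ("France", 20.0), ("Canada", 5.0)])
# The reporting tool (or this query) sums the leaves up to the parent level.
rollup = dict(con.execute(
    "SELECT h.level1_name, SUM(f.float_amount)"
    " FROM et_sales f JOIN sy_region h ON f.region = h.level2_name"
    " GROUP BY h.level1_name"
))
print(rollup)
```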

In the following examples, D-List items are represented by letters, and the relationships between<br />

items are drawn as lines.<br />

Parent D-List items are calculated from child D-List item dependencies. Leaf D-List items do not<br />

have child D-List item dependencies.<br />

All D-List items have their values shown in parentheses and, in addition, leaf D-List item<br />

codes are shown in curly braces.<br />

The leaf D-List item codes have the following meanings.<br />

● 0: direct child of a simple sum D-List item<br />
● 1: the leaf has multiple parents<br />
● 2: the leaf item is part of a sub-hierarchy that has been moved to the root (no parent)<br />
● 3: the leaf item is an orphan<br />

Example 1 - Simple Summaries<br />

The left pane is an example of simple hierarchies with values. The right pane is an example of simple<br />

hierarchies with values and leaf D-List item codes.<br />


Example 2 - Leaf D-List Item with Multiple Parents<br />

In the left pane, [E] has more than one parent, so parentage is assigned to the first parent in the IID<br />

order. In the right pane, [D] becomes a leaf D-List item, and [F] becomes orphaned and is moved<br />

to the root.<br />

Example 3 - Non-Simple Summaries<br />

In the left pane, [P] is the product of [S] and [T]. Leaf D-List items of non-simple summaries get<br />

moved to the root. In the right pane, [P] becomes a leaf D-List item, and [S] and [T] are orphaned<br />

and moved to the root.<br />


Example 4 - Sub-Hierarchy of Non-Simple Summary<br />

In the left pane, [B] is the product of [C] and [E]. [C] has its own simple summary hierarchy. Because<br />

non-simple sums are not included in the hierarchy, in the right pane, [B] becomes a leaf, [E] and<br />

[C] become orphaned and moved to the root, and [C] keeps its sub-hierarchy because it is a simple<br />

sum.<br />

Export Tables For the Table-only Layout<br />


Cell data for the selected cubes is published to the export tables.<br />

If you select the Include Roll Ups option, the export tables contain all the data, including calculated<br />

data.<br />

If you do not select this item, the export tables contain only non-calculated fact data.<br />

Users who report against published data that contains only fact data use the reporting tool to<br />

aggregate the calculated items when grouping with the hierarchical dimensions.<br />

You can control how the export tables (prefix et_) are generated as follows.<br />

● Publish only uniform cube data. If you select the Create Columns With Data Types Based on the Dimension for Publish option, the data type of each item of the dimension for publish is used for the columns of the export tables. If individual cell types differ from that of the corresponding columns, the corresponding cell data is not published and an informative message appears.<br />

● Select only data of the specified types.<br />

When more than one data type is selected, multiple columns appear for each item in the export<br />

tables, one column per data type. For example, if both numeric data and dates are selected,<br />

two columns are created per item in the dimension for publish.<br />

● Include the original formatted numeric and date values, which are stored in the text column.<br />

This is useful when the original format cannot be easily reproduced in the reporting tool<br />

application.<br />

● Publish entire cubes, or publish only leaf data and let the reporting engine perform the rollups.<br />

In this way, you control the level of detail of the information to publish.


The summary hierarchy as specified in the sy_ tables must be used to perform the rollups. Leaf cells are those that correspond to leaf items of the simple summary hierarchies.<br />

Data Types Used to Publish Data<br />

The following data types are used for publishing data.<br />

Data type | MS SQL Server | IBM DB2 UDB | Oracle<br />
TEXT | VARCHAR(8000) | CLOB | VARCHAR2(4000)<br />
DATE | datetime | date | timestamp<br />
DOUBLE | float | float | float<br />
INTEGER | integer | int | NUMBER(10)<br />

The prefixes text_, date_, and float_ are used to identify the data types of columns in tables, and<br />

the suffix _[count] is used to guarantee name uniqueness.<br />
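A sketch of how type-prefixed, uniquely suffixed column names could be derived from these conventions. The exact generator used by Contributor is not documented here; the function and its inputs are illustrative only.

```python
# Illustrative sketch: build one column per (item, data type), using a
# type prefix (text_, date_, float_) and a _[count] suffix so that
# repeated names stay unique, per the conventions described above.

def make_column_names(items, data_types):
    """Return prefixed, uniquely suffixed column names, e.g. float_Revenue_1."""
    names = []
    used = {}
    for item in items:
        for dtype in data_types:            # e.g. text, date, float
            base = f"{dtype}_{item}"
            count = used.get(base, 0) + 1   # _[count] keeps names unique
            used[base] = count
            names.append(f"{base}_{count}")
    return names

print(make_column_names(["Revenue", "Revenue"], ["float"]))
# ['float_Revenue_1', 'float_Revenue_2']
```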

Annotations Tables for the Table-only Layout<br />

The an_cubename Table<br />

You can choose to publish user and audit annotations.<br />

Cell and audit annotations are published to the an_cubename table.<br />

Tab and model annotations are published in the annotationobject table.<br />

This table contains cell and audit annotations.<br />

The columns of the an_cubename table are as follows:<br />

HierarchyDimensionName: The unique identifier (p. 267) of the e.List items for the coordinates of the cell annotations.<br />
Dimension_DimensionName: The unique identifier of the D-List items for the coordinates of the cell annotations.<br />
MeasureDimensionItemName_user_id: The last user who updated the annotation.<br />
DimensionItemName_date: The last date the annotation was updated.<br />
DimensionItemName_annotation: For a cell annotation, the text of the annotation. For an audit annotation, the details of the action performed; the text is prefixed with Audit annotation on … followed by the action.<br />
DimensionItemName_value: The cell value at the time of the annotation.<br />
visible: Indicates if this row can be reported on.<br />

The annotationobject Table<br />

This table contains tab and model annotations.<br />

The columns of the annotationobject table are as follows.<br />

object_id: The identifier of the cube or model being annotated.<br />
node_id: The e.List item identifier.<br />
user_id: The user id of the person who created the annotation.<br />
annotation_date: The date and time the annotation was made, stored as UTC + 00:00.<br />
annotation: The text of the annotation.<br />

Attached Document Tables for the Table-only Layout<br />

The ad_cubename Table<br />

You can choose to publish some metadata about the attached document. Cell level attached document<br />

metadata is published to the ad_cubename table. Tab and model attached document metadata is<br />

published in the documentobject table.<br />

This table contains cell attached document metadata.<br />

The columns of the ad_cubename table are as follows:<br />

HierarchyDimensionName: The unique identifier (p. 193) of the e.List items for the coordinates of the cell attached documents.<br />
Dimension_DimensionName: The unique identifier of the D-List items for the coordinates of the cell attached documents.<br />
MeasureDimensionItemName_user_id: The last user who updated the attachment of the document.<br />
DimensionItemName_date: The last date the attachment of the document was updated.<br />
DimensionItemName_filename: The file name of the document that was attached.<br />
DimensionItemName_filesize: The file size at the time the document was attached.<br />
DimensionItemName_comment: A comment that was entered at the time the document was attached.<br />
DimensionItemName_value: The cell value at the time the document was attached.<br />
visible: Indicates if this row can be reported on.<br />

The documentobject Table<br />

This table contains tab and model metadata about attached documents.<br />

The columns of the documentobject table are as follows.<br />

node_id: The e.List item identifier.<br />
object_id: The identifier of the cube or model to which the document is attached.<br />
user_id: The user id of the person who attached the document.<br />
document_date: The date and time the document was attached.<br />
document_name: The file name of the document that was attached.<br />
document_size: The file size at the time the document was attached.<br />
document_comment: A comment that was entered at the time the document was attached.<br />
visible: Indicates if this row can be reported on.<br />


Metadata Tables<br />

Metadata about the publish tables is maintained in several tables.<br />

The P_APPLICATIONOBJECT Table<br />

The description of each database object created during a publish operation is maintained in the P_APPLICATIONOBJECT table.<br />

The columns of the P_APPLICATIONOBJECT table are as follows:<br />

objectname: Name of the object.<br />
displayname: Display name of the associated Planning object.<br />
objectid: A globally unique reference (GUID) for the object.<br />
objecttypeid: The type of object, for example, EXPORT_TABLE or DIMENSION_SIMP_HIER.<br />
datastoretypeid: Describes the datastore type: Table.<br />
objectversion: An internal version number used for debugging.<br />
lastsaved: Detects if the published model is out of date.<br />
libraryid: Detects if the published model is out of date.<br />

The P_APPLICATIONCOLUMN Table<br />

The columns of the P_APPLICATIONCOLUMN table are as follows.<br />

objectname: Name of the object.<br />
columnname: Name of the column.<br />
displayname: Display name of the associated Planning object.<br />
columnid: A globally unique reference (GUID) for the object.<br />
objecttypeid: Type of Planning object, such as EXPORT_TABLE, DIMENSION_ITEMS, or DIMENSION_SIMP_HIER.<br />
columntypeid: -<br />
columnorder: The order in which the column appears.<br />
logicaldatatype: The type of data, such as epGUID or epTextID.<br />

The P_APPCOLUMNTYPE Table<br />

The types of tables that can exist in Contributor.<br />

The columns are as follows:<br />

objecttypeid: The object type, such as a FACT_VIEW.<br />
columntypeid: -<br />
description: A description of the object type.<br />

The P_APPOBJECTTYPE Table<br />

The types of application object that exist.<br />

The columns are as follows:<br />

objecttypeid: The object type, such as ANNOTATION_OBJECT.<br />
description: A description of the object type.<br />

The dimensionformats Table<br />

The dimensionformats table contains formatting information for the items of the dimension for publish that is compatible with IBM Cognos 8 Business Intelligence.<br />

The columns of the dimensionformats table are as follows.<br />

dimensionid: Globally unique identifier of the dimension for publish.<br />
itemguid: Globally unique identifier of the item of the dimension for publish.<br />
formattype: One of percent, number, or date.<br />
negativesignsymbol: String indicating how negative values must be reported.<br />
noofdecimalplaces: Number of decimal places for numerical values.<br />
scalingfactor: Integer for the scaling factor of numerical values.<br />
zerovaluechars: Characters to use for zero or blank values.<br />
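A reporting layer could apply these fields when rendering raw published values. The following is a hedged sketch only: the exact semantics of each field (for example, that scalingfactor divides by a power of ten) are assumptions based on the column descriptions above, not the documented algorithm.

```python
# Illustrative sketch: apply dimensionformats-style fields to a raw
# numeric cell value. Field semantics here are assumptions for the
# purpose of illustration.

def format_value(value, noofdecimalplaces=2, scalingfactor=0,
                 negativesignsymbol="-", zerovaluechars="0"):
    """Scale, round, and decorate a numeric cell value."""
    if value == 0:
        return zerovaluechars                 # characters shown for zero cells
    scaled = value / (10 ** scalingfactor)    # assumed: 3 -> show thousands
    text = f"{abs(scaled):.{noofdecimalplaces}f}"
    if scaled < 0:
        return f"{negativesignsymbol}{text}"  # e.g. "-" or "("
    return text

print(format_value(-1234567, noofdecimalplaces=1, scalingfactor=3,
                   negativesignsymbol="("))  # (1234.6
print(format_value(0, zerovaluechars="-"))   # -
```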

Common Tables<br />

Common tables are created so that you can track the history of events in the publish container.<br />

The P_ADMINHISTORY table stores information about when major events occurred to the publish container.<br />

The P_ADMINEVENTS table contains the IDs and descriptions of the event types used in the P_ADMINHISTORY table.<br />

The P_CONTAINEROPTION table is used for Oracle and DB2 to store tablespace information for blob, data, and index.<br />

Job Tables<br />

The following tables are created to support jobs (p. 49).<br />

P_JOB: Information about the jobs that are running or ran in the application. This information is used in the Job Management window.<br />
P_JOBITEM: Each job item is represented by a row in the jobitem table, together with its state. If a problem occurred while running the job item, descriptive text is stored in the failurenote column and is appended to the failurenote column for the job.<br />
P_JOBITEMSTATETYPE: Job item status types: failed, ready, running, succeeded.<br />
P_JOBSTATETYPE: Job status types: canceled, complete, creating, queued, ready, running.<br />
P_JOBTASK: Where and when the job items ran and the security context used.<br />
P_JOBTYPE: Job types and their implementation program IDs.<br />

Parameters and failurenotes in Job tables are stored as XML LOBs.


The P_OBJECTLOCK Table<br />

The P_OBJECTLOCK table supports macros, administration links, and the export queue. It locks<br />

objects in the system when they are being processed and contains information about the objects<br />

being worked on.<br />

Create a Table-only Publish Layout<br />

Before you publish, you must ensure the following:<br />

● you have appropriate access rights (p. 260)<br />

● the Go to Production process has been run (p. 243)<br />

Note: You can increase publish performance by managing or dropping indexes during the publish<br />

process. You can set this option in Development, Application Maintenance, Admin Options. For<br />

more information, see "Admin Options" (p. 79).<br />

Steps<br />

1. In the application's tree, click Production, Publish, Table-only Layout.<br />

2. Check the cubes you want to publish data from.<br />

● The Dimension column indicates the data dimension for publish that is selected.<br />

● The Annotation Rows column shows the number of annotation rows for a cube when you<br />

click Display row counts.<br />

● The Export Rows column shows the number of rows that are published when you click<br />

Display row counts.<br />

3. Click the e.List Items tab.<br />

You can select or clear individual items, or use the buttons at the top of the table.<br />

This step is not required for assumption cubes because assumption cubes do not contain e.List<br />

items.<br />

4. To set the Publish options and configure the Publish datastore connection, click the Options<br />

tab (p. 278).<br />

5. Click Publish.<br />

6. If you are asked if you want to create a publish container, click OK.<br />

7. Select the job server or job server cluster to monitor the publish container and click Close.<br />

You need the rights to add an application to the job server or job server cluster.<br />

A reporting publish job is queued. You can monitor the progress of the job. For more information,<br />

see "Jobs" (p. 49).<br />


Options for Table-only Publish Layout<br />


In the Publish Options tab, you can set the publish options and configure the publish datastore<br />

connection.<br />

Creating a New Publish Container: The first time you attempt to publish data, you can either create the default publish container by clicking Publish, or create a new publish container (p. 285).<br />
Configuring the Publish Datastore Connection: To configure the publish datastore connection, click the Configure button (p. 286).<br />
Create columns with data types based on the ’dimension for publish’: Uses the item types from the dimension for publish as the table columns.<br />
Only create the following columns: Lets you manually select the data types that are part of the publish process for each measure. You can choose to publish Numeric, Text, and Date columns. Within the Text column, you can also choose whether to include formatted numeric and date values.<br />
Include Roll Ups: Selecting this check box includes all items, including calculated items. Clearing this option publishes only leaf items, and therefore fewer rows. You can recreate the calculations in your reporting tools by linking the et_ and sy_ tables.<br />
Include Zero or Blank Values: Clearing this check box means that empty cells are not populated with zeros or blanks. This can speed up the process of publishing data substantially, depending on the number of zero or blank cells.<br />
Prefix column names with data types: Select this option if you want the column names to be prefixed with the data type to avoid reserved name conflicts.<br />
Include User Annotations: Selecting this check box publishes cell level user annotations in a table named an_cubename.<br />
Include Audit Annotations: Selecting this check box publishes audit annotations in a table named an_cubename, in the column annotation_is_edit.<br />
Include Attached Documents: Selecting this check box includes information about attached documents, such as the file name, location, and file size, which is published with the data.<br />

Create an Incremental Publish<br />

If you are publishing data using the Table-only layout, the Incremental Publish feature can be used to publish only those e.List items that have changed since the last time you published. Incremental Publish uses the same infrastructure as the Table-only layout but tracks which items were altered since the last publish and republishes only those items. You can create macros that run an incremental publish at scheduled intervals; see "Publish - Incremental Publish" (p. 216). This provides a nearly real-time publish. Another benefit of scheduling incremental publishes is that you can publish changed data soon after plans are saved or submitted. It also reduces the need for frequently scheduled full publishes, potentially saving time and resources.<br />

If your publish selection contains more than one cube, but values change in only one cube, the changed e.List items for all the cubes are republished. An incremental publish is by definition a changes-only publish and therefore requires that a publish schema is created, either by running a full publish, selecting the cubes and e.List items that you want to publish, or by generating and running publish scripts. When you run a Go to Production process, incremental publishes that are changes-only are suspended. Model changes that result in changes to the publish schema may require you to do a full publish of all the selected cubes and e.List items.<br />
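The change-tracking idea behind an incremental publish can be pictured as comparing item modification times against the last publish time. This sketch is purely illustrative; the timestamps, item names, and selection logic are invented and do not reflect Contributor's internal tracking tables.

```python
from datetime import datetime

# Hypothetical illustration: select only the e.List items that were
# modified after the last publish, which is what an incremental
# publish republishes.

last_publish = datetime(2008, 6, 1, 12, 0)

elist_changes = {
    "Europe":   datetime(2008, 6, 1, 9, 0),   # unchanged since publish
    "Americas": datetime(2008, 6, 2, 8, 30),  # edited after publish
    "Asia":     datetime(2008, 6, 3, 14, 5),  # edited after publish
}

def items_to_republish(changes, since):
    """Return the e.List items altered after the last publish."""
    return sorted(item for item, ts in changes.items() if ts > since)

print(items_to_republish(elist_changes, last_publish))
# ['Americas', 'Asia']
```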

Steps<br />

1. In the application's tree, click Production, Publish, Incremental Publish.<br />

2. To configure the publish container, click Configure.<br />

The incremental updates will be applied to that container.<br />

3. To publish only submitted changes, select Include only submitted items.<br />

Note: If you use this option without changing data in an e.List, the e.Lists without changes are<br />

not included in the publish.<br />

4. Click Publish.<br />

A message indicates whether an Incremental Publish job was initiated or no changes were detected.<br />

The View Publish Layout<br />

The view publish layout as supported in <strong>Contributor</strong> and Analyst version 7.2 is compatible with<br />

previous <strong>Planning</strong> data solutions. It is intended for backwards compatibility only.<br />


We recommend that non-IBM Cognos applications currently dependent on the view publish layout be migrated to the new table-only publish layout because of its improvements in publish performance, data storage efficiency, and incorporation of best practices.<br />

The following types of publish tables are created when you publish using the view layout:<br />

Table type | Description | Prefix or Name<br />
Items (p. 281) | Describes the D-List items. | it_<br />
Hierarchy (p. 281) | Used by reporting tools. Records the depth of each item in the dimension hierarchy and display information. | hy_, cy_ (calculation hierarchy tables)<br />
Export (p. 282) | Contains published D-Cube data. | et_<br />
Annotation (p. 282) | Contains annotations, if the option to publish annotations is selected. | an_ for cell and audit annotations; annotationobject for tab (cube) and model annotations<br />
Metadata (p. 274) | Contains metadata about the tables. | P_APPLICATIONCOLUMN, P_APPCOLUMNTYPE, P_APPOBJECTTYPE, P_APPLICATIONOBJECT, annotationobject<br />
Common (p. 276) | Contains tables used to track when major events occurred in the publish container. | P_ADMINEVENT, P_ADMINHISTORY, P_CONTAINEROPTION<br />
Job (p. 276) | Contains tables with information relating to jobs. | P_JOB, P_JOBITEM, P_JOBITEMSTATETYPE, P_JOBSTATETYPE, P_JOBTASK, P_OBJECTLOCK<br />

Database Object Names<br />

Database object names are limited to 18 lowercase characters, and are derived from the IBM<br />

Cognos 8 <strong>Planning</strong> object names.
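The 18-character lowercase limit means a derived name is typically a prefixed, truncated form of the Planning object name. The following is a hypothetical sketch of such a derivation; the exact rules Contributor uses (collision handling, character substitution) are not documented here.

```python
# Hypothetical sketch: derive a database object name from a Planning
# object name under the stated constraints (type prefix, lowercase,
# 18 characters maximum).

def derive_object_name(planning_name, prefix):
    """Lowercase, strip non-alphanumerics, prefix, and cap at 18 chars."""
    cleaned = "".join(ch for ch in planning_name.lower() if ch.isalnum())
    return (prefix + cleaned)[:18]

print(derive_object_name("Profit & Loss Summary", "et_"))
# et_profitlosssumma
```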


Items Tables for the View Layout<br />

One items table is created for each D-List. It contains one row per item. The name of the table is<br />

generated from that of the D-List and the prefix it_.<br />

The items tables have the following columns:<br />

itemid: Unique identifier for the item.<br />
dimensionid: Unique identifier for the D-List.<br />
itemname: Name of the item.<br />
displayname: Display name of the item.<br />
disporder: Display order specified in Analyst, which is zero-based.<br />

Hierarchy Tables for the View Layout<br />

There is no model construct for specifying items hierarchies. Instead, hierarchies are derived from<br />

user specified equations.<br />

Two types of hierarchies are currently supported: complete hierarchies and simple summary hierarchies.<br />

Complete hierarchies are used to produce reports on the entire contents of cubes. Complete hier-<br />

archies are used to organize cube data and are not used to perform rollups and calculations in the<br />

reporting engine. The rules that govern the generation of complete hierarchies in the cy_ tables are<br />

as follows:<br />

● The parent of a given item is the first simple sum that references the item.<br />

● If this sum does not exist, it is the first non-sum calculation that references the item.<br />

● If neither exists, the item is a top-level item.<br />

Simple summary hierarchies are used when only detail items are published and rollups are performed<br />

from the reporting engine. The rules that govern the generation of these hierarchies are as follows:<br />

● The parent of a given item is the first simple sum that references it.<br />

● If there are multiple candidates for the parent of an item, it is assigned to the first parent in iid order and the other candidate parents are considered to be detail items in the hierarchy.<br />

● In the case where a parent cannot be identified that way and the item is not a simple sum, it is<br />

considered to be a root item.<br />

Simple summary hierarchies are not necessarily complete, because not all items in a D-List are necessarily part of the hierarchy.<br />


The starting point for the production of these hierarchies is the graph of item dependencies produced when equations are parsed. This graph specifies all parent/child relationships between items. Because the simple summary hierarchy is limited to simple sums, sub-hierarchies can be detached from the main hierarchy and moved to the top.<br />
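The parent-assignment rules above can be sketched in a few lines. The representation of the dependency graph here is invented for illustration (a list of simple sums in iid order, each with the items it references); it is not the actual parsed-equation structure.

```python
# Sketch of the simple summary hierarchy rules described above:
# the parent of an item is the first simple sum (in iid order) that
# references it; later candidate parents treat the item as detail.

simple_sums = [
    ("Outdoor", ["Camping", "Ski"]),
    ("All",     ["Outdoor", "Golf", "Camping"]),  # also references Camping
]

def assign_parents(simple_sums):
    """Parent = first simple sum, in iid order, that references the item."""
    parent = {}
    for summary, children in simple_sums:
        for child in children:
            if child not in parent:       # first candidate wins
                parent[child] = summary
    return parent

parents = assign_parents(simple_sums)
print(parents["Camping"])  # Outdoor (first referencing sum, not All)
print(parents["Golf"])     # All
```

Items that never appear in a simple sum and are not simple sums themselves would have no entry here, which corresponds to the root items in the rules above.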

Export Tables for the View Layout<br />

Cell data for the selected cubes is published to the et_ tables, one row per cell. These tables contain the coordinates for each cell of the cube and the corresponding cell value. One column per D-List stores the item GUID along that dimension. An additional column stores the cell value (published as a blob or varchar, depending on the target DBMS). In IBM Cognos 8 Planning, a cell value can contain a date, a double, or text.<br />

Annotation Tables for the View Layout<br />

You can choose to publish user and audit annotations.<br />

Cell and audit annotations are published to the an_cubename table.<br />

Tab and model annotations are published to the annotationobject table.<br />

The an_ cubename View Layout Table<br />

The an_cubename view layout table contains cell and audit annotations.<br />

The columns of the an_cubename table are as follows:<br />

Dimension_DimensionName: The unique identifier of the D-List items for the coordinates of the cell annotations.<br />
HierarchyDimensionName: The name of the e.List.<br />
annotation_user_gu: The globally unique identifier of the last user who updated the annotation.<br />
annotation_date: The date and time the annotation was made, stored as UTC + 00:00.<br />
annotation_cell_va: The cell value at the time of the annotation.<br />
annotation_is_edit: Whether the annotation is editable (0 = no, 1 = yes).<br />

The annotationobject View Layout Table<br />

The annotationobject view layout table contains published tab and model annotations.<br />

The columns of the annotationobject table are as follows:<br />

objectguid: The globally unique identifier (GUID) of the cube or model being annotated.<br />
nodeguid: The GUID of the e.List item being annotated.<br />
user_guid: The GUID of the user who made the annotation.<br />
annotation_date: The date and time the annotation was made.<br />
annotation: The text of the annotation.<br />

Views<br />

An ev_ view is created to provide more user-friendly access to its associated export table (et_ table), which contains cube data. In this view, GUIDs are replaced by the display names associated with the D-List items, and export values are cast to varchar when published as blobs.<br />

A fact view (with the fv_ prefix) is created for each cube being published and is limited to numeric values by joining the export values from the et_ table to the items in the hy_ tables for the cube. The rules for deriving this hierarchy are explained earlier.<br />

A complete view (with the cv_ prefix) is created for each cube being published and is built by joining the export values from the et_ table to the items in the cy_ tables for the cube.<br />
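The join an ev_ view performs can be pictured as replacing each GUID coordinate in an export row with the display name from the corresponding items table. The GUIDs, row shapes, and table contents below are invented for illustration and do not match the real et_/it_ schema.

```python
# Hypothetical sketch of what an ev_ view does: replace the GUID
# coordinates of export (et_) rows with display names from the
# items (it_) tables.

it_products = {"g-101": "Camping Gear", "g-102": "Ski Gear"}
it_months   = {"g-201": "January", "g-202": "February"}

et_rows = [
    {"product": "g-101", "month": "g-201", "value": 120.0},
    {"product": "g-102", "month": "g-202", "value": 80.0},
]

def ev_view(rows, products, months):
    """Join export rows to item display names, as the ev_ view does."""
    return [
        {"product": products[r["product"]],
         "month": months[r["month"]],
         "value": r["value"]}
        for r in rows
    ]

rows = ev_view(et_rows, it_products, it_months)
print(rows[0])
# {'product': 'Camping Gear', 'month': 'January', 'value': 120.0}
```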

The following views are created in a view publish layout.<br />

fv_cubename: A view on the cell data for a cube that resolves the star schema, linking to the flattened-out hierarchy for a dimension.<br />
ev_cubename: A view on the cell data for a cube that resolves the star schema, linking to the items in a dimension.<br />
av_cubename: A view on the cell annotations table for a cube that resolves the star schema.<br />
cv_cubename: A complete view created for each cube being published.<br />

Create a View Layout<br />

Before you publish, you must ensure that<br />

● you have appropriate access rights (p. 260)<br />

● the data dimensions for publish are selected, if required (p. 263)<br />

● the e.List items to be published are selected (p. 261)<br />

● Go to Production was run (p. 243) after the data dimensions for publish were selected<br />

Note: You can increase publish performance by managing or dropping indexes during the publish<br />

process. You can set this option in Development, Application Maintenance, Admin Options. For<br />

more information, see "Admin Options" (p. 79).<br />

Steps<br />

1. In the application tree, click Production, Publish, View Layout.<br />

2. On the Cubes tab, check the cubes you want to publish data from.<br />

● The Dimension column indicates the dimension that is selected.<br />

● The Annotation Rows column shows the number of annotation rows for a cube when you<br />

click Display row counts. Note that only cell annotations are published.<br />

● The Export Rows column shows the number of rows that are published when you click Display row counts.<br />

3. Click the e.List Items tab to select the e.List items to publish. You cannot publish until you have done this.<br />

4. Click the Options tab. This step is optional and enables you to set the Publish options and<br />

configure the Publish datastore connection. For more information, see "Options for View<br />

Layout" (p. 284).<br />

5. Click Publish.<br />

6. If you are asked if you want to create a publish container, click OK.<br />

7. Select the job server or job server cluster to monitor the publish container and click Close.<br />

You need the rights to add an application to the job server or job server cluster.<br />

A publish job is queued. You can monitor the progress of the jobs.<br />

Options for View Layout<br />

In the Publish Options window, you can set the publish options and configure the publish datastore connection.<br />

Creating a New Publish Container: The first time you attempt to publish data, you can either create the default publish container by clicking Publish, or create a new publish container (p. 285).<br />
Configure the Publish Datastore Connection: To configure the publish datastore connection, click the Configure button (p. 286).<br />
Do Not Populate Zero/Null/Empty Data: Ensures that empty cells are not populated with zeros. Selecting this option can substantially speed up the process of publishing data, depending on the number of blank cells.<br />
Publish Only Cells With Writeable Access: Selecting this check box publishes only rows that include at least one cell with write access; rows for which all cells are read-only or hidden are not included. Clearing this check box publishes all cells, including hidden cells, regardless of access levels.<br />
Use Plain Number Formats: Selecting this check box removes any numeric formatting for the purposes of export. It exports to as many decimal places as are needed, up to the limit stored on the computer. Negative numbers are prefixed by a minus sign. No thousands separators, percent signs, currency symbols, or other numeric formats that were applied on the dimension or D-Cube are used. Plain Number Format uses the decimal point (.) as the decimal separator.<br />
Remove all data before publishing new data: Selected by default, this option ensures that a consistent set of data is published. It publishes data for all the selected cubes and removes all other published data in the datastore. If this check box is cleared, existing data is left in place unless the e.List item is being republished; in that case, the existing data for that e.List item is removed and replaced with the new data.<br />
Include User Annotations: Selecting this check box publishes cell level user annotations in a table named an_cubename.<br />
Include Audit Annotations: Publishes audit annotations to a table named an_cubename, in the column annotation_is_edit.<br />

Create a Custom Publish Container<br />

Before you publish data, no publish containers exist. You can create a custom publish container.<br />

When you create a table-only publish layout (p. 277) or a view publish layout (p. 279), you can also<br />

create a default publish container using default naming conventions on the same datastore server<br />

as the application datastore.<br />


Type of Publish | Publish Container Name<br />
Table-only layout | applicationname_table<br />
View Layout | applicationname_view<br />

Steps<br />

1. In the Production application, click Publish, and either Table-only Layout or View Layout<br />

depending on the type of publish container you require.<br />

2. Click the Options tab, and then click the Configure button.<br />

3. Select the datastore server where you want the publish container to be created.<br />

4. Click Create New.<br />

5. Click the button next to the Name box.<br />

6. Complete the New publish container dialog box:<br />

● Application name<br />

Type a name for the publish datastore. We recommend that you use the name of the current<br />

application and append a suffix to it, such as _view for a view layout, or _table for a table-only<br />

layout.<br />

● Location of datastore files<br />

Enter an existing location for the datastore files on the datastore server. Required only by<br />

SQL Server applications.<br />

7. If you have an Oracle or DB2 UDB application, click Tablespace and then specify the following<br />

configuration options:<br />

● Tablespace used for data<br />

● Tablespace used for indexes<br />

● Tablespace used for blobs<br />

Custom temp tablespaces are supported for Oracle only.<br />

8. Click Create.<br />

If you are prompted to create a script, a DBA must run the script to create the publish container.<br />

If you are not prompted to create a script, the container is created immediately.<br />

The publish container must be added to a job server or job server cluster so that the publish<br />

jobs are processed.<br />

Configure the Datastore Connection<br />


You must have Modify connection document rights (p. 39) for the datastore server.


You can select and configure the publish container.<br />

Note: Tablespace settings can be configured only when creating a new publish container.<br />

Steps<br />

1. In the Production application, click Publish, and then click either Table-only Layout or View Layout<br />

depending on the type of publish container you require.<br />

2. Click the Options tab, and then click the Configure button.<br />

If you are prompted to create and generate a script, do the following:<br />

● Click OK and Cancel.<br />

● Click Generate Synchronization Script for Datastore.<br />

● Name and save the script, and pass it to the DBA to run.<br />

● After the script has run, click the Configure button.<br />

For more information on scripts, see "Publish Scripts" (p. 260).<br />

3. Click the required publish container and click Configure.<br />

4. Configure the following options.<br />

Option: Trusted Connection<br />

Action: Click to use Windows authentication for the logon method to the datastore. You do not<br />

have to specify a separate logon ID or password. This method is common for SQL Server datastores<br />

and less common, but possible, for Oracle.<br />

Option: Use this account<br />

Action: Enter the datastore account that this application will use to connect. This box is not<br />

enabled if you use a trusted connection.<br />

Option: Password<br />

Action: Type the password for the account. This box is not enabled if you use a trusted connection.<br />

Option: Preview Connection<br />

Action: Click to view a summary of the datastore server connection details.<br />

Option: Test Connection<br />

Action: Click to check the validity of the connection to the datastore server. This is mandatory.<br />

5. If you want to configure advanced settings, click Advanced, and enter the following information.<br />

Typically these settings should be left as the default. They may not be supported by all datastore<br />

configurations.<br />


Setting: Provider Driver<br />

Description: Select the appropriate driver for your datastore.<br />

Setting: Connection Prefix<br />

Description: Specify to customize the connection strings for the needs of the datastore.<br />

Setting: Connection Suffix<br />

Description: Specify to customize the connection strings for the needs of the datastore.<br />

6. Click OK.<br />

Remove Unused Publish Containers<br />

You can remove unused publish datastore containers. This does not delete the datastore container;<br />

it just removes the reference so that it is not displayed in the Select Publish Datastore Container<br />

window.<br />

Steps<br />

1. From the Tools menu, click Maintenance, Validate Publish Containers. A list of publish<br />

containers is displayed.<br />

2. Select the ones you want to remove, and click Delete.
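The remove-reference-only behavior can be sketched as follows; the registry structure is an illustrative assumption, not the console's internal storage:<br />

```python
def remove_container_reference(registry, name):
    """Sketch: removing an unused publish container only drops the
    reference shown in the Select Publish Datastore Container window;
    the underlying datastore container itself is not deleted."""
    registry["visible_containers"].discard(name)
    # registry["datastore_containers"] is intentionally left untouched.
    return registry
```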


Chapter 16: Commentary<br />

Attached documents and user annotations that are linked to a plan are grouped together and are<br />

named Commentary. The user can view an attached document by browsing the Commentary of<br />

an application.<br />

The Maintenance branch of the production application enables the user to delete user annotations,<br />

audit annotations, and attached documents from the application datastore. The user can also filter<br />

what to delete based on a date or the text that it contains.<br />

User and Audit Annotations<br />

User annotations can be associated with a cell, cube (or Tab in the Web client), and model. Any<br />

owner of an e.List item with assigned or inherited rights that are greater than View can annotate.<br />

Users with View rights cannot annotate, but can view existing annotations.<br />

Administrators can choose to record user actions; these records are called audit annotations. You<br />

can record user actions in the Web client, such as adding and editing data or importing files.<br />

Tracking changes through the system is useful for auditing purposes and to see who has made<br />

changes if an e.List item has multiple owners.<br />

To control their impact on the size of the application datastore, audit annotations are configured in<br />

Application Options; see "Change Application Options" (p. 74).<br />
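An audit annotation can be pictured as a simple record of who did what and when; the field names below are illustrative assumptions, not the product's schema:<br />

```python
from dataclasses import dataclass

@dataclass
class AuditAnnotation:
    """Sketch of an audit annotation: a record of a user action in the
    Web client, such as adding or editing data or importing a file."""
    user: str
    elist_item: str
    action: str       # e.g. "edit data", "import file"
    timestamp: str    # "yyyy-mm-dd hh:mm:ss"

trail = [
    AuditAnnotation("asmith", "A1 Profit Center", "edit data", "2008-03-01 09:15:00"),
    AuditAnnotation("bjones", "A1 Profit Center", "import file", "2008-03-01 10:02:11"),
]
# Who changed an e.List item that has multiple owners?
editors = {a.user for a in trail if a.elist_item == "A1 Profit Center"}
```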

Delete Commentary<br />

Administrators can delete commentary in a <strong>Contributor</strong> application using date and time, character<br />

string, and e.List item name filters. See "Deleting Commentary" (p. 290) for more information. The<br />

user can also automate the deletion of annotations by using a macro "Delete Commentary" (p. 217).<br />

Users can also delete annotations. See the <strong>Contributor</strong> Browser User <strong>Guide</strong> for more information.<br />

Delete Commentary - e.List items<br />

Before you can delete commentary, you must select all the e.List items that you want to delete<br />

it from. To do this, use the buttons at the top of the Delete Annotations, e.List Items tab to:<br />

● Select All or Clear All e.List items.<br />

● Select All Children or Clear All Children.<br />

● Select All Planners or Clear All Planners.<br />

You can also click individual items to select or deselect them.<br />


Deleting Commentary<br />

You can delete all commentary in a <strong>Contributor</strong> application using date and time, character string<br />

and e.List item name filters.<br />

After you specify the filters and the e.List items for the annotations to be deleted, and click Delete<br />

commentary, a COMMENTARY_TIDY job is run. The deletion is not seen by web clients until a reconcile<br />

is run. This enables the commentary to be deleted while the <strong>Contributor</strong> application is online. This<br />

may be run as a macro (p. 217).<br />

Annotations, including saved annotations, can be deleted by the creator until the annotation is submitted.<br />

Steps<br />

1. In the Production branch of the application, click Maintenance, Delete Commentary.<br />

2. On the Delete options tab, click the options as required:<br />

● Delete user annotations<br />

Note: There are three types of annotations, Note, Note Attachment, and Attachment. Only<br />

the Note type annotation is deleted when this option is selected. Annotations added with<br />

attachments are not deleted.<br />

● Delete audit annotations<br />

● Delete attached documents<br />

● Apply date filter. Select this option if you want to delete commentary by date. If you select<br />

this option, you must select a date from the Delete commentaries before date box. It<br />

defaults to today's date at midnight, local time.<br />

● Apply annotation content filter. Select this option if you want to delete commentary by content.<br />

For example, if you want to delete annotations containing the word banana, any annotations<br />

containing this word will be deleted, if they also conform to the other filters.<br />

3. Click the e.List items tab and click the e.List items that the annotations will be deleted from.<br />

4. Click Delete Commentary.<br />

The Tidy annotations job runs on available job servers. To monitor the progress of the job,<br />

view the Job Management window (p. 52).<br />
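The date filter in the steps above defaults to today's date at midnight, local time; that default can be computed as follows (a sketch of the described behavior, not the console's code):<br />

```python
from datetime import datetime, time

def default_delete_before():
    """Sketch: the 'Delete commentaries before' box defaults to today's
    date at midnight, local time."""
    return datetime.combine(datetime.now().date(), time.min)
```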

Configuring the Attached Documents Properties<br />


In the <strong>Contributor</strong> <strong>Administration</strong> Console, the administrator designates what type of files are<br />

allowed and also configures the size limits of an attached document. These settings are set at the<br />

System level and apply to all applications within a specific <strong>Planning</strong> environment.<br />

Note: The maximum number of attached documents is 500.<br />

For more information, see "Configure the Web Client" (p. 71).
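The system-level limits described above can be sketched as a validation step; the 500-document cap comes from the note, while the allowed file types and size limit shown here are illustrative assumptions:<br />

```python
MAX_ATTACHED_DOCUMENTS = 500  # stated maximum per application

def can_attach(filename, size_bytes, current_count,
               allowed_types=(".xls", ".doc", ".pdf"),  # illustrative
               max_size_bytes=1_000_000):               # illustrative
    """Sketch of attached-document checks: allowed file types and size
    limits are configured at the System level; the number of attached
    documents is capped at 500."""
    if current_count >= MAX_ATTACHED_DOCUMENTS:
        return False
    if not any(filename.lower().endswith(ext) for ext in allowed_types):
        return False
    return size_bytes <= max_size_bytes
```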


Publishing Attached Documents<br />

Information about attached documents can be published using the Table-Only Layout publish<br />

function. Information such as file size, file name, location, and the user who attached the file is<br />

published.<br />

For more information, see "The Table-Only Publish Layout" (p. 265).<br />

Copy Commentary<br />

Attached documents and user annotations that are linked to a plan are grouped together to form<br />

Commentary. The user can copy commentary between <strong>Contributor</strong> cubes and applications using<br />

administration, system, and local links.<br />

After you run a link that includes commentary, annotations, or attached documents, the target will<br />

have the same value and commentary as the source. This means that commentary in the target is<br />

removed if there is no commentary in the source. If you target a cell with more than one source<br />

cell, it will contain the aggregated value and the commentary from all the source cells. If you select<br />

only one type of commentary in the link, then the other type of commentary is not affected by<br />

running the link. You will not have multiple copies of commentary in target cells if you rerun the<br />

link.<br />

Note: The user can only copy Commentary using links that contain data.<br />

For more information, see "Managing Data" (p. 143).<br />
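The replace, aggregate, and rerun semantics above can be sketched like this; the cell model is an illustrative assumption:<br />

```python
def run_link(source_cells, target):
    """Sketch of copying commentary through a link: the target takes the
    aggregated value and the combined commentary of all source cells, so
    commentary is replaced (not appended) and reruns are idempotent."""
    target["value"] = sum(c["value"] for c in source_cells)
    # Commentary is replaced outright: empty sources clear the target.
    target["commentary"] = [note for c in source_cells for note in c["commentary"]]
    return target

sources = [
    {"value": 10, "commentary": ["Q1 estimate"]},
    {"value": 5, "commentary": ["includes returns"]},
]
cell = run_link(sources, {"value": 0, "commentary": ["stale note"]})
# Rerunning the link does not create duplicate commentary.
cell = run_link(sources, cell)
```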

Breakback Considerations when Moving Commentary<br />

Breakback does not occur when attaching commentary either manually or by using an administration<br />

or system link. Commentary can be attached to a calculated cell without impacting cells making<br />

up that calculation.<br />

Note: Identical documents from different sources are treated as separate documents.<br />



Chapter 17: Previewing the Production Workflow<br />

The Preview window gives you a preview of the production e.List and workflow state and allows<br />

you to view properties of the e.List items. The icons indicate the current status of the data in the<br />

production application. Clicking Refresh enables you to keep track of the status of the icons. For<br />

example, when you have put a development application into production, you can see when e.List<br />

items have been reconciled; see "Reconciliation" (p. 54). In this case, the icons will change from<br />

Not started, out of date to Not started, reconciled.<br />

See "Workflow State Definition" (p. 295) for more information.<br />

To preview the data in the production application in the Preview window, expand the Preview tree,<br />

right-click the e.List item and click Preview.<br />

When you preview an e.List item, it behaves as it does in the Web client. For example, you<br />

can right-click in the grid and click Annotate cell, Add, and an annotations window appears. You<br />

can type in the annotations window and when you close the window, you can view the annotation<br />

by moving your mouse over the red square. However, after you have closed the Preview, these<br />

changes are not saved.<br />

Note: Any action you perform in Preview has no bearing on the Production application.<br />

Previewing e.List item Properties<br />

Right-click the e.List item and click Properties. A five-tabbed window is displayed containing:<br />

● General<br />

● Owners<br />

● Editors<br />

● Reviewers<br />

● Rights<br />

Preview Properties - General<br />

The General tab contains the following information:<br />

Property<br />

e.List item type<br />

e.List item state<br />

Description<br />

Indicates whether it is a contribution or a review e.List item.<br />

The workflow state, see "Workflow State Defini-<br />

tion" (p. 295) for more information.<br />

<strong>Administration</strong> <strong>Guide</strong> 293


Chapter 17: Previewing the Production Workflow<br />

Property<br />

Date state changed<br />

User who last changed the state<br />

Number of children<br />

Number of locked children<br />

Number of saved children<br />

Preview Properties - Owners<br />

Description<br />

This gives the date and time that the workflow state<br />

changed in the following format: yyyy-mm-dd hh:mm:ss.<br />

The name of the user to have last changed the state.<br />

The number of items in the next level below this e.List item.<br />

The number of child items that are locked, indicating that<br />

data was submitted.<br />

The number of child items where work has started and been<br />

saved.<br />

This Owners tab contains the following information about the owners of the selected e.List item.<br />

Property: Owner name<br />

Description: The name of an owner of the e.List item. An owner is a user assigned to an e.List<br />

item with greater than View rights.<br />

Property: E-mail address<br />

Description: The e-mail address of the user.<br />

Property: Current Owner<br />

Description: This is checked if the owner is the current owner of the e.List item. The current<br />

owner is the last user to have opened an e.List item for editing.<br />

Preview Properties - Editors<br />

The following information is displayed on this tab.<br />

● Editor - the name of the last or current editor, the time they started editing the e.List, and<br />

whether they are working online or offline.<br />

● Annotator - the name of the last or current annotator, and the time they started creating an<br />

annotation.<br />

For information about editing while offline, see "Working Offline" (p. 89).<br />

Preview Properties - Reviewers<br />


This lists the reviewers for the e.List item and their e-mail addresses.<br />

The Data Reviewed indicator shows whether the e.List item was reviewed, and the Data Viewed<br />

indicator shows whether the data was viewed.<br />


Preview Properties - Rights<br />

This displays the following information:<br />

Property<br />

e.List item Display Name<br />

User, Group, Role<br />

Rights<br />

Inherit from<br />

Description<br />

The e.List item display name.<br />

The User, Group, or Role assigned to the e.List item (more<br />

than one user, group, or role can be assigned to an e.List<br />

item).<br />

The level of rights that a user has to the e.List item.<br />

If the rights have been directly assigned to the user, this cell<br />

will be blank. If the rights have been inherited, this indicates<br />

the name of the e.List item the rights have been inherited<br />

from.<br />

You can print this information, or save it to file.<br />

Workflow State Definition<br />

The workflow state icons indicate the state of data in the IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong><br />

application.<br />

State: Not started<br />

Contribution: The e.List item has not been edited and saved (it may have been edited but the<br />

changes not saved).<br />

Review: None of the items that make up this e.List item have been edited and saved.<br />

State: Work in progress<br />

Contribution: The e.List item was edited and saved but not submitted.<br />

Review: All items that make up this e.List item have been edited and saved. At least one item has<br />

not yet been submitted.<br />

State: Incomplete<br />

Contribution: Not applicable.<br />

Review: Some items that make up this e.List item have not been started. At least one item was<br />

started.<br />

State: Ready<br />

Contribution: Not applicable.<br />

Review: All items that make up this e.List item have been submitted and are locked. This item can<br />

be submitted for review.<br />

State: Locked<br />

Contribution: The e.List item was submitted and can no longer be edited.<br />

Review: The e.List item was submitted.<br />

Additional Workflow States<br />

In addition to these workflow states, there are additional icons that indicate variations on these<br />

states. The variations are:<br />

Variation: Has a current editor/annotator.<br />

Description: The e.List item was opened for editing/annotating. An edit session is ended by the<br />

user closing the grid, or by submitting the e.List item.<br />

Variation: Is out of date.<br />

Description: This indicates that the e.List item needs reconciling. This happens when Go to<br />

Production was run on the application and the e.List item has not been reconciled. If client side<br />

reconciliation is prevented, the user is unable to view the data until reconciliation has occurred.<br />

Variation: Has a current editor/annotator and is out of date.<br />

Description: There is a current editor or annotator, and the data is out of date.<br />

These additional states only appear to the user in the front window of the <strong>Contributor</strong> application,<br />

not in the grid.<br />

Workflow State Explained<br />


This diagram demonstrates the different workflow states that exist in <strong>Contributor</strong>.


Each icon represents the state of the e.List item. The lowest level e.List items (for example, labeled<br />

A1 Profit Center) are contribution e.List items, that is, items that you enter data into. The higher<br />

level e.List items are review e.List items, and the state of a review e.List item is affected by the states<br />

of the contribution e.List items that feed into it.<br />

States for Contribution e.List Items<br />

Before data is entered and saved in an e.List item, its state is Not started. After you save an e.List<br />

item, the state becomes Work in progress and remains accessible for more editing. When you submit<br />

an item, the e.List item is Locked and no more changes can be made. The Locked state indicates<br />

that the e.List item is ready for review. A reviewer can review the e.List item in any state, but can<br />

only reject a Locked e.List item. When an e.List item is rejected, it returns to a state of Work in<br />

progress.<br />

States for Review e.List Items<br />

A review e.List item where none of the items that feed into it have been saved has a state of Not<br />

started (see A Region). When at least one of the items that make up a review e.List item is not saved,<br />

and at least one other item is saved, its state is Incomplete (see Total Company, Division X and B<br />

Region). When all the items that make up a review e.List item are saved and at least one is not<br />

submitted, its state is Work in progress (see Division Y and C Region).<br />

After all the e.List items that make up a review e.List item are locked, the state of the review e.List<br />

item is Ready (see D Region) and if acceptable, can be submitted to the next reviewer. After it is<br />

submitted, it becomes Locked (E Region).<br />
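The rules in the two sections above can be sketched as a function that derives a review item's state from its contribution items; the state names follow the guide, while the function itself is illustrative:<br />

```python
def review_state(child_states):
    """Sketch: derive a review e.List item's workflow state from the
    states of the contribution items that feed into it."""
    started = [s for s in child_states if s != "Not started"]
    if not started:
        return "Not started"
    if len(started) < len(child_states):
        return "Incomplete"       # some items not started, at least one started
    if all(s == "Locked" for s in child_states):
        return "Ready"            # every item submitted and locked
    return "Work in progress"     # all saved, at least one not submitted
```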


Other Icons<br />


Each of the Workflow state icons can have additional indicators that tell you whether the e.List<br />

item is being edited, is out of date, or both. They are a grid, a box, or both a grid and a box.<br />

The following show examples of these indicators, but note that they can apply to all workflow<br />

states:<br />

● Has a current editor/annotator. The e.List item was opened for editing/annotating. An edit<br />

session is ended by the user closing the grid, or by submitting the e.List item.<br />

● Is out of date. This indicates that the e.List item needs updating.<br />

● Has a current editor/annotator and the data is out of date.


Chapter 18: Using <strong>Contributor</strong> With Other IBM<br />

Cognos Products<br />

Client and admin extensions help <strong>Contributor</strong> to work with other IBM Cognos products (p. 300).<br />

You can analyze and report on published <strong>Contributor</strong> data in IBM Cognos 8 Business Intelligence<br />

using the Generate Framework Manager Model admin extension (p. 306). Additionally, the <strong>Planning</strong><br />

Data Service provides access to unpublished <strong>Contributor</strong> data for IBM Cognos 8 Business Intelligence<br />

users.<br />

You can use Excel with <strong>Contributor</strong>, benefiting from the formatting capabilities of Excel (p. 311).<br />

You can take actuals from an Enterprise Resource <strong>Planning</strong> (ERP) system and combine them with<br />

planning information to perform comparative analysis using IBM Cognos Performance Applications<br />

(p. 312).<br />

You can also manage <strong>Contributor</strong> master dimensions with IBM Cognos 8 Business Viewpoint Client<br />

(p. 314).<br />

The following diagram illustrates some of the integration points between <strong>Planning</strong> and other IBM<br />

Cognos products.<br />

[Diagram: integration points between <strong>Planning</strong> and other IBM Cognos products. Labels include<br />

Published Data, Real Time Data, IBM Cognos Connection, Performance Applications, IBM Cognos 8<br />

Business Intelligence (<strong>Planning</strong>, OLAP, and relational), Transformer 7.4, IBM Cognos 8 <strong>Planning</strong> -<br />

<strong>Contributor</strong>, IBM Cognos 8 Controller, IBM Cognos 8 Metrics Manager, IBM Cognos 8 <strong>Planning</strong> -<br />

Analyst, and IBM Cognos Finance.]<br />


Client and Admin Extensions<br />

Extensions are tools that provide additional functionality to <strong>Contributor</strong> as well as provide<br />

interoperability with other IBM Cognos products. There are two types of extensions:<br />

● "Classic Client Extensions" (p. 300)<br />

● "Admin Extensions" (p. 301)<br />

All extensions are installed as part of the main <strong>Planning</strong> installation. For more information, see the<br />

IBM Cognos 8 <strong>Planning</strong> Installation <strong>Guide</strong>.<br />

Note: Ensure that both the <strong>Administration</strong> Console computer and the client computers meet all of<br />

the software and hardware requirements before configuring and running <strong>Contributor</strong> client and<br />

administration extensions.<br />

For a current list of the software environments supported by IBM Cognos products, see the IBM<br />

Cognos Resource Center Web site (http://www.ibm.com/software/data/support/cognos_crc.html).<br />

Classic Client Extensions<br />

Classic web client users can use client extensions to take advantage of the functionality of Excel<br />

(p. 311). The <strong>Contributor</strong> Web Client includes the Get Data and Export for Excel functionality.<br />

Client extensions are activated through the menu bar in the <strong>Contributor</strong> grid.<br />

You can control when an extension is available for Classic <strong>Contributor</strong> Web Client users by enabling<br />

and disabling it in <strong>Contributor</strong> <strong>Administration</strong> Console. Get Data and Excel functionality is always<br />

available to <strong>Contributor</strong> web client users.<br />

Tip: For Classic <strong>Contributor</strong> Users, on the Configure Extensions tab, right-click the extension and<br />

click Enable or Disable.<br />

Organize Classic Client Extensions<br />


Use extension groups to organize client extensions. Extension groups appear as items under the<br />

Tools menu on the Classic <strong>Contributor</strong> grid. A list of the member extensions appears when you click<br />

the group name. For example, you can create an extension group named Export to organize the<br />

export extensions.<br />

Steps<br />

1. In the <strong>Contributor</strong> <strong>Administration</strong> Console application tree, click Production, Extensions, Client<br />

Extensions, and then click the Extension Groups tab.<br />

2. Click Add, and type a name for the new group.<br />

Extension group names must be 15 characters or less and should be meaningful to the Web<br />

client user.<br />

3. Click OK.<br />

The name of the new extension group appears in the Extension Group list.<br />

Tips:<br />

● You can rename an extension group by clicking Edit in the Extension Group dialog box.


● You can reorder extension groups by using the arrow buttons on the Extension Group tab.<br />

Configure Classic Client Extensions<br />

You must configure a client extension before users can use it. An extension can run in the Classic<br />

web client in a custom or manual mode. In manual mode, extensions run when a user selects the<br />

extension from the Extension Group list under the Tools menu in the Classic web client. In custom<br />

mode, extensions run automatically in the Classic web client and do not need to be assigned to an<br />

extension group.<br />

Tip: You can reset a client extension back to its original settings by clicking the Reset button. This<br />

resets the configuration back to its original, unconfigured state and all settings and data are lost.<br />

Steps<br />

1. In the <strong>Contributor</strong> <strong>Administration</strong> Console application tree, click Production, Extensions, Client<br />

Extensions, and then click the Configure Extensions tab.<br />

2. Click the extension you want, and click Configure.<br />

The Extension Properties dialog box appears.<br />

3. In the Display Name box, type a name for the extension or leave the default name.<br />

4. If the Activation Mode box shows that Manual activation mode is selected, in the Extension<br />

Group box, click the appropriate Extension Group.<br />

5. In the Extension Properties-Users dialog box, click All Users or Selected Users.<br />

6. If you clicked Selected Users, select the check box next to each user who should have access.<br />

7. If you are configuring the Export for Excel extension, in the Location on client for saved<br />

selections box, type the full path of the saved selections folder.<br />

Note: If you are using Windows Vista, you will need administrative privileges to save a saved<br />

selection to the root directory (c:\). If you do not have administrative privileges, then specify a<br />

different path, for example c:\your folder\saved_selection_name.sav.<br />

8. Choose whether to enable this extension now by clicking Yes or No.<br />

9. Click Finish.<br />

Admin Extensions<br />

Administrators use admin extensions to generate Framework Manager models, Transformer<br />

Models, and IBM Cognos PowerCubes from <strong>Contributor</strong> applications. This enables you to report<br />

on <strong>Contributor</strong> data in IBM Cognos 8 studios, and view data in PowerPlay Series 7.<br />

Run an Admin Extension<br />

You run an Admin extension when you want it to perform its task. Before you can run an Admin<br />

extension, you must first configure it. For information about configuring the individual extensions,<br />

see the following topics:<br />


● "The Generate Framework Manager Model Admin Extension" (p. 306)<br />

● "Generate Transformer Model" (p. 309)<br />

Step<br />

● In the <strong>Contributor</strong> <strong>Administration</strong> Console application tree, click Production, Extensions,<br />

Admin Extensions, select the extension, and click Run.<br />

You may be prompted to perform more tasks, depending on the extension.<br />

Integrating with IBM Cognos Business Intelligence Products<br />

IBM Cognos Business Intelligence (BI) users can access unpublished (real-time) and published<br />

<strong>Contributor</strong> data for analysis and reporting.<br />

For IBM Cognos 8 Business Intelligence users, the <strong>Planning</strong> Data Service provides access to<br />

unpublished <strong>Contributor</strong> data, and the Generate Framework Manager Model extension provides<br />

access to (table-only layout) published data. Note that you get better performance when reporting<br />

off published data than off live data. This is because the <strong>Planning</strong> Data Service has the added<br />

overhead of interpreting the model.<br />

You can also import data from IBM Cognos 8 data sources into <strong>Contributor</strong> applications and<br />

Analyst models. For more information, see "Importing Data from IBM Cognos 8 Data<br />

Sources" (p. 164).<br />

Using IBM Cognos 8 BI with <strong>Contributor</strong> Unpublished (Real-Time) Data<br />


You can use IBM Cognos 8 Business Intelligence to report on and analyze unpublished (real-time)<br />

<strong>Contributor</strong> data.<br />

To create a <strong>Planning</strong> Package, you have two options.<br />

● Select the Create <strong>Planning</strong> Package option in the Go to Production wizard<br />

● Create a package directly in Framework Manager<br />

To determine which method to choose, consider the following information.<br />

Create the <strong>Planning</strong> Package in the Go to Production Wizard<br />

The <strong>Planning</strong> Package that is published in the Go to Production wizard contains all cubes in the<br />

application. Thus, when a user opens this package in Query Studio, Analysis Studio, Report Studio,<br />

or Event Studio, they are presented with metadata for all of the cubes in the application. The user<br />

is free to choose metadata from multiple cubes for use in their reports. However, unless care is<br />

taken, users may inadvertently build queries that attempt to access values from more than one cube,<br />

which results in no data returned to the report.<br />

For more information, see "<strong>Planning</strong> Packages" (p. 244).<br />

Create the <strong>Planning</strong> Package Directly in Framework Manager<br />

If you create the package in Framework Manager, you can determine how many cubes to expose<br />

in a given package. By default, you get one cube in each package, which prevents users from<br />

building queries that access more than one cube. However, this may result in large numbers of<br />

packages in IBM Cognos Connection, which could be difficult to manage.<br />

Framework Manager Project<br />

Before you can create a package using Framework Manager, you must create a Framework Manager<br />

project. The Framework Manager project contains objects that you organize for Business Intelligence<br />

authors according to the business model and business rules of your organization. A package is a<br />

subset of the query subjects and other objects defined in the project.<br />

Framework Manager can use the metadata and data from external data sources to build a project.<br />

To import metadata, you must indicate which sources you want and where they are located. You<br />

then publish the package to the IBM Cognos 8 server so that the authors can use the metadata.<br />

You can create several packages from the same project, with each package meeting different<br />

reporting requirements. Framework Manager models accessing <strong>Contributor</strong> data are light-weight<br />

models only. A light-weight model, as well as the packages derived from that model, contains only<br />

the connection information to the cubes in the <strong>Planning</strong> application. The D-List and item metadata<br />

are extracted from the <strong>Planning</strong> application at runtime.<br />

Note: Cross tab reports in any of the Business Intelligence studios do not support text or date-based<br />

measures, including annotations, if configured for display. If a text or date-based measure is selected,<br />

it appears as "--" in the report.<br />

The Measures Dimension<br />

IBM Cognos 8 requires that one of the <strong>Contributor</strong> D-Lists is used as the measures dimension. A<br />

measures dimension is typically one that contains quantitative data items, such as revenue or<br />

headcount. The default measures dimension is determined based on the following:<br />

● Candidates are all dimensions with defined formats, excluding the e.List.<br />

● If no dimension (other than the e.List) has defined formats, the first dimension is used.<br />

● If only one dimension has defined formats, that dimension is used.<br />

● If more than one dimension has defined formats, the dimension with the lowest priority calculations is used.<br />
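The selection rules above can be expressed as a short routine. This is an illustrative sketch only — the dimension attributes and the numeric calculation-priority field are assumptions for the example, not the actual <strong>Planning</strong> object model:<br />

```python
def default_measures_dimension(dimensions):
    """Pick the default measures dimension following the rules above.

    `dimensions` is an ordered list of dicts with hypothetical keys:
    'name', 'is_elist', 'has_formats', and 'calc_priority'
    (lower number = lower calculation priority).
    """
    # The e.List is never a candidate for the measures dimension.
    candidates = [d for d in dimensions if not d["is_elist"]]
    formatted = [d for d in candidates if d["has_formats"]]
    if not formatted:
        # No dimension has defined formats: the first dimension is used.
        return candidates[0]
    if len(formatted) == 1:
        # Exactly one dimension has defined formats: it is used.
        return formatted[0]
    # Several have formats: the lowest-priority calculations win.
    return min(formatted, key=lambda d: d["calc_priority"])
```
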

The default measures dimension can be overridden when publishing, by selecting the Dimension in<br />

the Cubes screen. If you republish the data and change the dimension at a later date, be aware that<br />

this may break some saved reports.<br />

Create a Framework Manager Project and Publish a Package<br />

The Framework Manager project contains objects that you organize for Business Intelligence authors<br />

according to the business model and business rules of your organization.<br />

Steps<br />

Chapter 18: Using <strong>Contributor</strong> With Other IBM Cognos Products<br />

1. From the Windows Start menu, click Programs, IBM Cognos 8, Framework Manager.<br />

2. In the Framework Manager Welcome page, click Create a new project.<br />

3. In the New Project page, specify a name and location for the project.<br />




4. Optionally, you can add the new project to a source control repository by doing the following:<br />

● Click Repository, and then select the Add to repository check box.<br />

● In the Connection box, click the repository connection.<br />

If you do not have a repository connection defined, you are prompted to create one. For<br />

more information, see the Framework Manager help.<br />

● In the Location in Repository box, browse to a location to add the project and then click<br />

Select.<br />

5. Log on if you are prompted to do so.<br />

6. In the Select Language page, click the design language for the project.<br />

You cannot change the language after you click OK, but you can add other languages.<br />

7. In the metadata source page, select Data Sources.<br />

8. If the data source connection you want is not listed, you must create it (p. 305).<br />

If the <strong>Planning</strong> Data Service is configured, a data source named IBM Cognos <strong>Planning</strong> -<br />

<strong>Contributor</strong> is available. This gives you access to cube (OLAP) data only. If you want to access<br />

table data, you must create a data source that points to these tables.<br />

9. Select the cube that you want to import.<br />

The name of the project is shown; it defaults to the cube name, but you can change it.<br />

10. Add security to the package if required. See the IBM Cognos 8 Framework Manager User <strong>Guide</strong><br />

for more information.<br />

11. Click Next and then Finish.<br />

Note: You save the project file (.cpf) and all related XML files in a single folder. When you<br />

save a project with a different name or format, ensure that you save the project in a separate<br />

folder.<br />

12. When prompted to open the Publish Wizard, click Yes. This enables you to publish the new<br />

package to IBM Cognos Connection.<br />

13. In the Publish Wizard, choose where to publish the package:<br />

● To publish the package to the IBM Cognos 8 Server for report authors and business authors<br />

to use, click IBM Cognos 8 Content Store.<br />

● To publish the package to a network location, click Location on the network.<br />

14. To enable model versioning when publishing to the IBM Cognos 8 Content Store, select the<br />

Enable model versioning check box.<br />

15. In the Number of model versions to retain box, select the number of model versions of the<br />

package to retain.<br />

Tip: To delete all but the most recently published version on the server, select the Delete all<br />

previous model versions check box.


16. If you want to externalize query subjects, select the Generate the files for externalized query<br />

subjects check box.<br />

17. By default, the package is verified for errors before it is published. If you do not want to verify<br />

your model prior to publishing, clear the Verify the package before publishing check box.<br />

18. Click Publish.<br />

If you chose to externalize query subjects, Framework Manager lists the files that were created.<br />

19. Click Finish.<br />

Note: You can run the Framework Manager Metadata wizard repeatedly to import multiple<br />

cubes into the same Framework Manager project. For more information about creating<br />

Framework Manager projects, see the Framework Manager User <strong>Guide</strong>.<br />

Create a Data Source Connection<br />

You must create a data source connection if you are creating a <strong>Planning</strong> Package in Framework<br />

Manager.<br />

When you create an IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong> data source, you must provide the<br />

information required to connect to the datastore. This information is provided in the form of a<br />

connection string.<br />

Data sources are stored in the Cognos namespace and must have unique names. For example, you<br />

cannot use the same name for a data source and a group.<br />

Before creating data sources, you need write permissions to the folder where you want to save the<br />

data source and to the IBM Cognos namespace. You must also have execute permissions for the<br />

<strong>Administration</strong> secured function.<br />

Steps<br />

1. In Framework Manager, click the Run Metadata Wizard command from the Action menu.<br />

2. Click Data Sources and Next.<br />

3. Click New and Next.<br />

4. In the name and description page, type a unique name for the data source and, if you want, a<br />

description and screen tip. Select the folder where you want to save it.<br />

5. In the connection page, under Type, click IBM Cognos <strong>Planning</strong> - <strong>Contributor</strong>.<br />

The connection string page for the selected database appears.<br />

6. Under External namespace, select the namespace set up previously in IBM Cognos Configuration.<br />

Tip: To test whether parameters are correct, click Test the connection. If prompted, type a user<br />

ID and password or select a signon, and click OK.<br />

7. Click Finish.<br />

The data source appears in the Directory tool in the portal or in the list of data sources in the<br />

Metadata Wizard in Framework Manager.<br />




Tip: To test a data source connection, right-click the data source in the Data Sources folder and<br />

click Test Data Source.<br />

The Generate Framework Manager Model Admin Extension<br />

Generate Framework Manager Model creates a set of Framework Manager models from IBM<br />

Cognos <strong>Planning</strong> data published in a table-only layout, including a base model and a user model.<br />

It also publishes a package to IBM Cognos Connection.<br />

Base Model<br />

The base model contains the definitions of objects required to access IBM Cognos <strong>Planning</strong> data<br />

published in a table-only layout. The objects include table definitions (query subjects), dimension<br />

information, security filters, and model query subjects.<br />

User Model<br />

The user model provides a buffer to contain the modifications made by the Framework Manager<br />

modeler. When modifications are made to the <strong>Contributor</strong> application, or to the Analyst model,<br />

the base model can be updated using Generate Framework Manager Model. Then, the user model<br />

can be synchronized using the synchronize option in Framework Manager.<br />

The synchronization process makes all the modifications to the base model appear in the user model:<br />

the user model is synchronized with the updated base model, and any changes the modeler made to<br />

the user model are then reapplied to the synchronized result.<br />

The package published to IBM Cognos Connection is published from the User Model.<br />
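Conceptually, this synchronization is a three-way merge: the user model is rebuilt from the regenerated base model, and the modeler's own changes are replayed on top. A minimal sketch, treating models as flat name-to-definition dictionaries (purely illustrative — Framework Manager actually replays a log of modeling actions, not dictionaries):<br />

```python
def synchronize(old_base, new_base, user_model):
    """Rebuild the user model against a regenerated base model.

    All three models are flat {object_name: definition} dicts. The
    modeler's changes are whatever differs between old_base and
    user_model; those changes are reapplied on top of new_base.
    """
    # Edits and additions the modeler made on top of the old base model.
    user_changes = {k: v for k, v in user_model.items()
                    if old_base.get(k) != v}
    # Objects the modeler deleted from the user model.
    user_deletes = set(old_base) - set(user_model)

    merged = dict(new_base)      # start from the refreshed base model
    merged.update(user_changes)  # replay the modeler's edits/additions
    for k in user_deletes:
        merged.pop(k, None)      # replay the modeler's deletions
    return merged
```
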

Configuring Your Environment<br />


Before you can use Generate Framework Manager Model, you must configure your environment.<br />

Do the following:<br />

Note: It is recommended that you install the <strong>Administration</strong> components (Analyst and <strong>Contributor</strong><br />

<strong>Administration</strong> Console) on the same machine as the <strong>Planning</strong> Server components.<br />

❑ Ensure that you can access IBM Cognos Connection<br />

For example, in the address bar of your Web browser, type http://computer_name/cognos8/.<br />

❑ Ensure that you can publish the IBM Cognos <strong>Planning</strong> data in a table-only layout.<br />

❑ Configure the Publish datastore to use the logon and password of the datastore server, not<br />

Trusted Connection.<br />

Understanding Generate Framework Manager Model<br />

When you generate a Framework Manager model, the following occurs:<br />

● An IBM Cognos 8 data source is created for the Table-only publish container.<br />

● The Framework Manager script player creates the model and package objects from the<br />

Table-only publish metadata, and then publishes the packages.


Objects in the Generated Framework Manager Model<br />

The models generated by Generate Framework Manager Model contain the following objects.<br />

Folders<br />

The generated model contains a series of folders, each containing objects of the same type. These folders<br />

are created in two top-level folders: Physical View and Business View. The Physical View folder<br />

contains all the database query subjects, and the Business View folder contains all the dimension<br />

and star schema objects.<br />

Database Query Subjects<br />

Database query subjects are created for all the tables needed to provide access to IBM Cognos<br />

<strong>Planning</strong> data. The tables included in the model depend on the query subjects selected, and may<br />

include<br />

● cube export tables<br />

● dimension item tables<br />

● dimension derived hierarchy tables<br />

● dimension complete hierarchy tables<br />

● annotation tables<br />

Joins<br />

Joins are created between related tables, such as the cube export data tables and derived hierarchy<br />

tables.<br />

Column Usage<br />

The usage attribute of the query items contained in the database query subjects is set to the correct<br />

value: fact, identifier, or attribute.<br />

Security Filters<br />

If the models are generated from a <strong>Contributor</strong> application, security filters are created for each cube<br />

export data query subject. The filters grant users access to the same e.List items as in the <strong>Contributor</strong><br />

application. A security filter is created for every user on every cube export data query subject.<br />

If the models are generated from Analyst, no security filters are created.<br />
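The effect of each generated filter is to restrict a cube export data query subject to the e.List items a user can access in the <strong>Contributor</strong> application. The following sketch shows how such per-user filter expressions could be assembled; the expression syntax and the column name are illustrative assumptions, not the extension's exact output:<br />

```python
def build_security_filters(users, cube_query_subjects):
    """Create one filter per user per cube export data query subject.

    `users` maps a user name to the set of e.List item codes that user
    may see. Returns {(user, query_subject): filter_expression}.
    """
    filters = {}
    for qs in cube_query_subjects:
        for user, elist_items in users.items():
            # Hypothetical column name; sorted for a stable expression.
            items = ", ".join(f"'{code}'" for code in sorted(elist_items))
            filters[(user, qs)] = f"[{qs}].[elist_item_code] in ({items})"
    return filters
```
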

Regular Dimensions<br />

For each derived hierarchy and complete hierarchy query subject, a regular dimension object is<br />

created and saved to the Derived Dimensions and Complete Dimensions folders respectively. These<br />

folders are located in the Business View folder.<br />

Measure Dimensions<br />


For each cube export table, a measure dimension object is created. It is stored in a folder that has<br />

the same name as the cube. These folders are located in the Business View folder.<br />




Star Schema Groupings<br />

For each cube in the model, one or more star schema groupings are created. If the derived hierarchy lists<br />

are selected, a star schema grouping is created using the derived dimensions. If the complete hierarchy<br />

lists are selected, a star schema grouping is created using the Complete Dimensions folders.<br />

Data Source<br />

Data source refers to the data source created in the IBM Cognos Connection Portal.<br />

Package<br />

A package contains all the objects in the Framework Manager model. The administrator of the<br />

package is the user generating the model.<br />

In <strong>Contributor</strong>, the users who have access to the package are the users of the <strong>Contributor</strong> application.<br />

In Analyst, the only user to have access to the package is the user generating the model.<br />

Objects Created in IBM Cognos Connection<br />

When generating a Framework Manager model using Generate Framework Manager Model, a data<br />

source is added to IBM Cognos Connection. The associated connection string and signon are also<br />

created if applicable.<br />

Run the Generate Framework Manager Model Admin Extension<br />


Run the Generate Framework Manager Model Admin Extension to create a set of Framework<br />

Manager models from IBM Cognos 8 <strong>Planning</strong> data.<br />

This extension cannot be automated.<br />

In the appropriate <strong>Contributor</strong> application, publish data using the Table-only layout. You must<br />

use an untrusted connection.<br />

Note: This extension uses the last published container.<br />

Steps<br />

1. Click Extensions, Admin Extensions, and double-click Generate Framework Manager Model.<br />

2. Select Create a new Framework Manager Model and click Next.<br />

3. Specify the following Framework Manager Model settings:<br />

● Framework Manager Model Location<br />

Where the Base model and User model are created on the hard disk. Only one model can<br />

exist in any location.<br />

● Package name<br />

The name of the package to be published to the portal. The Package name must not already<br />

exist on the portal.<br />

● Package location<br />

Where the package is stored in IBM Cognos Connection.


● Package screentip<br />

● Package description<br />

4. Select the cubes to be included in the model.<br />

5. Specify the type of data source query subjects to include in the model.<br />

When the package is published, it can be accessed from IBM Cognos 8 studios.<br />

The selections made in this extension are saved for the next time the extension is run and are used<br />

when updating a model.<br />

For troubleshooting information, see "Troubleshooting the Generate Framework Manager Model<br />

Extension" (p. 361).<br />

Update a Framework Manager Model<br />

You can update a Framework Manager model to include changes to the <strong>Contributor</strong> application.<br />

The new base model is re-imported and any changes you made to the user model are reapplied.<br />

Steps<br />

1. In the appropriate <strong>Contributor</strong> application, click Extensions, Admin Extensions, and double-<br />

click Generate Framework Manager Model.<br />

2. In Create or update model, select Update an existing Framework Manager Model.<br />

3. Enter the Project Location where the model you want to update is stored.<br />

4. Complete the steps in the wizard.<br />

5. Open Framework Manager and open the User Model.<br />

6. From the Project menu, click Synchronize and then click Run the script from the starting point.<br />

Generate Transformer Model<br />

Use the Generate Transformer Model to generate an IBM Cognos Transformer Model from a table-<br />

only database layout and create IBM Cognos PowerCubes. You can view the PowerCube in<br />

PowerPlay Series 7, or publish the PowerCube to a package in IBM Cognos Connection and view<br />

its content using any of the IBM Cognos 8 studios.<br />

Because the PowerCube is based on published data, the Generate Transformer Model extension<br />

generates a single Transformer model for each <strong>Contributor</strong> model. The extension automatically<br />

extracts the necessary information about your <strong>Contributor</strong> model from the publish tables and the<br />

application model, and then creates the equivalent model in Transformer. After the Transformer<br />

model is created, you can modify it using the Transformer interface and optionally, publish the<br />

cubes to an IBM Cognos Portal. Generate Transformer Model uses the last publish data source.<br />

With the Generate Transformer Model, you can:<br />

● generate a Transformer model<br />




● generate a Transformer model and a PowerCube. Transformer must be installed to generate a<br />

PowerCube<br />

● create a PowerCube from an existing Transformer model<br />

Before you can use the Generate Transformer Model Wizard, you must configure your environment.<br />

Do the following:<br />

● If you create PowerCube(s), ensure that you can access IBM Cognos Connection.<br />

For example, in the address bar of your Web browser, type http://computer_name/cognos8/.<br />

● Publish your IBM Cognos 8 <strong>Planning</strong> data in a table-only layout.<br />

● Before you can create PowerCube(s), you must first have Transformer installed and configured<br />

on the computer where the <strong>Planning</strong> Server components are installed.<br />

Security Considerations<br />

The Transformer model and PowerCubes generated can only be secured against a Series 7 Namespace.<br />

The name of the namespace in IBM Cognos Configuration must match the name of the Series 7<br />

namespace.<br />

We recommend that the user class for the administrator creating the Transformer model have the<br />

property: Members can view all users and/or user classes in the User Class permissions tab. This<br />

property is set in the administration console of Access Manager Series 7.<br />

Automation<br />

This extension can be automated. It must first be configured. For more information, see "Execute<br />

an Admin Extension " (p. 218).<br />

Steps<br />

1. In the <strong>Contributor</strong> <strong>Administration</strong> Console application tree, click Production, Extensions,<br />

Admin Extensions and double-click the Generate Transformer Model extension.<br />

2. Choose whether to generate a Transformer model, a Transformer model and a PowerCube, or just a PowerCube.<br />

3. Specify the location and file name for the Transformer model and the location for the<br />

PowerCube.<br />

This location can contain only one model. The paths must be located on the <strong>Planning</strong> server,<br />

and can be UNC paths.<br />

4. You can choose to include security information. To do this, you must specify a Series 7<br />

Namespace.<br />

5. Choose the cubes to add to the Transformer model.<br />

6. If you selected Create PowerCube, you can choose to create a <strong>Planning</strong> Package, enabling you<br />

to view its content using any of the IBM Cognos 8 studios.<br />

7. Click Finish.


Excel and <strong>Contributor</strong><br />

In addition to accessing <strong>Contributor</strong> through the Web, users can access <strong>Contributor</strong> using<br />

<strong>Contributor</strong> for Excel. This enables users to apply Excel formatting and export data from <strong>Contributor</strong> to<br />

an Excel file.<br />

Design Considerations When Using <strong>Contributor</strong> for Excel<br />

Because of the formatting and other capabilities of <strong>Contributor</strong> for Excel, cubes with large two-<br />

dimensional window footprints tend to reduce performance. The two-dimensional window footprint<br />

is not related to e.List model size, which is the total number of cells in an application (per e.List<br />

slice). A two-dimensional window footprint is the number of rows in a cube multiplied by the<br />

number of columns that can currently be viewed on a worksheet.<br />
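The footprint, then, is just the product of these two numbers; a trivial helper makes the sizing arithmetic explicit (the figures in the example are arbitrary, not product limits):<br />

```python
def window_footprint(rows, visible_columns):
    """Two-dimensional window footprint: the number of rows in a cube
    multiplied by the number of columns currently visible on a worksheet."""
    return rows * visible_columns

# Example: a 500-row cube with 24 columns visible at once.
print(window_footprint(500, 24))  # 12000 cells
```
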

Performance is affected by the size and complexity of <strong>Contributor</strong> e.List models. Larger and more<br />

complicated models take longer to download than smaller models. <strong>Contributor</strong> for Excel does not<br />

impose any new limits on size and complexity.<br />

The performance of <strong>Contributor</strong> for Excel is also affected by cubes containing large numbers of<br />

<strong>Contributor</strong> cells visible on worksheets at one time. The following actions are affected:<br />

● breakback<br />

● entering a value<br />

● changing multi-dimensional pages<br />

● saving the model<br />

As a result, you may want to use the most relevant data and not all possible data. There are several<br />

ways to limit the two-dimensional window footprints of cubes. You can design compact,<br />

multidimensional cubes. If the model requires a long D-List, consider using access tables to send only the<br />

items needed to different e.List items. Finally, consider using cut-down models as another way of<br />

restricting portions of long D-Lists to some e.List items.<br />

Percentages and Consistency When Using <strong>Contributor</strong> for Excel<br />

Percentages are usually numbers between 0.00 and 1.00. <strong>Contributor</strong> permits a cell with a value<br />

not between 0.00 and 1.00 to appear as a percentage by appending the percent (%) character.<br />

Model calculations then divide such a number by 100 to convert it to a percent.<br />

<strong>Contributor</strong> for Excel initially matches these formatting conventions, including the trailing %<br />

character as custom Excel formatting. However, if users reformat such cells, the true underlying<br />

value can be revealed and may be confusing. For example, reformatting a 5% increase as General<br />

shows the underlying value as 5.00 or 0.05.<br />

Excel accepts several forms of input in a cell with a % format. It converts user input of 8.00, 0.08,<br />

and 8% to a value of .08 and presents it as 8%. To eliminate possible confusion in your <strong>Contributor</strong><br />

model, build models in which values are used as consistently as possible in calculations, display,<br />

and input.<br />
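The input handling described above can be made concrete. This sketch normalizes the three input forms to the same stored fraction, using a divide-values-above-1.00-by-100 rule that mirrors the model calculation mentioned earlier; it is an illustration, not <strong>Contributor</strong>'s or Excel's actual parsing code:<br />

```python
def normalize_percent(raw):
    """Normalize user input for a %-formatted cell to its stored value.

    Accepts the three forms described above:
      '8%'  -> 0.08  (strip the % sign, divide by 100)
      8.00  -> 0.08  (values above 1.00 treated as whole percentages)
      0.08  -> 0.08  (already a fraction; kept as-is)
    """
    if isinstance(raw, str) and raw.strip().endswith("%"):
        return float(raw.strip()[:-1]) / 100
    value = float(raw)
    # Hypothetical rule mirroring the divide-by-100 model calculation.
    return value / 100 if value > 1.00 else value
```
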




Specifying the Level of Granularity When Using <strong>Contributor</strong> for Excel<br />

Some cubes exist so that users can retain more granular data than is actually required for the<br />

centralized planning process.<br />

To improve performance, you may want to remove these cubes and do one of the following:<br />

● Build Excel-based templates that replace such cubes with Excel worksheets linked to <strong>Contributor</strong><br />

cells.<br />

● Permit users to decide their own level of granularity and build their own incoming formulas.<br />

Print to Excel for Classic <strong>Contributor</strong> Web Client<br />

Classic <strong>Contributor</strong> Web Client users can print data using the print formatting options available<br />

from Excel. Using the Print to Excel functionality is the default standard for the Classic Web Client.<br />

The Activation Mode is set to custom. If the Classic Web Client computer does not have Excel installed, the<br />

standard print capability is provided.<br />

To configure the Print to Excel extension, see "Configure Classic Client Extensions" (p. 301).<br />

Export for Excel for Classic <strong>Contributor</strong> Web Client<br />

Classic <strong>Contributor</strong> Web Client users can use Export for Excel to export data from <strong>Contributor</strong> to<br />

an Excel file. From there, they can use the Excel formatting and graphing features. Classic Web Client<br />

users must have the Export for Excel extension enabled.<br />

Web client users can specify a selection of their data to be used for a future export. Saved selections<br />

define specific <strong>Contributor</strong> data sets to be exported. The <strong>Contributor</strong> administrator must designate<br />

the folder location for saved selections for Classic <strong>Contributor</strong> Web Client users.<br />

Note: If you are using Windows Vista, you will need administrative privileges to save a saved<br />

selection to the root directory (c:\). If you do not have administrative privileges, then specify a<br />

different path, for example c:\your folder\saved_selection_name.sav.<br />

We recommend that the <strong>Contributor</strong> administrator consult with their network administrator to<br />

determine the best location for the saved selections folder.<br />

If the designated folder location does not already exist on the user's computer or on the network,<br />

it is created the first time the user creates a selection. If the designated folder is on a network, advise<br />

users to give their selections unique names so they do not overwrite other saved selections with the<br />

same name. Users are warned prior to overwriting a selection. If you choose not to designate a<br />

folder, you can export data but cannot save the selection information for a future export.<br />

To configure the Export for Excel extension, see "Configure Classic Client Extensions" (p. 301).<br />

Financial <strong>Planning</strong> with IBM Cognos Performance Applications<br />

and <strong>Planning</strong><br />


You can use IBM Cognos <strong>Planning</strong> and IBM Cognos Performance Applications together to


● extract actuals from the data warehouse of an Enterprise Resource <strong>Planning</strong> (ERP) system and<br />

bring them into IBM Cognos <strong>Planning</strong>, as structural data for the Analyst model and as the<br />

initial figures from the previous planning cycle for <strong>Contributor</strong><br />

● return completed planning data to the data warehouse using an ETL tool such as Data Manager<br />

for comparative analysis<br />

● monitor live or published planning data during the planning cycle against current operational<br />

data in the Performance Applications warehouse<br />

The data warehouse extracts, and changes that occur during the planning cycle, are managed using<br />

the Import from IQD wizard. Monitoring is done directly against <strong>Contributor</strong> data using the<br />

appropriate extensions.<br />

Financial <strong>Planning</strong> involves the following process:<br />

❑ Preparing IBM Cognos Performance Applications Data for <strong>Planning</strong><br />

In the Performance Application, identify the key performance indicators (KPIs) to plan by,<br />

monitor, and report on. Because planning is often performed at a different level from actuals,<br />

you may need to add to the dimensions from the data warehouse. IBM Cognos consultants can<br />

help you in this identification and analysis.<br />

The Import from IQD wizard expects each dimension to have both an ID field and a description<br />

field, each of which must be unique across the dimension.<br />

❑ Preparing for the Model in Analyst<br />

After the planning measures and dimensions that are available from IBM Cognos Performance<br />

Applications have been identified, the Analyst user designs a model, and identifies any alternate<br />

data sources that are needed for the dimensions and measures. Because Performance Applications<br />

use multiple currencies for reporting, the Analyst user should determine what currency to use<br />

when data is published back into the Performance Applications warehouse.<br />

Note: If you create a D-List using the Import from IQD wizard, you should not add any items<br />

manually. If you do add items manually, these items will be removed every time you refresh<br />

the D-List.<br />

After planning models are designed and sourcing is identified, the solution to integrate the<br />

actuals information with planning information can be implemented using either the mapping<br />

table that is generated during the IQD import, or if the mapping tables are not required, you<br />

can use an IBM Cognos package as a source to populate D-Lists in Analyst.<br />

❑ Preparing e.Lists for <strong>Contributor</strong> Data<br />

As well as importing D-List data for the Analyst model, you can choose to generate e.Lists<br />

using data from IQD files, or if the data is modeled in Framework Manager and published as<br />

a package, you can also use <strong>Contributor</strong> <strong>Administration</strong> Links.<br />

For more information about financial planning with IBM Cognos performance applications and<br />

<strong>Planning</strong>, see the Analyst User <strong>Guide</strong>.<br />




Managing <strong>Contributor</strong> Master Dimensions with IBM Cognos 8<br />

Business Viewpoint Client<br />

With Business Viewpoint Client, you can nominate master dimensions from <strong>Contributor</strong> into IBM<br />

Cognos 8 Business Viewpoint Studio, which is a master dimension repository. When you nominate<br />

a <strong>Contributor</strong> dimension, you can also choose to move over e.Lists, any rights, and access tables<br />

in that dimension.<br />

You can also subscribe to a master dimension from <strong>Contributor</strong>, which creates a link between<br />

an e.List in the <strong>Contributor</strong> <strong>Administration</strong> Console and the master e.List data in Business Viewpoint<br />

Studio.<br />

When you subscribe to a master dimension from <strong>Contributor</strong>, a copy of the master dimension from<br />

Business Viewpoint Studio is moved to <strong>Contributor</strong>. If the master dimension is modified in either<br />

of these locations, the data will no longer be synchronized. To ensure that you have the same data<br />

in both places, you can run an update.<br />

You can also include a security model for the items for which you are subscribing, which lets you<br />

create or update the <strong>Contributor</strong> security to reflect the security model in Business Viewpoint Studio.<br />

For information on how to use the Business Viewpoint Client with <strong>Contributor</strong>, see the IBM Cognos<br />

8 Business Viewpoint Client User <strong>Guide</strong>. You can access it after you launch Business Viewpoint<br />

Client.<br />

Launching Business Viewpoint Client from <strong>Contributor</strong><br />


You launch Business Viewpoint Client from <strong>Contributor</strong>, and then perform all master dimension<br />

management actions from within Business Viewpoint Client.<br />

Steps<br />

1. From the <strong>Contributor</strong> <strong>Administration</strong> Console Tools menu, click Business Viewpoint Client.<br />

Note: Business Viewpoint Client is the default menu name. You can change the name by editing<br />

the string in the XML file.<br />

2. Log into Business Viewpoint Client.


Chapter 19: Example of Using IBM Cognos 8<br />

<strong>Planning</strong> with Other IBM Cognos Products<br />

IBM Cognos 8 <strong>Planning</strong> integrates with all other IBM Cognos 8 Business Intelligence products. For<br />

example, you can create reports on planning data and you can create macros with administration<br />

links that are triggered by events in planning data.<br />

The examples in this section use the Great Outdoors New Stores sample available on the IBM<br />

Cognos Resource Center Web site http://www.ibm.com/software/data/support/cognos_crc.html.<br />

The items created in this example, including the report, event, and PowerCube are included with<br />

the sample download for your reference.<br />

Before downloading and deploying the sample, you will need to install the Sun One Directory Server<br />

(downloaded from the Sun One site), create an instance of a directory server, and then import the<br />

contributor_sample.ldif file located in your Cognos directory, for example: \<br />

samples\<strong>Planning</strong>\<strong>Contributor</strong>\en\Data.<br />

To create the application, you will also need to download the e.List and rights from the Documentation<br />

page on http://www.ibm.com/software/data/support/cognos_crc.html.<br />

Download and Deploy the Sample<br />

The example requires the Great Outdoors New Stores Sample, including a <strong>Planning</strong> deployment<br />

archive and an IBM Cognos 8 deployment archive. The sample is available from the IBM Cognos<br />

Resource Center and must be imported using the IBM Cognos 8 and <strong>Planning</strong> deployment wizards.<br />

Steps to Download the New Stores Sample:<br />

1. Download, from the Documentation page on http://www.ibm.com/software/data/support/<br />

cognos_crc.html, the Great Outdoors New Stores Sample (go_new_stores_sample_en.zip).<br />

Tip: Search the IBM Cognos Resource Center Web site for the document type Utility.<br />

2. Save the files go_new_stores_contributor_data.zip and Cognos8_new_stores.zip to the<br />

deployment location set in IBM Cognos Configuration.<br />

Tip: The default location is C:\Program Files\cognos\c8\deployment.<br />

3. In the deployment location, open the compressed go_new_stores_contributor_data.zip file. The<br />

IBM Cognos 8 deployment archive (Cognos8_new_stores.zip) must remain a compressed file.<br />

4. Save PowerCube.zip to \samples\<strong>Planning</strong>\<strong>Contributor</strong>\en\Data\Data_go_new_<br />

stores_contributor. Open the compressed file.<br />

Tip: The default location for the samples folder is C:\Program Files\cognos\c8\samples.<br />


Note: Ensure you import the new_stores_rights.txt file.<br />

Steps to Deploy IBM Cognos 8 Package<br />

1. In IBM Cognos <strong>Administration</strong>, import Cognos8_new_stores.zip from the deployment archive.<br />

Tip: On the Configuration tab, click Content <strong>Administration</strong> and select New Import.<br />

2. Complete the Import wizard to import the package and data source connection.<br />

3. On the Configuration tab, click Data Source Connections, select the properties for the new<br />

data source connection, new_stores_power_cube, and update the location of the PowerCube<br />

on the Connection tab to \samples\<strong>Planning</strong>\<strong>Contributor</strong>\en\Data\Data_go_<br />

new_stores_contributor\store_cost.mdc.<br />

Note: If you do not save the store_cost.mdc PowerCube to C:\Program Files\cognos\c8\samples\<br />

<strong>Planning</strong>\<strong>Contributor</strong>\en\Data\Data_go_new_stores_contributor\, then you will have to<br />

reestablish the <strong>Administration</strong> link to this data source.<br />

Need more help?<br />

● See the Deployment section in the IBM Cognos 8 <strong>Administration</strong> and Security <strong>Guide</strong><br />

Steps to Deploy the <strong>Planning</strong> Application<br />

1. Import the go_new_stores_contributor application sample into the <strong>Contributor</strong> <strong>Administration</strong><br />

Console using the deployment wizard.<br />

Tip: Click Tools, Refresh Console after the deployment to display the application, administration<br />

link, and macro.<br />

2. Add the go_new_stores_contributor application to a job server cluster.<br />

3. If you saved the store_cost.mdc PowerCube to a location other than the default location, edit<br />

the <strong>Administration</strong> link data source and target application to map to the data source, new_<br />

stores_power_cube, and the imported planning application, go_new_stores_contributor.<br />

Tip: You do not need to change the mappings in the administration link.<br />

Need more help?<br />

● "Import a Model" (p. 170)<br />

● "Add Applications and Other Objects to a Job Server Cluster" (p. 57)<br />

● "Create an <strong>Administration</strong> Link" (p. 149)<br />

Example of Integration with IBM Cognos 8 Business Intelligence<br />


The example in this section shows you some of the ways that IBM Cognos 8 <strong>Planning</strong> works with<br />

IBM Cognos 8 Business Intelligence. It demonstrates just a few of the many ways that you can view<br />

and use your planning data.


The Central Europe region of the Great Outdoors Corporation plans to increase sales in its new<br />

stores by holding promotions. The regional manager, Sébastien Pascal, wants a report delivered to<br />

him the first day of every month that shows the current Central Europe promotions plans compared<br />

to projected costs for the promotions and average monthly store revenue.<br />

To complete this task, you need to create a report on the contributions for Central Europe and use<br />

that report in an event that delivers a scheduled news item to Sébastien Pascal. You require IBM<br />

Cognos 8 Business Intelligence products: Framework Manager, Report Studio, and Event Studio.<br />

Run a <strong>Contributor</strong> Macro to Import Data<br />

A <strong>Contributor</strong> macro named Import Average Monthly Revenue has been created and published to<br />

IBM Cognos Connection. It imports the average monthly revenue for the stores from a PowerCube<br />

and then runs Go to Production for the application.<br />

The historical average monthly revenue for Great Outdoors stores is stored in a PowerCube. The<br />

<strong>Administration</strong> link in this macro moves the data from a Framework Manager package created<br />

from the PowerCube into the Average Monthly Revenue dimension of the Store Cost cube.<br />

Steps<br />

1. From the Content <strong>Administration</strong> page on the Configuration tab in IBM Cognos <strong>Administration</strong>,<br />

click <strong>Planning</strong> and then click Macros.<br />

2. Click Run with options for the Import Average Monthly Revenue macro, and select to run<br />

now.<br />

Tip: You can view the progress of the macro in the Monitoring Console on the Macros tab.<br />

Need more help?<br />

● "Run a Macro from IBM Cognos Connection" (p. 225)<br />

Create and Publish a Framework Manager Package<br />

To use the go_new_stores_contributor application as the basis for reporting, you must create a<br />

Framework Manager package. After the package is created and published, it can be used to trigger<br />

events and create reports.<br />

Steps<br />

1. Publish the go_new_stores_contributor application using Table-only Layout publish.<br />

Include all cubes and e.List Items in the publish and configure a Publish Datastore. Name the<br />

datastore, go_new_stores_table, and add it to the job server cluster.<br />

Tip: Clear Prefix column names with data type on the Options tab.<br />

You can view the progress of the publish in the Monitoring Console on the Job Server Clusters<br />

tab.<br />


2. Run the Generate Framework Manager Model admin extension. Name the package<br />

new_stores_FM_model and store the Framework Manager Model in \samples\<br />

<strong>Planning</strong>\<strong>Contributor</strong>\en\Data\Data_go_new_stores_contributor.<br />


Select all cubes and the data source query subjects Unformatted lists and Complete hierarchy<br />

lists for the model.<br />

3. In Framework Manager, open the model created by the Framework Manager extension.<br />

4. Create a filter on Measure Dimension Promotions Plan to exclude the ALL RETAILERS D-List<br />

item.<br />

Tip: Double-click on the Measure Dimension Promotions Plan in the Business View and click<br />

the Filters tab. Use Retailer Type and the not like operator.<br />

5. Rename Complete Dimension 2eList to REGIONS from within Promotions Plan/Complete Star<br />

Schema for Promotions Plan.<br />

Tip: <strong>Planning</strong> levels are numbered in Framework Manager. To make them easier to use in<br />

Report Studio, rename the levels to reflect the content.


6. Select the new_stores_FM_model package and publish the package to make it available in IBM<br />

Cognos Connection.<br />

Need more help?<br />

● "Create a Table-only Publish Layout" (p. 277)<br />

● "The Generate Framework Manager Model Admin Extension" (p. 306)<br />

● See the IBM Cognos 8 Framework Manager User <strong>Guide</strong><br />

Create a Report<br />

You are now able to create a report on <strong>Planning</strong> data to compare the cost of promotions against<br />

the planned promotion value. This report will use a crosstab report to compare information that<br />

uses one or more criteria and a chart to reveal trends and relationships.<br />

Your final report for budget version 1 will look like this.<br />


Steps to Create the Crosstab<br />

1. In Report Studio, create a new blank report that uses the sample package named<br />

new_stores_FM_model.<br />

2. Create a table (2 columns by 4 rows) to be used as the template for the report.<br />

Tip: Use the tool box to drag a table into the report area.<br />

3. Using a text item, create headings for Budget Version 1 (Central Europe) and Budget Version<br />

2 (Central Europe).<br />

Tip: Drag a text item to the first and the third rows of the first column.<br />

4. Drag a crosstab to the cell in the second row of the first column.<br />

5. Add Central Europe from the following hierarchy - Business View/Promotions Plan/Complete<br />

Star Schema for Promotions Plan\REGIONS\Complete hierarchy 2 eList\Members\Complete<br />

hierarchy 2eList(All)\All Subsidiaries\All Europe.<br />

Tip: Use the Source tab in the Insertable Objects pane.



6. From within Promotions Plan, add the following data items to the rows:<br />

● From Measure Dimension Promotion Plan select Franchise/Corporate<br />

● From Measure Dimension Promotion Plan select Month of Promotion<br />

● From Complete Star Schema for Promotions Plan, select Members 1 through 10 from the<br />

Complete Dimension 3 ID numbers<br />

● From Measure Dimension Promotion Plan select Retailer Type<br />

Note: Make sure you choose the Retailer Type that you changed in your model. Drag<br />

Retailer Type to each member 1 - 10.<br />

7. In the Query Explorer, drag the Budget version 1 dimension from Complete Dimension 5 Versions<br />

into the Slicer.<br />


8. From within Promotions Plan, add the following Measure Dimension Promotions Plan data<br />

items to the columns:<br />

● Promotion Costs<br />

● Planned Promotion Value<br />

Note: To hide the ID numbers, edit the ID text to override the default text.<br />

9. In the Query Explorer, add Average Monthly Revenue to Data Items from New Store Plan and<br />

use the Budget version 1 dimension in the Slicer.<br />

10. Copy and paste the crosstab into the cell in the fourth row of the first column.<br />

Tip: Select the crosstab in the properties pane to create the copy.<br />

11. Create a second query, a copy of Query1, and apply it to the second crosstab. Change the Slicer<br />

so that Query 2 applies to Budget version 2.



12. Run the report to view the crosstabs.<br />

Your crosstabs for budget version 1 and budget version 2 will look like this.<br />


Steps to Create a Combination Chart<br />

1. From the tool box, drag a chart to the table cell to the right of the crosstab in the second row.<br />

Select the default combination chart. In the properties pane for the combination chart, change<br />

Query to Query1 to use the Budget Version 1 data.<br />

2. From the data items Insertable Objects tab for Query 1, drag the following data items into<br />

the Category (x-axis):<br />

● Central Europe<br />

● Franchise/Corporate<br />

● Month of Promotion<br />

3. Drag the following data items from Query 1 into the Series:<br />

● Promotion Costs<br />

Below Promotion Costs, add the ID series one through ten (1-10)<br />

Hint: Drag each number and drop within each previous number.<br />

● Planned Promotion Value<br />

For Planned Promotion Value, change the chart type to line.<br />

● Average Monthly Revenue<br />

4. Create a Statistical Maximum Baseline in Properties, Chart Annotations, Baselines. Click OK.


5. Set Planned Promotion Value in the baseline properties.<br />

6. Change the Combination Chart Axis property of the Y2 Axis to Show.<br />

7. Reset the Y1 Axis maximum value to 400,000.<br />

Hint: Click the vertical scale on the left side of the chart, and then in the Properties - Y1 Axis<br />

box, set the Maximum Value.<br />

8. Change the Line Styles to dotted red line and rename Statistical Maximum to Planned Promotion<br />

Value.<br />


9. Copy and paste the combination chart into the cell in the fourth row of the second column and<br />

apply Query 2 to the second chart.<br />

10. Run the report to view what it will look like.<br />

11. Save the report as Central Europe Promotions Report.<br />

Your charts for budget version 1 and budget version 2 will look like this.


Need more help?<br />

● See the IBM Cognos 8 Report Studio User <strong>Guide</strong><br />

Create an Event Studio Agent<br />

Create and schedule an Event Studio Agent to deliver the report as a news item on the first day of<br />

every month if the value of the planned promotions is greater than $20,000.<br />

Tip: Create a signon to the data source connection so that users don't have to enter database credentials<br />

when they run reports or reports are run by an event. For more information, see the section<br />

Create or Modify a Data Source Signon in the IBM Cognos 8 <strong>Administration</strong> and Security <strong>Guide</strong>.<br />

Steps<br />


1. Open Event Studio using the package named new_stores_FM_model.<br />

2. Create an agent with an event condition expression for Planned Promotion Value greater than<br />

20,000. This event will be scheduled to run once a month; when the event condition expression<br />

returns a result, the event is triggered.<br />

Tip: The Expression should look like the following:<br />

[Promotions Plan Value]>20000.<br />

Validate the expression, then preview the results to check that the expression returns<br />

a result.<br />

3. Include the Run a report task and select the Central Europe Promotions Report.<br />


4. Include a Publish a news item task.<br />

In the Headline box, type Central Europe New Store Promotions Available.<br />

Under Link to, click Select an entry and select the Central Europe Promotions Report.<br />

5. Select My Folders as the News list locations. The news item will be published to this location.<br />

6. Click Schedule the agent … and select the By Month tab. Schedule the agent for the first day<br />

of every month.<br />

7. Save the event as New Stores Event. In IBM Cognos Connection, click Run with options<br />

for the New Stores Event to run the event now.<br />

View the news item, containing Central Europe Promotions Report, in IBM Cognos Connection<br />

on the My Folders tab. This news item will be created the first day of every month.<br />

Need more help?<br />

● See the IBM Cognos 8 Event Studio User <strong>Guide</strong>


Chapter 20: Upgrading IBM Cognos 8 <strong>Planning</strong> -<br />

<strong>Contributor</strong><br />

You can upgrade users, user classes, groups, libraries, and <strong>Contributor</strong> applications from previous<br />

IBM Cognos <strong>Planning</strong> versions. Depending on your version, you can upgrade the entire <strong>Planning</strong><br />

Application Database or you can upgrade individual applications.<br />

Use the following table to determine which upgrade tool you need according to the version of<br />

<strong>Planning</strong> you currently have installed.<br />

<strong>Planning</strong> <strong>Administration</strong> Domain Wizard (current <strong>Planning</strong> version 7.3 or 8.1): Upgrades applications,<br />

administration links, and macros from the source <strong>Planning</strong> <strong>Administration</strong> Domain. The security<br />

and access rights for the <strong>Planning</strong> <strong>Administration</strong> Domain objects are remapped in the new environment.<br />

Upgrade Application Wizard (current <strong>Planning</strong> version 7.2, 7.3, or 8.1): Upgrades individual or<br />

multiple applications at once. The security of each application is updated in the new environment,<br />

but the access rights are not updated.<br />

Deployment Wizard (current <strong>Planning</strong> version 8.2 or higher): Enables you to export and import<br />

complete models, macros, administration links, or Analyst libraries. You can export individual or<br />

multiple <strong>Planning</strong> <strong>Administration</strong> Domain objects.<br />

For detailed information about the upgrade process for <strong>Planning</strong>, see the IBM Cognos 8 <strong>Planning</strong><br />

Installation and Configuration <strong>Guide</strong>.<br />
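For quick reference, the version-to-tool mapping in the table above can be expressed as a small helper. This is an illustrative sketch only; the function name and version strings are ours, not part of the product:

```python
def available_upgrade_tools(version: str) -> list[str]:
    """Return the upgrade tools that apply to an installed IBM Cognos
    Planning version, following the table above (illustrative only)."""
    tools = []
    if version in ("7.3", "8.1"):
        tools.append("Planning Administration Domain Wizard")
    if version in ("7.2", "7.3", "8.1"):
        tools.append("Upgrade Application Wizard")
    # "8.2 or higher": compare the numeric major/minor parts.
    major, minor = (int(part) for part in version.split("."))
    if (major, minor) >= (8, 2):
        tools.append("Deployment Wizard")
    return tools
```

For example, a 7.3 installation can use either the Planning Administration Domain Wizard or the Upgrade Application Wizard, while an 8.2 or later installation uses the Deployment Wizard.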

Upgrade the <strong>Planning</strong> <strong>Administration</strong> Domain<br />

Administrators use the <strong>Planning</strong> <strong>Administration</strong> Domain Wizard from the <strong>Contributor</strong> <strong>Administration</strong><br />

Console to upgrade <strong>Planning</strong> <strong>Administration</strong> Domains. Each <strong>Planning</strong> <strong>Administration</strong> Domain<br />

must be upgraded separately.<br />


When you upgrade from IBM Cognos 8 <strong>Planning</strong> version 7.3, we recommend that you upgrade the<br />

<strong>Planning</strong> <strong>Administration</strong> Domain into the current <strong>Planning</strong> store.<br />

When you upgrade from IBM Cognos 8 <strong>Planning</strong> version 8.2, you upgrade the <strong>Planning</strong> <strong>Administration</strong><br />

Domain and the associated applications at the same time.<br />

Before you upgrade the <strong>Planning</strong> <strong>Administration</strong> Domain, you must configure at least one datastore<br />

(p. 48), job server cluster (p. 56), and job server (p. 56).<br />

Steps<br />

1. In the <strong>Contributor</strong> <strong>Administration</strong> Console, click Tools, Upgrade <strong>Planning</strong> <strong>Administration</strong><br />

Domain.<br />

2. Click Next.<br />

3. Configure the datastore server connection for the datastore server that contains the <strong>Planning</strong><br />

<strong>Administration</strong> Domain.<br />

4. Select the Datastore provider.<br />

The options are SQL Server, Oracle or DB2.<br />

5. Do one of the following:<br />

● For SQL Server, enter the Datastore server name, or click the browse button to list the<br />

available servers.<br />

● For Oracle, enter the service name.<br />

● For DB2, enter the database name.<br />

6. Enter the information as described in the table below:<br />

Trusted Connection: Click to use Windows authentication as the method for logging on to the<br />

datastore. You do not have to specify a separate logon ID or password. This method is common for<br />

SQL Server datastores and less common, but possible, for Oracle.<br />

Use this account: Enter the datastore account that this application will use to connect. This box is<br />

not enabled if you use a trusted connection.<br />

Password: Type the password for the account. This box is not enabled if you use a trusted connection.<br />

Preview Connection: Provides a summary of the datastore server connection details.<br />

Test Connection: Mandatory. Click to check the validity of the connection to the datastore server.<br />

7. If you want to configure advanced settings, click Advanced.<br />

Typically these settings should be left as the default. They may not be supported by all datastore<br />

configurations.<br />

Enter the following information.<br />

Provider Driver: Select the appropriate driver for your datastore.<br />

Connection Prefix: Specify to customize the connection strings for the needs of the datastore.<br />

Connection Suffix: Specify to customize the connection strings for the needs of the datastore.<br />

8. Select the <strong>Planning</strong> <strong>Administration</strong> Domain that you want to upgrade, test the connection, and<br />

click Next.<br />

9. Click the namespace to secure the objects against and click Next.<br />

10. If you want to upgrade existing <strong>Planning</strong> <strong>Administration</strong> Domain objects, select the Replace<br />

objects if they exist in the current <strong>Planning</strong> Store check box.<br />

If you do not select this option and the objects exist, they are not upgraded.<br />

11. Click Next.<br />

12. In the Map <strong>Planning</strong> <strong>Administration</strong> Domain Objects page, in the Target column, click the<br />

options that you want.<br />

You must map the job servers and job clusters that are configured in the source <strong>Planning</strong><br />

<strong>Administration</strong> Domain to the upgraded job server and job server clusters so that macros are<br />

upgraded correctly.<br />

13. If you want to choose defaults that apply to all applications, click Map All Applications and<br />

complete the fields.<br />

14. Click Finish.<br />

A log located in the %Temp%\epUpgrade directory notifies you of any warnings that occur while<br />

upgrading the <strong>Planning</strong> <strong>Administration</strong> Domain.<br />
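The Connection Prefix and Connection Suffix settings in step 7 are free-form fragments placed around the connection string that the Administration Console generates. Conceptually, assuming semicolon-delimited keyword=value pairs (the server name, catalog, and option below are hypothetical, and the real string is assembled by the console itself):

```python
def build_connection_string(core: str, prefix: str = "", suffix: str = "") -> str:
    """Join optional prefix/suffix fragments around the generated core
    connection string (illustrative sketch, not the console's actual code)."""
    return ";".join(part for part in (prefix, core, suffix) if part)

# Hypothetical SQL Server datastore with a custom suffix option:
core = "Data Source=PLANSRV01;Initial Catalog=PlanningDomain"
print(build_connection_string(core, suffix="Packet Size=8192"))
```

Leave both fields empty unless your datastore requires provider-specific options.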


Upgrade <strong>Contributor</strong> Applications<br />


You can upgrade an individual application or multiple applications at once.<br />

Tip: Upgrade applications separately if the end of the plan year for each application is different<br />

and you do not want to upgrade an application mid-year.<br />

Before you use the Upgrade Application Wizard, do the following:<br />

❑ Upgrade your directory server.<br />

❑ Upgrade other IBM Cognos products.<br />

❑ Stop scheduled scripts from running, if appropriate.<br />

❑ Ensure that all jobs are complete.<br />

The Upgrade Application Wizard does not upgrade the following:<br />

● data in the source application<br />

● Admin extensions<br />

IBM Cognos <strong>Planning</strong> 7.2 Admin extensions are removed because substantial changes were<br />

made to the extensions for the current version.<br />

● audit information<br />

History table data and Job metadata are not upgraded. When the Go to Production process is<br />

run, cut-down model information is automatically generated after upgrading.<br />

● <strong>Contributor</strong> for Excel<br />

This is a separate Web client installation. To upgrade, the previous version must be uninstalled<br />

and the new version installed.<br />

● scripts<br />

In <strong>Contributor</strong> 7.2, you automated <strong>Contributor</strong> functionality using scripts. This functionality<br />

was replaced by macros. You cannot migrate your 7.2 scripts to macros. For more information,<br />

see "Automating Tasks Using Macros" (p. 193).<br />

● published data<br />

Publish datastores are not upgraded. Prior publish datastores can be retained and are compatible<br />

with 8.3. However, if you need to recreate your publish datastore as part of your 8.3 deployment,<br />

it is recommended that you rebuild it as UTF-16 to better conform to global business standards<br />

and to ensure easier compatibility with future IBM Cognos releases.<br />

You cannot publish to the <strong>Contributor</strong> application container. You must publish to a separate<br />

container. We recommend that you compare the results of publishing from earlier versions of<br />

<strong>Contributor</strong> with publishing in the current version to ensure that the publishing is performing<br />

as required. Any publish scripts must be re-created using the new macro functionality.<br />

● Analyst><strong>Contributor</strong> links<br />

When you upgrade applications that contain Analyst><strong>Contributor</strong> links, you must open the<br />

link in Analyst and reselect the source and target of the link. For more information, see "Update


a Link from a Computer That Cannot Access the Original Datastore" (p. 351), and the Analyst<br />

User <strong>Guide</strong>.<br />

Analyst and <strong>Contributor</strong> macros that use Analyst><strong>Contributor</strong> links will fail if you do not<br />

update the source and target of the link.<br />

To upgrade an application, you must have the <strong>Planning</strong> Rights <strong>Administration</strong> capability. By default,<br />

this capability is granted to the <strong>Planning</strong> Rights Administrators role.<br />

Before upgrading an earlier <strong>Contributor</strong> version to the current version, we recommend that you<br />

install on a separate server and then upgrade each application.<br />

We recommend that you back up the data stores that you intend to upgrade.<br />

For more information, see "Security" (p. 29) and the IBM Cognos 8 <strong>Administration</strong> and Security<br />

<strong>Guide</strong>.<br />

Steps<br />

1. Under Datastores, click the required datastore, right-click Applications, and click Upgrade<br />

Application.<br />

2. Click Add.<br />

3. Select the Datastore provider.<br />

The options are SQL Server, Oracle or DB2.<br />

4. Do one of the following:<br />

● For SQL Server, enter the Datastore server name, or click the browse button to list the<br />

available servers.<br />

● For Oracle, enter the service name.<br />

● For DB2, enter the database name.<br />

5. Enter the information as described in the table below:<br />

Trusted Connection: Click to use Windows authentication as the method for logging on to the<br />

datastore. You do not have to specify a separate logon ID or password. This method is common for<br />

SQL Server datastores and less common, but possible, for Oracle.<br />

Use this account: Enter the datastore account that this application will use to connect. This box is<br />

not enabled if you use a trusted connection.<br />

Password: Type the password for the account. This box is not enabled if you use a trusted connection.<br />

Preview Connection: Provides a summary of the datastore server connection details.<br />

Test Connection: Mandatory. Click to check the validity of the connection to the datastore server.<br />

6. If you want to configure advanced settings, click Advanced.<br />

Typically these settings should be left as the default. They may not be supported by all datastore<br />

configurations.<br />

Enter the following information.<br />

● <strong>Provider Driver</strong>: Select the appropriate driver for your datastore.<br />

● <strong>Connection Prefix</strong>: Specify to customize the start of the connection string for the needs of the datastore.<br />

● <strong>Connection Suffix</strong>: Specify to customize the end of the connection string for the needs of the datastore.<br />
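The effect of the prefix and suffix settings can be pictured as simple concatenation around the base connection string. The sketch below is illustrative only; the base string, server name, and property names are hypothetical, not the actual strings the <strong>Administration</strong> Console generates.<br />

```python
# Illustrative sketch only: a connection prefix and suffix wrap the base
# connection string that the Administration Console generates internally.
# The base string and property names here are hypothetical.
def build_connection_string(server, database, prefix="", suffix=""):
    base = f"Data Source={server};Initial Catalog={database}"
    return f"{prefix}{base}{suffix}"

plain = build_connection_string("PLANSRV01", "ContributorApp")
custom = build_connection_string("PLANSRV01", "ContributorApp",
                                 suffix=";Packet Size=8192")
print(plain)
print(custom)
```

In this reading, the prefix and suffix simply customize the generated string without replacing it, which is why both settings can normally be left empty.<br />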

7. Select the application that you want to upgrade, test the connection, and click Next.<br />

8. Choose whether to create the datastore now and continue upgrading the application (Create

and populate datastore now), or to exit the wizard and create and populate the datastore using

scripts (Generate datastore scripts and data files). In either case, click Next.<br />

9. If you chose to use a script, give the script to your DBA to have the datastore created.<br />

You can later link to the new datastore using the <strong>Contributor</strong> <strong>Administration</strong> Console,

which resumes the upgrade wizard.<br />

10. If you chose to create the datastore now, do the following:<br />

● Click the namespace which will secure the upgraded application and click Next.<br />

● Click Finish.<br />

The Upgrade Application(s) page appears with the application that you specified added to<br />

the list of applications that can be upgraded.<br />

● Repeat steps 1 to 10 for each application that you want to upgrade.<br />

● Click Upgrade.<br />

The results of the upgrade show in the Upgrade log for application(s) page.


The wizard:<br />

● creates a new application datastore, or enables you to create a script that can be run by a<br />

database administrator<br />

● shows whether any users are working offline, because offline data cannot be upgraded

due to a new caching file that is used in the current version of <strong>Contributor</strong><br />

● automatically updates <strong>Contributor</strong> translation information in the datastore, requiring no manual<br />

configuration<br />

● upgrades access tables and saved selections requiring no further configuration<br />

● retains all configuration options, except those stated<br />

● saves an import/upgrade log named application_dataStore_ImportLog.txt to your<br />

%temp%/epUpgrade directory<br />

● upgrades the following client extensions for the Classic <strong>Contributor</strong> Web Client: Excel Export<br />

(Export for Excel), Client Loader (Get Data), Excel Print (Print to Excel)<br />

● applies default access rights<br />

For more information, see "Configuring Access to the <strong>Contributor</strong> <strong>Administration</strong> Console" (p. 38).<br />

You must add the application to a job server or job server cluster (p. 52), run Go to Production<br />

(p. 243), and set up the <strong>Contributor</strong> Web site to enable users to access <strong>Contributor</strong> applications<br />

(p. 78).<br />

Upgrade Security<br />

If your version 7.2 <strong>Planning</strong> application was secured using <strong>Contributor</strong> native security, you can

upgrade directly to an IBM Cognos 8 namespace.<br />

If your <strong>Planning</strong> application or <strong>Planning</strong> <strong>Administration</strong> Domain was secured by a Series 7 namespace<br />

that was administered by Access Manager, you can upgrade your security to an IBM Cognos 8<br />

namespace using the <strong>Contributor</strong> <strong>Administration</strong> Console deployment wizard.<br />

To upgrade your security, you must configure IBM Cognos 8 <strong>Planning</strong> to use both the Series 7<br />

namespace that was originally used as well as the namespace to which you are upgrading.<br />

You can use the Deployment Wizard to export the <strong>Planning</strong> application or the <strong>Planning</strong> <strong>Administration</strong> Domain, and then import the application or domain again. During the import, you can map the security to your new namespace.<br />

Chapter 20: Upgrading IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong><br />

After you have upgraded the security for all of your applications or your <strong>Planning</strong> <strong>Administration</strong><br />

Domain, you can remove the Series 7 namespace from your configuration.<br />

For more information, see "Deploying the <strong>Planning</strong> Environment and Viewing the Status of<br />

Deployments" (p. 170) and the Installation and Configuration <strong>Guide</strong>.<br />


Accessing <strong>Contributor</strong> Applications<br />


<strong>Contributor</strong> applications are accessed from the IBM Cognos Connection portal, typically http://<br />

servername/cognos8.<br />

For users with bookmarks to the <strong>Contributor</strong> client, either inform them of the application URL,

or use URL redirection to the new URL.<br />

If you have limited WAN bandwidth, or your users do not have sufficient rights on their local computers to install the CAB files, we recommend that you deploy the Web application to Web client computers by using the full client installation rather than the CAB download option.<br />

For more information, see the IBM Cognos 8 <strong>Planning</strong> Installation and Configuration <strong>Guide</strong>.


Chapter 21: Analyst Model Design Considerations<br />

You can design an Analyst model to be used for a <strong>Contributor</strong> application and create links between<br />

Analyst and <strong>Contributor</strong> (p. 347).<br />

Designing an Analyst Model for <strong>Contributor</strong><br />

When designing an Analyst model that is used to create a <strong>Contributor</strong> application, the following<br />

things must be considered:<br />

● "Analyst Library <strong>Guide</strong>lines" (p. 337)<br />

● "D-Cube Restrictions" (p. 338)<br />

● "D-Links" (p. 339)<br />

● "Dimensions" (p. 341)<br />

● "Creating Applications with Very Large Cell Counts" (p. 345)<br />

Analyst Library <strong>Guide</strong>lines<br />

When you design an Analyst model to be used for a <strong>Contributor</strong> application, consider whether it<br />

has D-Lists, formats, and A-Tables that can be shared with other <strong>Contributor</strong> applications. If so,<br />

you can create a common library to contain any D-Lists to be shared between Analyst models. You<br />

then create the main Analyst library, which must contain all D-Cubes, the e.List, and all update<br />

links.<br />

The following restrictions apply:<br />

● A maximum of two libraries can be used per Analyst model.<br />

All objects on which the model depends must be contained in the main library and the common<br />

library.<br />

● Update links must target the specific cubes in the main library.<br />

● The <strong>Contributor</strong> administrator must have write access to all objects used in the Analyst model.<br />

● D-List names used by the Analyst model must be unique.<br />

This includes D-Lists used as format lists.<br />

● D-Cube names must be unique.<br />

● If a cube consists of D-Lists that are all from the common library, it is an assumption cube.<br />

Because of this, such assumption cubes do not appear when selecting the e.List in the Administration Console during application creation.<br />

● D-Cubes used as allocation tables must be contained in the main library.<br />



You do not explicitly select the other library. This is the first library other than the template library that is referenced when looking for dependencies on other objects. If there are references to more than one other library, errors are reported. It may be necessary to trace dependencies in Analyst to establish where the reference occurred.<br />

D-Cube Restrictions<br />

D-Cube options can cause problems in <strong>Contributor</strong> applications.<br />

The following options are not supported in <strong>Contributor</strong> but do not stop a <strong>Contributor</strong> application<br />

from working:<br />

● All settings in the D-Cube, Options menu: Widths, Lines, Zeros, Break-back, and Stored Copy<br />

● Integer break-back<br />

This is ignored, producing different results in <strong>Contributor</strong> if break-back is switched on in<br />

<strong>Contributor</strong><br />

● D-Cube Sort<br />

The following D-Cube cell options are ignored: holds, locks, protects, and annotations.<br />

Forward-referenced Weighted Averages<br />


Forward-referenced weighted averages prevent a <strong>Contributor</strong> application from being created and<br />

synchronized.<br />

If you weight an item by a calculated item, the priority of that calculated item must be lower than<br />

the priorities of subtotals in other dimensions. Or, if all the priorities are equal, the dimension<br />

containing the weighted average must be first in the D-Cube dimension order.<br />

Forward weighted averages are present if an item is weighted by a high priority calculation, and<br />

subtotals in other dimensions are medium or low priority. e.List review items are medium priority<br />

subtotals in a <strong>Contributor</strong> application, regardless of any priority settings in the Analyst placeholder<br />

e.List. Forward weighted averages are also present if an item is weighted by a medium priority<br />

calculation, and other dimensions containing medium or low priority subtotals appear in the D-Cube dimension order.<br />

This restriction applies both to assumption and contribution cubes.<br />

Example<br />

In the following example, Sales is calculated as Total Units * Price and Price is weighted by Total<br />

Units.<br />

The weighted average in Price, Q1 is calculated correctly by <strong>Contributor</strong> if the weightings in cells Total Units, Jan to Total Units, Mar are already calculated. In other words, Q1 must be a higher priority calculation than Total Units. Or, if the priorities are equal, the time dimension should be later in the cube than the Sales calculation dimension.<br />

If Total Units is higher priority than Q1, the cell Price, Q1 is calculated before the weightings in cells Total Units, Jan to Total Units, Mar are calculated. As a result, the weighted average cannot be calculated correctly.<br />
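The dependency can be shown numerically. The figures below are invented for illustration; they are not taken from the product.<br />

```python
# Invented figures illustrating the calculation-order requirement:
# Price is weighted by Total Units, and Sales = Total Units * Price.
units = {"Jan": 100, "Feb": 200, "Mar": 100}
price = {"Jan": 2.0, "Feb": 3.0, "Mar": 2.0}

# Correct order: the Q1 weightings exist before Price, Q1 is formed.
q1_units = sum(units.values())
q1_price = sum(price[m] * units[m] for m in units) / q1_units

# Wrong order: Price, Q1 evaluated while the weightings are still zero.
stale = {m: 0 for m in units}
stale_total = sum(stale.values())   # 0, so the average cannot be formed

print(q1_units, q1_price, stale_total)   # 400 2.5 0
```

With the weightings available, Price, Q1 is the units-weighted average 2.5; with stale (zero) weightings, the denominator is zero and no meaningful average exists, which is the failure the priority rule prevents.<br />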

D-Links<br />

There is a lot to consider relating to D-Links when designing a <strong>Contributor</strong> model in Analyst.<br />

Target areas of D-Links are automatically protected in <strong>Contributor</strong> to prevent a D-Link from<br />

overwriting data provided by a planner. You do not have to protect D-Link target areas using access<br />

tables.<br />

D-Links Run Automatically in <strong>Contributor</strong><br />

All model update D-Links must be included in D-Cube update lists, but there is no need for update macros. When a planner clicks a tab in the Web front end, <strong>Contributor</strong> checks whether any D-Links into the cube must be run. They do not need to be run if the source data for the D-Links is unchanged. If D-Links must be run, <strong>Contributor</strong> runs all the update D-Links for the cube in the order they appear in the D-Cube update list before the selected tab appears.<br />

When a planner saves or submits, <strong>Contributor</strong> checks again to see whether D-Links must be run. These links are run before data is sent back to the server.<br />

Data in a model can be changed using the <strong>Contributor</strong> <strong>Administration</strong> Console through assumption cubes (after application creation or synchronize), or by importing data. Planners see a changed application after all changed data blocks are updated through the reconciliation process.<br />

D-Links not included in D-Cube update lists are ignored, as are import D-Links, which are D-Links from external sources.<br />

Update links are used in the <strong>Contributor</strong> model only if the Execute check box for the D-Link is selected.<br />

Special D-Links<br />

Lookup D-Links, internal D-Links, and some break-back D-Links run automatically as relevant data is changed on a particular tab.<br />

For example, with lookup D-Links, if a planner changes a D-List formatted value in the lookup<br />

target cube and presses Enter, lookup D-Links are run into the cube.<br />

Automatically executing internal D-Links can be useful for solving problems, such as expressing values as a percentage of a total. A lookup link can change its own source data so that the same D-Link needs to be run again. In <strong>Contributor</strong>, such internal D-Links run until the source data stops changing, or up to a maximum of 100 times.<br />
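This behavior amounts to iterating the link until a fixed point is reached, capped at 100 passes. The sketch below is schematic; the percent-of-total link function is invented for illustration and is not the engine's actual implementation.<br />

```python
# Schematic sketch of Contributor's internal D-Link behavior: the link is
# re-run until its source data stops changing, or 100 times at most.
MAX_RUNS = 100

def run_until_stable(data, link):
    for run in range(1, MAX_RUNS + 1):
        new_data = link(data)
        if new_data == data:          # source data stopped changing
            return new_data, run
        data = new_data
    return data, MAX_RUNS

def percent_of_total(values):
    # Invented example link: express each value as a percent of the total.
    total = sum(values) or 1
    return [round(100.0 * v / total, 2) for v in values]

result, runs = run_until_stable([1.0, 2.0, 5.0], percent_of_total)
print(result, runs)   # [12.5, 25.0, 62.5] 2
```

The first pass rewrites the values as percentages; the second pass produces the same output as its input (the percentages of a 100 total are the percentages themselves), so iteration stops after two runs rather than the 100-run cap.<br />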

A D-Link that targets a subtotal to perform a break-back allocation to detail items is named a break-back D-Link. The detail items can be writable by a planner, although normally another D-Link supplies the weightings for these detail items. The planner can change values in these detail items if they are not supplied by another D-Link. The break-back D-Link runs automatically when the planner presses Enter.<br />


D-Links and Data Types<br />


The <strong>Contributor</strong> D-Link engine has a formal treatment of data types that Analyst does not have.<br />

<strong>Contributor</strong> recognizes four data types: number, date, text, and D-List formatted.<br />

Some operations on these data types are not supported in <strong>Contributor</strong>. For example, the <strong>Contributor</strong><br />

link engine does not:<br />

● transfer D-List formatted values into numeric formatted cells<br />

● add numbers to D-List formatted items<br />

● multiply dates<br />

● subtract text from numbers<br />

● add, multiply, or subtract text values or D-List formatted items<br />

All operations on mixed data types are considered invalid by the <strong>Contributor</strong> link engine with one<br />

exception. You can add numbers to or subtract numbers from dates.<br />

The results obtained in <strong>Contributor</strong> when performing invalid data type operations depend on the<br />

D-Link mode. Fill puts zeros into the relevant target cells, whereas Substitute leaves the relevant<br />

target cells alone. Add and Subtract effectively behave like Substitute, adding or subtracting zero.<br />
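The mode-dependent behavior described above can be modeled as a small dispatch. This is a simplified sketch, not the engine's actual implementation; the type names and function names are invented, and only the one valid mixed-type case (adding numbers to or subtracting them from dates) is special-cased.<br />

```python
# Simplified model of how the Contributor link engine might treat an
# invalid data-type operation, per D-Link mode. Names are invented.
def is_valid(source_type, target_type, mode):
    if source_type == target_type:
        return True
    # the one exception: numbers may be added to or subtracted from dates
    return (source_type == "number" and target_type == "date"
            and mode in ("add", "subtract"))

def apply_link(target_value, source_value, source_type, target_type, mode):
    if is_valid(source_type, target_type, mode):
        return source_value            # normal transfer (simplified)
    if mode == "fill":
        return 0                       # Fill puts zeros in the target cells
    return target_value                # Substitute/Add/Subtract: no change

print(apply_link(42, "abc", "text", "number", "fill"))        # 0
print(apply_link(42, "abc", "text", "number", "substitute"))  # 42
```

The sketch shows the stated contrast: on an invalid combination, Fill zeroes the target cell, while Substitute, Add, and Subtract leave it untouched (effectively adding or subtracting zero).<br />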

In Analyst, all operations on data types are permitted. The Analyst D-Link engine simply operates<br />

on the underlying values.<br />

Invalid D-Links<br />

Invalid D-Links prevent <strong>Contributor</strong> applications from being created, and also prevent synchronization.<br />

D-Links that use the e.List in any way other than the following are invalid:<br />

● A link between an assumption cube, which does not have an e.List, and a target cube with

an e.List, where nothing is selected on the e.List and it is unmatched.<br />

● Where the e.List is present in both the source and the target, and the matching is done using<br />

match descriptions with the default options: Case Sensitive On, Match Calculated Target Items<br />

Off.<br />

The following are invalid D-Links:<br />

● A D-Link from a cube with the e.List that targets an assumption cube.<br />

● D-Cube allocation tables that are not assumption cubes, which means they contain the e.List.<br />

● Allocation tables that contain deleted dimension items but which have not had these items<br />

removed or edited in Analyst.<br />

● A D-Link that needs editing.<br />

Analyst issues a warning such as:<br />

A dimension has been removed from the target cube since the link was last saved. Please edit and resave the link.<br />


● A D-Link that targets the wrong cube.<br />

For example, you include a D-Link in an update list for a particular cube. Then, by editing the<br />

D-Link, you change the target cube of the link, leaving the D-Link in the update list for the<br />

original cube.<br />

D-Links Between Assumption Cubes and Contribution Cubes<br />

D-Links between contribution cubes and from an assumption cube to a contribution cube are allowed.<br />

D-Links between assumption cubes have no effect in a <strong>Contributor</strong> application, but do not cause an error or warning. You should run the D-Link in Analyst before building the model.<br />

D-Links from a contribution cube to an assumption cube are not allowed.<br />

Dimensions<br />

A dimension is also referred to as a D-List in Analyst. Consider the following when designing dimensions:<br />

● "Dimension Order" (p. 341)<br />

● "Supported BiFs" (p. 343)<br />

● "Format Priorities in Analyst and <strong>Contributor</strong>" (p. 345)<br />

● "Scaling Differences Between Analyst and <strong>Contributor</strong>" (p. 345)<br />

Dimension Order<br />

As a general rule, in an Analyst model, choose dimensions in the following order:<br />

1. Calculation D-Lists, such as P&L and Balance sheet D-Lists.<br />

2. The e.List.<br />

3. Other aggregation D-Lists, such as products, customers, divisions, cost-centers, regions, or

subsidiaries.<br />

4. Time D-Lists, such as months, quarters, or years.<br />

Only one timescale D-List can be chosen in each D-Cube.<br />

5. Control D-Lists, such as Actual/Budget/Variance.<br />

We recommend that the e.List is second in the dimension order, because this affects the size of the<br />

data blocks. The data blocks store all detail cells for each cube, together with any calculated cells<br />

for which the calculation comes earlier in the calculation sequence than the aggregations up the<br />

e.List. These calculations are referred to as pre-aggregation calculations. The dimension order is<br />

the primary method for controlling the calculation sequence. As a result, the position of the e.List<br />

in the dimension order affects the number of cells stored in the blocks, and therefore the block size.<br />

In many cases you can choose a different dimension order without affecting the calculations, and<br />

this can be used to minimize the block size.<br />

Chapter 21: Analyst Model Design Considerations<br />

<strong>Administration</strong> <strong>Guide</strong> 341


Chapter 21: Analyst Model Design Considerations<br />

Example<br />

For example, in the cube Revenue Plan, the dimensions are:<br />

● Product Gross Margin<br />

● Indoor and Outdoor Products<br />

● Channels<br />

● e.List<br />

● Months<br />

● Versions<br />

With this order, the calculated items on the dimensions Indoor and Outdoor Products and Channels<br />

are stored on the data blocks.<br />

The dimensions can be reordered as follows without changing the calculation results:<br />

● Product Gross Margin<br />

● e.List<br />

● Indoor and Outdoor Products<br />

● Channels<br />

● Months<br />

● Versions<br />

The calculated totals on the products and channels dimensions are no longer stored on the data<br />

blocks. They are recalculated when the data is loaded on the client or during publish. In general,<br />

the e.List is not the first dimension because there is typically one dimension of the cube for which<br />

the calculations must be pre-aggregation. However, in many cubes there are other hierarchical<br />

dimensions in addition to the e.List (products and channels in the example), and the order of these<br />

can be switched without affecting the calculations.<br />

Low priority calculations are pre-aggregation and are always stored on the data blocks regardless<br />

of dimension order. High priority calculations are post-aggregation and are never stored on the<br />

data blocks.<br />
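As a rough illustration of how e.List position drives block size, the sketch below multiplies item counts per dimension: dimensions before the e.List contribute detail and calculated items to the stored block, while dimensions after it contribute detail items only. This is a simplification (it ignores calculation priorities), and the item counts are invented.<br />

```python
# Rough sketch: estimate stored block size per e.List item. Dimensions
# before the e.List store detail plus calculated items; dimensions after
# it store detail items only (totals are recalculated on load/publish).
# Counts are invented; calculation priorities are ignored.
def block_cells(dims, elist_index):
    cells = 1
    for i, (detail, calculated) in enumerate(dims):
        cells *= (detail + calculated) if i < elist_index else detail
    return cells

# (detail items, calculated items) per dimension, in cube order,
# with the e.List's own position given separately.
dims = [(20, 5), (10, 3), (12, 4)]
early = block_cells(dims, 0)   # e.List first: only details stored
late = block_cells(dims, 3)    # e.List last: all totals stored too
print(early, late)             # 2400 5200
```

Moving the e.List earlier in the order, where the calculations allow it, shrinks the stored block by the ratio of these two products.<br />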

Hierarchical Dimensions in Analyst<br />

For a dimension to be shown as a hierarchy in Analyst, the following requirements need to be met:<br />

● The dimension must not be a time dimension.<br />

● The dimension’s items must all be detail or simple sums.<br />

● Do not use weighted or time averages.<br />

● Do not use the Force to Zero calculation option.<br />

● Priorities can only be 0 or 10.<br />

● Do not use any formatting.


Supported BiFs<br />

When creating dimensions that are used in cubes in the <strong>Contributor</strong> application, the following BiFs

(built-in functions) are available. For information about each BiF, see the Analyst User <strong>Guide</strong>.<br />

@Cumul, @Days, @DaysOutstanding, @DCF, @Decum, @Delay, @DelayDebt, @Delay Stock, @DepnAnnual, @Deytd, @Differ, @Drive, @ErlangDelayAgents, @ErlangDelayFull, @ErlangDelayLite, @ErlangLossLite, @Feed, @Forecast, @Funds, @FV, @Grow, @ICF, @IRR, @Lag, @Last, @Lease, @Lease Variable, @Linavg, @Mix, @MovAvg, @MovMed, @MovSum, @NPer, @NPV, @Proportion, @PV, @Rate, @Repeat, @SeasonLite, @StockflowAF, @Tier, @Time, @Time (Methods 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 16), @Timesum, @TMin, @TMax, @TRound, @Ytd<br />

Switch-over Dates and Generic Time Scales<br />

A switchover date is used by certain BiFs to define the dividing point between past and future. Generic time scales have no date associated with each period, and all the periods are the same length. For all BiFs, switch-over dates cannot be used in conjunction with generic time scales.<br />

For @NPV and @IRR, use of generic time scales is not supported.<br />


@Last Differences<br />

@Last looks back along the series of data in the input row and returns the most recent non-zero<br />

value.<br />

In Analyst, any positive number greater than 1E-13 is treated as non-zero; a negative number must be less than -1E-12.<br />

In <strong>Contributor</strong>, any positive number greater than 1E-15 is treated as non-zero; a negative number must be less than -1E-14.<br />
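The two thresholds can be expressed as a minimal check. The helper name is invented, and the negative bounds are read here as magnitude thresholds (a negative value counts as non-zero when it is below -1E-12 in Analyst, -1E-14 in <strong>Contributor</strong>).<br />

```python
# Minimal model of the non-zero thresholds stated above; the function
# name is invented, and the negative bounds are interpreted as
# magnitude thresholds.
def is_nonzero(value, engine):
    pos, neg = (1e-13, -1e-12) if engine == "analyst" else (1e-15, -1e-14)
    return value > pos or value < neg

print(is_nonzero(1e-14, "analyst"))       # False: treated as zero
print(is_nonzero(1e-14, "contributor"))   # True: non-zero
```

A value such as 1E-14 therefore causes @Last to skip the cell in Analyst but return it in <strong>Contributor</strong>.<br />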

@Time Restrictions<br />

The implementation of @Time in <strong>Contributor</strong> is identical to the implementation in Analyst, except for the following restrictions:<br />

When using Method 1, the calculation gives different results depending on whether the dimension on which the calculation is defined comes before or after the e.List in the D-Cube's dimension sequence. In other words, its results depend on the time at which it is executed.<br />

@Time is not supported in <strong>Contributor</strong> in the circumstances listed below. In all these cases the<br />

function returns zero.<br />

● Method 2 (date last saved) is not supported. If you use @Time(2) in a D-List, a warning appears<br />

while you create or synchronize the application, and the result is always 0.<br />

● Methods 3, 4, 5, 8, 10, 12, and 16 return a result of 0 with generic timescales.<br />

● Methods 9 and 15 return a result of 0 with a generic timescale, or if the switchover date is not<br />

set.<br />

For information about using these built-in functions, see the IBM Cognos 8 <strong>Planning</strong> - Analyst User <strong>Guide</strong>.<br />

Date Formats<br />

The following date formats are supported. Using any others prevents a <strong>Contributor</strong> application from being created and prevents synchronization.<br />

DD/MM/YY<br />

DD.MM.YY<br />

MM/DD/YY<br />

MM.DD.YY<br />

DD-Mon-YY<br />

DD Mon YY<br />

Day DD-Mon-YY<br />

YYYYMMDD<br />

DD-MM-YYYY<br />

MM-DD-YYYY<br />

DDMMYYYY<br />

MMDDYYYY<br />

DD/MM/YYYY<br />

MM/DD/YYYY<br />

DD.MM.YYYY<br />

MM.DD.YYYY<br />

YY-MM-DD<br />

YYYY-MM-DD<br />

YYMMDD<br />
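One way to sanity-check source data against the supported set is to map each format to a strptime pattern. The sketch below covers only a few of the formats above and is not an exhaustive validator; month-name and day-name formats such as Day DD-Mon-YY need locale-aware handling.<br />

```python
from datetime import datetime

# Sketch: check date strings against a few of the supported formats by
# mapping them to strptime patterns. Not exhaustive; month-name and
# day-name formats need locale-aware handling.
PATTERNS = {
    "DD/MM/YY": "%d/%m/%y",
    "MM/DD/YYYY": "%m/%d/%Y",
    "YYYY-MM-DD": "%Y-%m-%d",
    "YYYYMMDD": "%Y%m%d",
    "DD-Mon-YY": "%d-%b-%y",
}

def matches(value, fmt):
    try:
        datetime.strptime(value, PATTERNS[fmt])
        return True
    except ValueError:
        return False

print(matches("2008-12-31", "YYYY-MM-DD"))  # True
print(matches("31/12/2008", "DD/MM/YY"))    # False: four-digit year
```

A check like this run over import files before application creation can catch unsupported formats before they block the build.<br />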

Format Priorities in Analyst and <strong>Contributor</strong><br />

In Analyst, there is a rule of precedence between the various formats, based on a priority among the data types as follows:<br />

● dimension (first)<br />

● text/date/time/number (equal second)<br />

A dimension format applied on any dimension overrides a text/date/time/number format on any<br />

other dimension. If different formats with the same priority are applied on different dimensions,<br />

the first of these formats is used, taken in dimension order. Analyst treats formats applied to the<br />

D-Cube consistently with those applied to dimensions: a format applied to the cube is included in<br />

sequence after all the other dimensions. Thus, a dimension format applied to the cube overrides<br />

text/date/time/number formats on the dimensions, but is overridden by another dimension format<br />

on a dimension.<br />

In <strong>Contributor</strong>, the format is tied to the calculation priority and dimension order. This means that<br />

a format on a calculated item applies to all cells to which that calculation applies. However, this<br />

means that there are cases where <strong>Contributor</strong> and Analyst resolve the formats differently when<br />

formats are applied to detail items on two dimensions.<br />

Lower ordered dimensions take priority in <strong>Contributor</strong>, which means that the second dimension<br />

has formatting priority over the first, and the third has priority over the second. High priority calculations have priority over any format on a detail item. A high priority calculation format on the<br />

third dimension overrides a high priority calculation format on the second dimension, and so on.<br />

Scaling Differences Between Analyst and <strong>Contributor</strong><br />

Where an item in a D-List has a scaling format applied, the behavior is different in Analyst and<br />

<strong>Contributor</strong>.<br />

● In Analyst, the numbers are entered ignoring any scaling applied in the formatting. For example, an item is scaled to 1000s. If you type in 22k, it shows as 22, because the underlying value is 22000. If you type in 22, it shows as 0, because the value is 0.022k, assuming that fewer than two decimal places are showing.<br />

● In <strong>Contributor</strong>, the numbers are entered as shown. If the cell shows 1000 for an underlying<br />

value of 10, and you type in 1200, the new value shows as 1200 with the underlying value now<br />

being 12.<br />
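The two entry behaviors can be contrasted in a small sketch using the examples above. The function names are invented, and each function is parameterized to match its own example (the Analyst cell is scaled to 1000s; the <strong>Contributor</strong> cell shows 1000 for an underlying value of 10).<br />

```python
# Sketch of the two data-entry behaviors for scaled cells, using the
# two examples above. Function names are invented for illustration.
def analyst_shown(underlying, scale=1000):
    # Analyst cell scaled to 1000s: the display divides the stored value.
    return underlying / scale

def analyst_underlying(typed):
    # Analyst stores the typed number directly, ignoring the scaling.
    return typed

def contributor_underlying(typed, display_factor=100):
    # Contributor treats the typed number as the displayed value; here
    # the cell shows 1000 for an underlying value of 10 (factor 100).
    return typed / display_factor

# Typing 22k in Analyst stores 22000, which displays as 22.
print(analyst_shown(analyst_underlying(22000)))   # 22.0
# Typing 22 stores 22, which displays as 0.022 (0 at two decimals).
print(analyst_shown(analyst_underlying(22)))      # 0.022
# Typing 1200 in the Contributor cell stores an underlying value of 12.
print(contributor_underlying(1200))               # 12.0
```

The practical consequence is that the same keystrokes produce different underlying values in the two clients, which matters when planners move between them.<br />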

Creating Applications with Very Large Cell Counts<br />

When creating or synchronizing an application, the system must check for forward-referenced weighted averages. It opens as many items from each cube as necessary to see whether forward-referenced weighted averages exist. In some cases, this can cause the system to run out of memory, such as when very large cubes are used in the Analyst model. Access tables are applied to cubes to reduce their size when used in <strong>Contributor</strong>.<br />

Rules exist to determine which items are opened. All items are selected from any dimension that<br />

has any calculations other than simple sums, has weighted averages or other calculation options,<br />

or has formats or time scales. If the dimension is all-detail, only the first item is selected. If the<br />

dimension has some calculated items, the first level subtotal with the fewest children is selected,<br />

with its children.<br />

For most large cubes, the entire cube is not opened. Problems can occur when every dimension that<br />

is not a simple hierarchy consists of a number of detail items, and a single total. In such a case, the<br />

entire cube would be opened.<br />

Example<br />

You can make small changes to such cubes so that the application can be created. For example, this dimension would be opened in full:<br />

A<br />

B<br />

C<br />

...<br />

Z<br />

Total (A to Z) = A+B+C+...+Z<br />

You can include additional calculated items to the hierarchy so that these items are opened instead<br />

of the total of all detail items.<br />

For example, you can add this extra dummy total.<br />

Total A = +A<br />

Then, when the check for forward-referenced weighted averages is performed, only items A and Total A are used, reducing the memory requirements by a factor of more than 10. Such a dummy total can be excluded from the <strong>Contributor</strong> application by setting it to no-data in an access table.<br />

It is also good practice in such cases to keep the Analyst cube as small as possible by including only<br />

a single-item e.List.<br />
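The selection rule for a dimension with calculated items (open the first-level subtotal with the fewest children, plus those children) can be sketched to show why the dummy total helps. This is a simplified model of the check, not the actual implementation.<br />

```python
# Simplified sketch of the rule for a dimension with calculated items:
# open the first-level subtotal with the fewest children, plus its
# children. Not the actual implementation.
def items_opened(subtotals):
    # subtotals: {subtotal name: [child item names]}
    name, children = min(subtotals.items(), key=lambda kv: len(kv[1]))
    return [name] + children

detail = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

# Only a grand total: the total and all 26 details are opened.
before = items_opened({"Total (A to Z)": detail})
# With the dummy subtotal Total A = +A, only two items are opened.
after = items_opened({"Total (A to Z)": detail, "Total A": ["A"]})
print(len(before), len(after))   # 27 2
```

Adding the single-child dummy total reduces the number of opened items from 27 to 2, which is where the memory saving comes from.<br />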

Break-Back Differences Between Analyst and <strong>Contributor</strong><br />


Break-back in <strong>Contributor</strong> is slower than in Analyst. However, forward calculations in <strong>Contributor</strong><br />

are the same, or slightly faster.


Analyst&lt;&gt;<strong>Contributor</strong> Links<br />

You can transfer data between Analyst and <strong>Contributor</strong> using the Analyst D-Link function. All the<br />

standard features of a D-Link are available, such as the use of A-tables, D-Cube allocations, local<br />

allocation tables, match descriptions, and matching on codes.<br />

These types of links are available:<br />

● Analyst to <strong>Contributor</strong><br />

● <strong>Contributor</strong> to Analyst<br />

● <strong>Contributor</strong> to <strong>Contributor</strong><br />

For small amounts of data, an Analyst&lt;&gt;<strong>Contributor</strong> link can be a quick and effective method of<br />

transferring data. However, for large amounts of data, it is more effective to use "<strong>Administration</strong><br />

Links" (p. 147).<br />

Analyst&lt;&gt;<strong>Contributor</strong> links work in the same way as a standard Analyst D-Link. They treat <strong>Contributor</strong> as a single large cube, which means that with large models, you can quickly run into memory problems. We recommend that Analyst&lt;&gt;<strong>Contributor</strong> links be used only for ad hoc transfers of small amounts of data of no more than 5 to 10 e.List items.<br />

You can avoid memory problems for links that target an entire e.List in <strong>Contributor</strong> by using the<br />

@SliceUpdate macro. This macro processes the link in slices of the e.List, making it a much more<br />

scalable solution.<br />
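The memory advantage of @SliceUpdate can be pictured as follows: instead of holding a data block for every e.List item at once, only one slice is resident at a time. This Python sketch is purely illustrative (the function and parameter names are invented; the real macro runs inside Analyst):<br />

```python
# Illustrative sketch of why slice-wise processing scales: instead of
# loading every e.List item's data block at once, only one slice is in
# memory at a time. All names here are hypothetical, not Cognos APIs.

def run_link_whole_cube(e_list_items, load_slice, apply_link):
    """Load all slices, then run the link: peak memory grows with the e.List."""
    data = {item: load_slice(item) for item in e_list_items}  # everything resident
    for slice_data in data.values():
        apply_link(slice_data)
    return len(data)  # slices held simultaneously

def run_link_sliced(e_list_items, load_slice, apply_link):
    """@SliceUpdate-style processing: one slice resident at a time."""
    resident = 0
    for item in e_list_items:
        slice_data = load_slice(item)   # load just this e.List item
        apply_link(slice_data)
        resident = max(resident, 1)     # never more than one slice in memory
    return resident

items = ["north", "south", "east", "west"]
whole = run_link_whole_cube(items, lambda i: {i: 0}, lambda s: None)
sliced = run_link_sliced(items, lambda i: {i: 0}, lambda s: None)
print(whole, sliced)  # 4 slices resident at peak vs 1
```

Peak memory in the sliced approach is constant in the number of e.List items, which is why it scales to links that target an entire e.List.<br />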

Most D-Links that have <strong>Contributor</strong> as a source or target behave the same as standard Analyst D-<br />

Links. The few exceptions are as follows:<br />

● Only cubes that contain the e.List are available as a source or target for Analyst<><strong>Contributor</strong><br />

links.<br />

● Lookup D-Links are not allowed when <strong>Contributor</strong> is the target.<br />

This is because Lookup D-Links depend on the data in the target cube. In a Web environment,<br />

this data can be changing all the time.<br />

● You cannot target calculated items in <strong>Contributor</strong>.<br />

This includes totals on the e.List dimension as well as any total in other D-Lists.<br />

● Match descriptions in Analyst D-Links to or from <strong>Contributor</strong> treat the pipe symbol as a blank.<br />

The pipe symbol is used in Analyst as a line-feed for column headers. It is stripped out when<br />

you create a <strong>Contributor</strong> application from an Analyst model.<br />

● You cannot target an assumption cube in <strong>Contributor</strong>, or use it as a source.<br />

● You cannot target No Data cells as defined by Access tables in <strong>Contributor</strong>.<br />

However, as the Administrator, you are not subject to the restrictions imposed on <strong>Contributor</strong><br />

users entering data using the Web client. Hidden and read-only cells are not applicable. You<br />

can write to these cells just as you can using normal import routines.<br />

● Analyst<><strong>Contributor</strong> links cannot be run in reverse.<br />

Chapter 21: Analyst Model Design Considerations<br />

<strong>Administration</strong> <strong>Guide</strong> 347



Otherwise, most D-Link types are permitted. You can use Match Descriptions, local allocation<br />

tables, A-tables, and D-Cube allocations. You can cut subcolumns, so that you can match on codes.<br />

You can run accumulation links both ways, but lookup links run from <strong>Contributor</strong> to Analyst only.<br />

If you use a saved allocation table and rename D-List items in the <strong>Contributor</strong> application when<br />

using <strong>Contributor</strong> as a source or target in a D-Link, the allocation table must be manually updated<br />

for the link to work.<br />

Analyst users who do not have the <strong>Contributor</strong> <strong>Administration</strong> Console installed are not able to<br />

run Analyst<><strong>Contributor</strong> D-Links.<br />

When you install the client tools on a workstation, they are installed only for the user who performs<br />

the installation.<br />

To run a <strong>Contributor</strong>>Analyst link, users must have Analyst and the <strong>Contributor</strong> <strong>Administration</strong><br />

Console installed. They must also have rights to Analyst and the appropriate <strong>Contributor</strong><br />

applications.<br />

In addition, organizations may prevent access to the database or the Web server using the IP address,<br />

limiting who can run these D-Links.<br />

D-Links from ASCII and ODBC directly into <strong>Contributor</strong> are not allowed. You must use <strong>Contributor</strong><br />

Import to do this.<br />

Set Up a Link Between Analyst and <strong>Contributor</strong> and Between <strong>Contributor</strong><br />

Applications<br />

You set up a link between Analyst and <strong>Contributor</strong> or between <strong>Contributor</strong> applications in the<br />

same way as you would a standard D-Link, choosing <strong>Contributor</strong> as the source or target of a D-<br />

Link. Only cubes that contain the e.List item are available as a source or target for these links. All<br />

the data is prepared in Analyst.<br />

Steps<br />

1. In the Analyst D-Link editor, choose <strong>Contributor</strong> Data as the source or target.<br />

2. If more than one datastore server is available, choose one.<br />

3. Click a <strong>Contributor</strong> application.<br />

Tip: You may need to click the Refresh button to the right of the Name list to view the list of<br />

available <strong>Contributor</strong> applications.<br />

4. Click Test Connection, and click OK.<br />

5. Click a <strong>Contributor</strong> cube.<br />

6. Pair dimensions against the target (or source) cube as you would for a standard D-Link.<br />

Analyst><strong>Contributor</strong> D-Links<br />


These links can target either the production or development version of <strong>Contributor</strong>. If targeting the<br />

development version, they appear on the <strong>Contributor</strong> screens only after the Go to Production process<br />

is completed.


Important: If targeting the production application, the link changes the data, even if the user has<br />

submitted data.<br />

When you run an Analyst><strong>Contributor</strong> link that targets a development application, the data is read<br />

out of Analyst when you run the D-Link. When you run the Go to Production process in the<br />

<strong>Contributor</strong> <strong>Administration</strong> Console, or through Automation, the prepared data is written directly to<br />

the import queue in the data store for the <strong>Contributor</strong> application as a prepared data block, e.List<br />

item by e.List item.<br />

There may be a delay between the Go to Production process and the data being reconciled in the<br />

Web client. If, in the meantime, a planner edits one of the cells targeted by the link, that cell is<br />

overwritten during reconciliation. This behavior is very similar to the reconciliation that takes place<br />

when you import data into <strong>Contributor</strong> from text files or using DTS.<br />

When you run an Analyst to <strong>Contributor</strong> link that targets the production application, the data is<br />

read out of Analyst when you run the D-Link. An automatic activate process is run that applies the<br />

data to a cube. If running the link using macros, you must run the @DLinkActivateQueue macro.<br />

<strong>Contributor</strong>>Analyst Links<br />

When you run a <strong>Contributor</strong>>Analyst link, the following occurs:<br />

● A snapshot is taken of the production version of the <strong>Contributor</strong> Application.<br />

To ensure a consistent read if you are using the @SliceUpdate macro, take the <strong>Contributor</strong><br />

application offline, or use the @DLinkExecuteList macro.<br />

● A publish job is created and immediately set to ready.<br />

Note that the job is not run, because the <strong>Contributor</strong> job executor is not used. Analyst transfers<br />

the data.<br />

● A <strong>Contributor</strong> session is loaded and the entire data block is loaded for each e.List item.<br />

If the link is set up for more than one e.List item, it is equivalent to loading a multi-e.List item<br />

view which is very memory intensive.<br />

● The data is written directly to the Analyst cube data file (H2D file).<br />

<strong>Contributor</strong>><strong>Contributor</strong> Links<br />

These links go from the production version of a <strong>Contributor</strong> source to either the development or<br />

production version of the <strong>Contributor</strong> target.<br />

They are typically used between separate applications. If the applications are small,<br />

<strong>Contributor</strong>><strong>Contributor</strong> links can be fast. However, if you transfer data between larger applications this way, you<br />

may run into memory and scalability problems. You can avoid these issues by<br />

using the @SliceUpdate macro. It can be more effective to use administration links in the <strong>Contributor</strong><br />

<strong>Administration</strong> Console, which copy data between <strong>Contributor</strong> cubes and applications. This process<br />

is scalable and can move large volumes of data into either the development or production version<br />

of the <strong>Contributor</strong> application.<br />


Copying Analyst<><strong>Contributor</strong> Links<br />

There are three methods for making copies of links: Save As, Library Objects, and the Library Copy<br />

Wizard. The method you choose determines whether the copied link refers<br />

to the original application or a new application.<br />

Save As Method<br />

This method results in a copy that refers to the original <strong>Contributor</strong> application(s) and/or Analyst<br />

D-Cubes.<br />

Steps<br />

1. In Analyst, open the link.<br />

2. From the File menu, click Save As.<br />

3. Choose a library in which to copy the link.<br />

4. Enter a name for the link copy.<br />

5. Click OK.<br />

Library Method<br />

This method lets you select the link, with or without other objects, and choose whether to copy or<br />

move the link. The result is a link that refers to the original <strong>Contributor</strong> application(s), although<br />

the source or target Analyst D-Cube (if it is not a <strong>Contributor</strong> > <strong>Contributor</strong> link) can be changed<br />

if certain reference options are chosen when copying.<br />

Steps<br />

1. In Analyst, from the File menu, click Library, Objects.<br />

2. Select the link with or without other objects and move it down.<br />

3. Click the Copy selected objects button.<br />

4. In the Copy dialog box, enter a new name for the link, select a target library in which to copy<br />

the link, and select how to remap references.<br />

5. Click OK.<br />

Library Copy Wizard Method<br />

This method lets you make a copy of an Analyst<><strong>Contributor</strong> link that refers to a new application<br />

based on a copied library.<br />

Things to Watch Out For<br />

● Using the Library Copy wizard to create duplicate objects within the same library does not work<br />

for making copies of links. If you copy template D-Cubes containing the e.List, and<br />

include the e.List itself in the selection of objects, the e.List is also copied. When you then<br />

synchronize <strong>Contributor</strong>, the new cube you have created becomes an assumption cube because it does<br />

not contain the original e.List.


● A link can only be pointed to an application where it will refer to template cubes which were<br />

copied at the same time. You cannot copy a link into a library which already contains suitable<br />

template cubes and then refer the link to an application based on that library.<br />

● If you make a copy of a link using this method and copy the link and its associated objects at<br />

the same time, then you will not be able to refer the link back to the original application. You<br />

will have to make a new application based on the copied library and refer the link to this new<br />

application.<br />

● If you copy macros which refer to <strong>Contributor</strong> applications using the Library Copy wizard,<br />

the macros will continue to refer to the original application. You must open the copied<br />

macros and manually edit them to refer to any new applications based on copied libraries.<br />

Steps<br />

1. Use the Library Copy wizard to copy the link and any related Analyst template cubes at the<br />

same time.<br />

2. Create a <strong>Contributor</strong> application based on the copied Analyst template D-Cubes.<br />

3. Point the link to your new application in one of two ways:<br />

● Open the link, and select your new <strong>Contributor</strong> application when prompted.<br />

● From the File menu, click Library, Objects. Double-click the link to move it down, then<br />

right-click the link and select Change <strong>Contributor</strong> source on D-Links.<br />

Links and Memory Usage<br />

The following factors affect memory usage when transferring data using links:<br />

● the density of data<br />

● the available RAM<br />

● the use of multi-e.List item views with access tables; access tables do not reduce the size of<br />

multi-e.List item views as much as they reduce single e.List item views<br />

● the maximum workspace setting (MAXWS) in Analyst<br />

This is the amount of space reserved for Analyst. As a general rule, this should not be more than<br />

half the available RAM. If you set this option too high the Analyst process can use so much memory<br />

that it does not leave enough for the <strong>Contributor</strong> process.<br />
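The half-of-RAM guideline can be expressed as a simple check (a sketch of the rule of thumb above, not a Cognos-supplied formula; the names are invented):<br />

```python
# Rule-of-thumb check for the Analyst maximum workspace (MAXWS) setting:
# reserve no more than half of physical RAM for Analyst, leaving the rest
# for the Contributor process and the operating system.

def suggested_maxws_mb(physical_ram_mb: int, requested_mb: int) -> int:
    """Cap a requested MAXWS value at half the available RAM."""
    ceiling = physical_ram_mb // 2
    return min(requested_mb, ceiling)

print(suggested_maxws_mb(4096, 3072))  # capped at 2048
print(suggested_maxws_mb(4096, 1024))  # 1024 is within the guideline
```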

Update a Link from a Computer That Cannot Access the Original Datastore<br />

If a <strong>Contributor</strong> cube is used as a source or target, and the link is opened from a computer that<br />

cannot access the original datastore, you are prompted to reselect the connection and application<br />

to point to the data store and application name that holds the cube the link was built on. All<br />

matching is then preserved. Save the link so it will run in the future.<br />

Multiple data sources can be used. If two applications are built from the same Analyst library, the<br />

GUIDs match when pointing the link to the original data store.<br />

Chapter 21: Analyst Model Design Considerations<br />

<strong>Administration</strong> <strong>Guide</strong> 351


Chapter 21: Analyst Model Design Considerations<br />

To run a link from a workstation that does not have access to the original datastore you must<br />

manually open the link and reselect the connection. You can also update the connection for several<br />

links at once.<br />

Steps<br />

1. From the File menu, click Library, Objects and select one or more links that you want to update,<br />

and move them to the bottom pane.<br />

2. In the bottom pane, right-click and click Change <strong>Contributor</strong> Source on D-Links.<br />

3. Enter the connection details for the new data store.<br />

4. Select the appropriate substitution option.<br />

This updates all the selected links with the new connection details.<br />

Multiple D-Links Using the @DLinkExecuteList Macro<br />

@DLinkExecuteList is a macro designed to run a series of D-Links in order.<br />

The @DLinkExecuteList macro behaves similarly to a series of @DLinkExecute steps, with a subtle<br />

difference when D-Links have <strong>Contributor</strong> as a source. When the macro runs the first D-Link that<br />

has <strong>Contributor</strong> as a data source, it logs the time and reads the database. All subsequent D-Links<br />

that have the same <strong>Contributor</strong> source use the same data. This ensures consistency across D-Links<br />

coming from the same <strong>Contributor</strong> data source. If a subsequent D-Link in the macro or submacro<br />

has a different <strong>Contributor</strong> data source, the old source is closed and the new one is opened.<br />
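The source handling described above behaves like a one-slot cache: the current <strong>Contributor</strong> source stays open until a link names a different one. This Python sketch mimics that sequencing (hypothetical names; this is not the Analyst macro language):<br />

```python
# Mimics @DLinkExecuteList source handling: the first link that names a
# Contributor source opens (reads) it; later links reuse that same read
# until a link names a different source, which closes the old one.

def execute_dlink_list(links):
    """links: list of (link_name, contributor_source). Returns source reads in order."""
    current_source = None
    reads = []                       # order in which sources were read
    for name, source in links:
        if source != current_source:
            current_source = source  # close the old source, read the new one
            reads.append(source)
        # the D-Link would run here against the cached read of current_source
    return reads

runs = execute_dlink_list([
    ("link1", "sales_app"),
    ("link2", "sales_app"),   # reuses the sales_app read: consistent data
    ("link3", "hr_app"),      # different source: sales_app closed, hr_app read
    ("link4", "sales_app"),   # sales_app is re-read at this later time
])
print(runs)  # ['sales_app', 'hr_app', 'sales_app']
```

With sources ordered this way the data is read three times; grouping D-Links by source minimizes re-reads, which is one reason to prefer @DLinkExecuteList over a series of separate @DLinkExecute steps.<br />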

Run D-Links While Making Model Changes<br />

If you want to make changes to the <strong>Contributor</strong> model and import data into <strong>Contributor</strong> using<br />

Analyst><strong>Contributor</strong> D-Links, synchronize first and then run the D-Link. For instance, if you have<br />

inserted a new product as part of a model change, you cannot import data into the new product<br />

until the <strong>Contributor</strong> model is synchronized. You must then run Go to Production to activate the<br />

model and data changes.<br />

Example<br />

You can use <strong>Contributor</strong>><strong>Contributor</strong> links to preserve data during cube dimensional restructuring,<br />

such as when adding a dimension.<br />

Steps<br />

1. Take the <strong>Contributor</strong> application offline.<br />

2. Run a link from the production version of the <strong>Contributor</strong> application to the import queue of<br />

the development application.<br />

3. Run Go to Production.


Effects of Fill and Substitute Mode on Untargeted Cells<br />

Example Table One<br />

In <strong>Contributor</strong><>Analyst D-Links, Fill and Substitute modes behave the same way as in standard D-<br />

Cube to D-Cube D-Links in Analyst.<br />

Fill and Substitute modes generally apply only to lookup and accumulation D-Links. In Substitute<br />

mode, the data in the untargeted area of the D-Link remains unchanged. In Fill mode, the untargeted<br />

cells are set to zero when the D-Link is run.<br />

In standard D-Links, if an item<br />

is not included in the right side of an allocation table, the original target data is unchanged,<br />

regardless of whether you use Fill or Substitute mode. Similarly, on normal real dimension pairings<br />

that use match descriptions, unmatched items are unchanged when the D-Link is run.<br />

In <strong>Contributor</strong>, for a lookup or accumulation link that targets both detail and calculated items at<br />

the same time, the zeros to fill and the data to set are merged into a single update of the target cube.<br />

In Analyst, for lookup and accumulation D-Links, any cell within the target area of the cube is set<br />

to zero if no data exists in the source cube for that cell. The fill is applied first to zero the data, and<br />

then the data is written as if in substitute mode.<br />

The different Fill methods in Analyst and <strong>Contributor</strong> cause different break-back behavior. If the<br />

Analyst method is required for <strong>Contributor</strong>, you can use one link to zero the data, followed by<br />

an accumulation link running in substitute mode.<br />
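The fill and substitute behavior described above can be illustrated with a toy cube (plain Python dictionaries standing in for cubes; this is an illustration of the stated rules, not Cognos code):<br />

```python
# Toy illustration of fill vs substitute for lookup/accumulation links.
# The cube is a dict of cell -> value; "target area" is the set of cells
# the link's pairings cover, whether or not the source supplies data.

def run_link(target_cube, target_area, source_data, mode):
    cube = dict(target_cube)
    if mode == "fill":
        for cell in target_area:     # fill: zero the whole target area first
            cube[cell] = 0
    for cell, value in source_data.items():  # then write the sourced cells
        cube[cell] = value
    return cube

before = {"jan": 10, "feb": 20, "mar": 30}
area = {"jan", "feb", "mar"}
src = {"jan": 99}                    # source has data for jan only

filled = run_link(before, area, src, "fill")
substituted = run_link(before, area, src, "substitute")
print(filled)       # {'jan': 99, 'feb': 0, 'mar': 0}   untargeted cells zeroed
print(substituted)  # {'jan': 99, 'feb': 20, 'mar': 30} untargeted cells kept
```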

The following table shows the action applied to untargeted cells for different types of D-Links when<br />

a D-Cube or <strong>Contributor</strong> is the cube source.<br />

D-Link type | Fill | Substitute<br />
Allocation Tables | Keep | Keep<br />
Lookup | Zero | Keep<br />
Accumulation | Zero | Keep<br />
Match descriptions | Keep | Keep<br />

Example Table Two<br />

The following table shows the action applied to untargeted cells for different types of D-Links when<br />

a file map (ODBC or ASCII) is the cube source.<br />

D-Link type | Fill | Substitute<br />
Allocation Tables | Keep | Keep<br />
Lookup | Not Applicable | Not Applicable<br />
Accumulation | Not Applicable | Not Applicable<br />
Match descriptions | Zero | Keep<br />

Effect of Access Tables in <strong>Contributor</strong><br />

If <strong>Contributor</strong> is the source, cells marked as No Data are treated as zero when running a D-Link<br />

into an Analyst or <strong>Contributor</strong> target D-Cube.<br />

If <strong>Contributor</strong> is the target, you cannot target No Data cells as defined by Access Tables in<br />

<strong>Contributor</strong>. However, as the Administrator, you are not subject to the restrictions imposed on <strong>Contributor</strong><br />

users entering data using the Web client. You can write to these cells just as you can using normal<br />

import routines.


Appendix A: DB2 UDB Supplementary Information<br />

This section provides an introduction to IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong> for administrators<br />

with responsibility for DB2 Universal Database (UDB) databases within the large enterprise,<br />

specifically IBM DB2® Universal Database (UDB) version 8.1 for UNIX/Windows/Linux.<br />

It assumes a familiarity with the tools provided by the database and a knowledge of security and<br />

backup practices within the enterprise.<br />

The <strong>Contributor</strong> Datastore<br />

The implementation of a <strong>Contributor</strong> datastore is database specific: on DB2 UDB each application<br />

exists as a separate schema.<br />

The following table conveys the database terminology used throughout this section and the DB2<br />

UDB equivalent.<br />

IBM Cognos 8 <strong>Planning</strong> terminology | Description | DB2 UDB equivalent<br />
Datastore server | The location where one or more datastore applications are stored. | Database<br />
Datastore application | The container that holds the datastore objects for the <strong>Contributor</strong> application, such as tables. | Schema<br />
Publish datastore | A datastore container to which a <strong>Contributor</strong> administrator publishes data. | Schema<br />

Requirements for the DB2 UDB Database Environment<br />

Consider the following requirements when creating a DB2 UDB database environment for IBM<br />

Cognos 8 <strong>Planning</strong> data.<br />

● For DB2 UDB, the target database for any operation must have been created beforehand by a<br />

database administrator (DBA). We recommend that you create a new DB2 database to host<br />

IBM Cognos applications. You may also choose to create a separate database to host <strong>Contributor</strong><br />

publish datastores.<br />

The following table shows the database client required by each installed component.<br />

Installed component | Required client<br />
<strong>Contributor</strong> Server | DB2 <strong>Administration</strong> client<br />
<strong>Contributor</strong> <strong>Administration</strong> Client | None<br />

● Connections are made to the IBM Cognos 8 <strong>Planning</strong> DB2 UDB database, and all SQL statements<br />

are fully qualified as SCHEMA.TABLENAME.<br />

For more information on installation requirements and procedures, see the IBM Cognos Resource<br />

Center.<br />

Background Information For DB2 UDB DBAs<br />

<strong>Contributor</strong> has been designed to respect local enterprise database best practice. With the correct<br />

permissions, <strong>Contributor</strong> will work without your intervention, creating and deleting tables. You<br />

can also set up <strong>Contributor</strong> so that you can review, amend, and execute DDL scripts against the<br />

enterprise database. This is done using the Generate Scripts (GEN_SCRIPTS) option, which may be<br />

turned on or off.<br />

Security and Privileges<br />


Specific operations in <strong>Contributor</strong> expect to issue data definition statements against the DB2 UDB<br />

database. The user ID that is used to connect to the database to perform these operations must have<br />

the appropriate privileges in DB2 UDB.<br />

The privileges required for <strong>Contributor</strong> accounts are determined by setting the Generate Scripts<br />

(GEN_SCRIPTS) option. Enterprise <strong>Planning</strong> Series components create schemas and tables and<br />

populate them with data. Therefore users specified for connections must have been granted privileges<br />

to create schemas and tables.<br />

If you allow it, the <strong>Contributor</strong> application can:<br />

● create and drop schemas<br />

● create and drop tables<br />

● create and drop indexes<br />

● create and drop views<br />

Alternatively, the application generates DDL scripts to do these things. You can then review the<br />

scripts and execute them yourself. Additionally, <strong>Contributor</strong> needs to be able to bulk load data<br />

using the DB2 UDB import utility, as well as carry out a non-logged delete of all data in a table.<br />

When determining whether to generate DDL scripts, you may need to consider whether <strong>Contributor</strong><br />

should execute DDL against your enterprise database without review. You should also consider<br />

whether local policy allows the <strong>Contributor</strong> security context sufficient privileges to be able to create<br />

and remove <strong>Contributor</strong> datastores.<br />

You may want to generate DDL scripts to comply with your own storage standards or to customize<br />

storage clauses to take advantage of sophisticated enterprise storage. You may also want to amend<br />

or add sizing clauses to the <strong>Contributor</strong> default DDL.


Naming Conventions<br />

Static object names are the same across all applications and do not change during the life cycle of<br />

the <strong>Contributor</strong> application. Examples of static objects include the applicationstate table, which<br />

contains the <strong>Contributor</strong> application definitions, and the history table, which logs events and data<br />

changes.<br />

Dynamic objects, primarily the import tables and publish tables and views, are named after objects<br />

within the Analyst model. Object names correspond to <strong>Contributor</strong> model object names with a<br />

subsystem prefix. Examples of dynamic objects include im_cubename, which contains the import<br />

staging tables.<br />

During application creation, <strong>Contributor</strong> forces dynamic object names and <strong>Contributor</strong> application<br />

datastore names to conform to the following conventions:<br />

● only lowercase letters a to z and numeric characters are allowed<br />

● no punctuation is allowed except for the underscore<br />

The application datastore name defaults to the name of the Analyst library that is used to create<br />

the application. Dynamic object names are based on the <strong>Contributor</strong> object name to which they<br />

correspond; for example, published data is stored in et_cubename.<br />
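As a rough sketch, the naming convention can be expressed as a filter over the candidate name (here disallowed characters are simply dropped after lowercasing; the exact transformation <strong>Contributor</strong> applies may differ):<br />

```python
# Sketch of the dynamic-object naming convention: only lowercase a-z,
# digits, and the underscore survive. How Contributor really maps names
# may differ in detail; this just illustrates the stated rules.
import string

ALLOWED = set(string.ascii_lowercase + string.digits + "_")

def to_datastore_name(name: str) -> str:
    """Lowercase the name and drop every character outside the convention."""
    return "".join(ch for ch in name.lower() if ch in ALLOWED)

print(to_datastore_name("Sales Plan-2008"))   # salesplan2008
print(to_datastore_name("ET_GrossMargin"))    # et_grossmargin
```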

Metadata<br />

Every <strong>Contributor</strong> datastore contains the metadata subsystem. The content of the metadata tables<br />

is critical to the functioning of the <strong>Contributor</strong> application. The metadata provides a mapping from<br />

internal <strong>Contributor</strong> model identifiers to the external database objects.<br />

DDL scripts may be amended to conform to local storage conventions. You must not amend database<br />

object names within the DDL script or allow the information contained within the metadata tables<br />

to become out of sync with the underlying database objects.<br />

Backup<br />

<strong>Contributor</strong> does not back up data stored in the DB2 UDB database. You must back up the<br />

<strong>Contributor</strong> datastore using tools supplied by other vendors. We do not anticipate problems restoring<br />

from backups that use these tools, provided that<br />

● the backup is taken of the whole datastore application (and the CM datastore?)<br />

● no attempt is made to restore individual tables from backups taken at different times<br />

Standards<br />

All SQL is standard ANSI SQL and is executed via ADO/OLEDB.<br />

The design of the datastore objects removes the need for complex table joins (the only place JOINs<br />

Data for transmission over HTTP (to and from the users entering the numbers into the model) is<br />

compressed and stored as XML documents.<br />


Preventing Lock Escalation<br />

By default, <strong>Contributor</strong> expects that DB2 will use row-level locking for concurrency control. Table<br />

locking or any lock escalation may prevent <strong>Contributor</strong> from functioning normally. In particular,<br />

operations such as Validate Users and Reconcile do not work when table locks are applied.<br />

Validate Users checks to see if users in the Web client have the rights to access the <strong>Contributor</strong><br />

data.<br />

Reconcile ensures that the structure of the <strong>Contributor</strong> data is up to date in the Web client.<br />

Steps<br />

1. Set locks to default to row-level locking and try to avoid upgrading the locks to table-level<br />

locking.<br />

2. To prevent lock escalation, ensure that the LOCKLIST and MAXLOCKS settings are not too<br />

small.<br />
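To keep row locks from escalating, the lock list must be large enough for the expected number of concurrent locks. The following sketch estimates a LOCKLIST value in 4 KB pages; the bytes-per-lock figure is an assumption for illustration, so check the lock memory requirements documented for your DB2 version and platform before applying it:<br />

```python
# Back-of-the-envelope LOCKLIST sizing to keep row locks from escalating.
# The bytes-per-lock default below is an illustrative assumption; the real
# figure depends on the DB2 UDB version and platform.
from math import ceil

PAGE_BYTES = 4096          # LOCKLIST is configured in 4 KB pages

def locklist_pages(expected_row_locks: int, bytes_per_lock: int = 72) -> int:
    """Pages needed so LOCKLIST can hold the expected number of row locks."""
    return ceil(expected_row_locks * bytes_per_lock / PAGE_BYTES)

# e.g. a job touching ~200,000 rows concurrently:
print(locklist_pages(200_000))  # 3516 pages with the assumed 72 bytes/lock
```

MAXLOCKS, the percentage of the lock list that a single application may fill before its locks escalate, should be sized alongside LOCKLIST.<br />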

Large Object Types<br />

Large object types (LOBs), both binary large objects (BLOBs) and character large objects (CLOBs), are<br />

used to store the XML documents comprising compressed data. Examples of this are the storage<br />

of the model definition as well as user data for transmission over HTTP. In addition, users are able<br />

to submit free-form comments or annotations which are formatted into XML documents and stored<br />

as large objects.<br />

You may have a policy on large object storage. The <strong>Contributor</strong> application allows you to specify<br />

custom tablespaces for data, indexes, and large objects.<br />

Alternatively, you can use the default tablespace, USERSPACE1.<br />

We recommend that you store large objects within system-managed tablespaces. Projecting storage<br />

and growth is essential to a successful long-term implementation. While most of the tables used for<br />

<strong>Contributor</strong> datastores will grow and shrink, such as jobitem, other tables should be monitored<br />

for growth, primarily nodestate.<br />

Updates of LOBs are made via the OLEDB driver for DB2 UDB, IBMDADB2.<br />

Note: Currently, DB2 UDB does not use the buffer pool to manage LOBs. The tablespaces chosen<br />

for tables containing LOBs should be placed in file containers that will be buffered by the operating<br />

system.<br />

For more information, see the IBM DB2 documentation on performance<br />

considerations for LOBs.<br />

Job Architecture<br />

<strong>Contributor</strong> operates a consistent and proven code stream across multiple database providers; that<br />

is, a large proportion of the code (excepting database administration and DDL functions) is common<br />

across different databases. The code is distributed within a classic n-tier architecture.<br />

Data processing is carried out by job servers via the job architecture.


Concurrency<br />

A job may contain multiple job items which represent atomic items of work. Jobs are queued for<br />

execution and picked up automatically.<br />

A single-processor machine normally executes a single job item at a time. Job items are executed<br />

by job servers.<br />

Members of the job server cluster identify items of work by polling the job subsystem within the<br />

<strong>Contributor</strong> datastore at regular intervals. Each job server continues to execute job items until no<br />

more work exists. An individual job server may be asked to monitor one or more <strong>Contributor</strong><br />

applications. A job server may therefore be polling one or more <strong>Contributor</strong> datastores which<br />

contain job subsystems within the <strong>Contributor</strong> environment.<br />

The job architecture enables database administrators to limit the number of DML operations carried<br />

out against the enterprise database by adding and removing job servers from currently executing<br />

jobs.<br />

Capacity <strong>Planning</strong><br />

UDB configuration parameters related to applications are dependent on the expected concurrency<br />

on the database.<br />

For IBM Cognos 8 <strong>Planning</strong>, database concurrency is a function of the number of job servers and<br />

the number of threads per server. The maximum number of concurrent applications can be<br />

determined by adding up all the active job tasks for all applications, plus the epjobexec job itself,<br />

plus active connections for the <strong>Administration</strong> Console, plus any run-time server-side components.<br />
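
As a worked example of this sizing rule, the sketch below simply adds the contributions together; the numbers are invented, so substitute your own counts:<br />

```python
# Sizing rule from the text: sum of active job tasks across all
# applications, plus the epjobexec job itself, plus Administration
# Console connections, plus run-time server-side components.

def max_concurrent_applications(job_tasks_per_app, epjobexec_jobs,
                                admin_console_connections,
                                runtime_components):
    return (sum(job_tasks_per_app) + epjobexec_jobs
            + admin_console_connections + runtime_components)

# Hypothetical site: two applications with 4 and 2 active job tasks,
# 2 epjobexec jobs, 3 console sessions, 1 server-side component.
print(max_concurrent_applications([4, 2], 2, 3, 1))  # 12
```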

Capacity planning and system sizing depend on model size and the number of <strong>Contributor</strong><br />

applications. Data volumes may grow during Publish. For more information, see "Reporting Data:<br />

Understanding the Publish Job Process" (p. 359).<br />

Importing and Reporting Data<br />

Data is imported and published using two separate jobs: Import and Publish.<br />

Existing planning data may be imported into IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong> import staging<br />

tables (prefixed with im_). This functionality is supported for tab-separated files via the<br />

<strong>Administration</strong> Console and uses the DB2 import utility.<br />
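
A minimal sketch of producing such a tab-separated file with Python's csv module follows; the column layout (e.List item, dimension member, value) and the file name are hypothetical, so match them to your application's actual im_ table:<br />

```python
import csv

def write_import_file(path, rows):
    # Tab-delimited rows, one per data cell to load.
    with open(path, "w", newline="") as f:
        csv.writer(f, delimiter="\t").writerows(rows)

# Hypothetical e.List item / dimension member / value triples.
write_import_file("im_budget.txt", [
    ("Paris", "Travel", 1200),
    ("Paris", "Salaries", 54000),
])
```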

Alternatively, you may choose a tool, such as IBM Cognos Data Manager, to populate the tables<br />

directly.<br />

Whichever method you choose, the import data is processed and compressed by the Prepare Import<br />

job and data is made available to web client users by the Reconcile job. The Prepare Import job<br />

retrieves data from the datastore, processes it, and reinserts it into the application datastore in XML<br />

format.<br />

Reporting Data: Understanding the Publish Job Process<br />

When publishing data, <strong>Contributor</strong><br />

● publishes to a separate datastore, where data expands from a compressed large object XML<br />

format to a relational tabulated format<br />

In this scenario, publishing data may require increased database processing resources. Tuning<br />

logging, tablespace, and database parameters can help minimize the overall impact on your<br />

database performance.<br />

● using import load replace, truncates potentially large tables (or issues a large number of simple<br />

SQL insert statements) at the start of the job, followed by a bulk load of data per node per cube,<br />

plus annotations<br />

Logging needs to be monitored in this case to avoid a drop in performance.<br />

The process of publishing data consumes more storage because of how objects are represented.<br />

While a conventional relational table can contain LOB columns, the Publish process uses LOBs to<br />

hold encoded data, which expands when transformed into simpler relational representations,<br />

increasing storage requirements further.<br />

The administrator can limit the data to be published, and so its volume, by adding a<br />

publish data dimension.<br />

Publish is broken up into units of work and processed via the job cluster. Data is uploaded using<br />

the DB2 import utility.<br />

<strong>Contributor</strong> supports options to accumulate all the data into large text files before uploading to<br />

the target tables in the publish datastore. This is an interrupted publish. It does not reduce the size<br />

of the publish data but it may fit more easily into enterprise procedures.<br />

If an attempt at loading data fails because of inadequate disk space, the Publish job is canceled.<br />

After you have allocated more tablespace, the <strong>Contributor</strong> <strong>Administration</strong> Console user should<br />

run the job again from the beginning.<br />

If IBM Cognos 8 <strong>Planning</strong> fails to create a table during Publish, then the next time Publish is run,<br />

the application attempts to create the table again.


Appendix B: Troubleshooting the Generate<br />

Framework Manager Model Extension<br />

Use this troubleshooting information to help solve problems you may encounter when generating<br />

Framework Manager models.<br />

Unable To Connect to the Database While Using Oracle<br />

To correct an error message that Generate Framework Manager Model cannot find the correct<br />

ODBC Driver (installed when you install Oracle), specify which Oracle ODBC driver to use in<br />

Configuration Manager.<br />

Note: Make sure you specify the Oracle driver and not the Microsoft ODBC Driver for Oracle.<br />

Unable to Create Framework Manager Model<br />

You may get an error message stating that you are unable to create a Framework Manager model<br />

using the Framework Manager Script Player. A log file is created at installation_Location\<br />

DOCUME~1\cognos01\LOCALS~1\temp\6\BMTReport.log.<br />

This log file has two sections. The first section is the output generated by the Framework Manager<br />

Script Player. The second section contains the actions that were executed.<br />

Search on the word "skip" to see the errors in the log file.<br />
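
That search can be scripted; this sketch pulls the lines of the log text that mention "skip", which mark the failed actions:<br />

```python
def skipped_actions(log_text):
    """Return log lines containing 'skip' (case-insensitive)."""
    return [line for line in log_text.splitlines()
            if "skip" in line.lower()]

sample = """Action: CreateQuerySubject succeeded
Action: FindOrCreateDataSource failed, skipping...
Reason: QE-DEF-0285 Logon failure."""
print(skipped_actions(sample))
```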

In the Framework Manager Script Player section, look to see if the log file contains the following<br />

error (the database values may differ):<br />

Action: FindOrCreateDataSource failed, skipping…<br />

Reason: QE-DEF-0285 Logon failure.<br />

QE-DEF-0321 The userid or password is either missing or invalid.<br />

QE-DEF-0068 Unable to connect to at least one database during a multi-database attach to 1<br />

databases(s) in: test_sales_market_table<br />

UDA-SQL-0031 Unable to access the "test_sales_marketi_table" database.<br />

UDA-SQL-0129 Invalid login information was detected by the underlying database.<br />

ORA-01017: invalid username/password; logon denied<br />

This error has the following potential causes:<br />

● The user running the Generate Framework Manager Model does not have access to the Signon<br />

created for the data source connection.<br />

● You are using Oracle or DB2 in a multi-computer environment and the configuration to access<br />

the datastore is not configured in the same way on all computers.<br />


Unable to Retrieve Session’s Namespace<br />

You may receive error #CM-REQ-4159 stating that the session’s namespace cannot be retrieved.<br />

This error may occur<br />

● at the very end of the Generate Framework Manager Model process when the Finish button is<br />

pressed and the system is trying to generate the Framework Manager model<br />

● when testing the Gateway URL from Generate Framework Manager Model in a distributed<br />

environment where the IBM Cognos 8 BI Server is on the same computer as IBM Cognos 8<br />

<strong>Planning</strong><br />

To resolve this issue, delete the directory data source, and publish to a new container.<br />

Steps<br />

1. Stop and restart the services.<br />

2. In IBM Cognos Connection, in the upper-right corner, click Launch, IBM Cognos Administration.<br />

3. On the Configuration tab, click Data Source Connection. Delete any directory data sources.<br />

4. In Analyst, republish the data to a new publish container.<br />

Unable to Change Model Design Language<br />

You cannot change the design language of the model that the Generate Framework Manager Model<br />

Wizard creates. It is always English.


Appendix C: Limitations and Troubleshooting when<br />

Importing IBM Cognos Packages<br />

Use the following limitations and troubleshooting information to help solve problems you may<br />

encounter when importing an IBM Cognos Package into IBM Cognos <strong>Planning</strong>.<br />

Limitations for Importing IBM Cognos Packages<br />

The following are the known limitations for importing IBM Cognos packages into IBM Cognos<br />

<strong>Planning</strong>.<br />

Aggregation of Semi-Additive Measures in SAP BW<br />

The aggregation of semi-additive measures in SAP BW (aggregation exceptions for key figures in<br />

SAP BW terms) is not supported in this release. An example of a semi-additive measure is anything<br />

that can be classed as a movement, such as stock or headcount numbers. These measures can be<br />

aggregated across some dimensions but not all.<br />

These measures are fully supported by the IBM Cognos SAP BW OLAP provider when used on its<br />

own. Except for some aggregations that are not supported in Framework Manager, semi-additive<br />

measures in other OLAP providers and relational sources are supported.<br />

Tips:<br />

● Extract the measures via the OLAP interface in a separate <strong>Administration</strong> link.<br />

● IBM Cognos <strong>Planning</strong> supports a wide range of aggregation types, for example, weighted<br />

averages. You can load the leaf-level values from SAP BW into IBM Cognos <strong>Planning</strong> for<br />

aggregation. This requires that IBM Cognos <strong>Planning</strong> is at the same level of aggregation as SAP<br />

BW, which might require a change to IBM Cognos <strong>Planning</strong> or SAP BW.<br />

● If you have Data Manager installed, and have a good working knowledge of it, you can bypass<br />

the <strong>Administration</strong> link and achieve the desired result for most aggregation types by moving<br />

the data directly into the IBM Cognos <strong>Planning</strong> import tables.<br />

Aggregation Support<br />

SAP BW aggregation types that are not supported by Framework Manager, but that are supported<br />

in IBM Cognos 8 queries by pushing the aggregation to the SAP BW system, are not supported by<br />

the new IBM Cognos <strong>Planning</strong> access method for SAP BW.<br />

Tip: You can load the leaf-level values and do the aggregation in IBM Cognos <strong>Planning</strong> where more<br />

complex aggregations can be achieved, but there are some aggregations that cannot be replicated.<br />

This also requires that IBM Cognos <strong>Planning</strong> is at the same level of aggregation as SAP BW, which<br />

might require a change to IBM Cognos <strong>Planning</strong> or SAP BW. You can alternatively use the OLAP<br />

interface.<br />


<strong>Administration</strong> Links Rows Rejected<br />

It can be difficult to know which rows have been rejected or inserted by the Data Movement Service.<br />

Tip: Switch the flags for write<strong>Contributor</strong>XML and writeDmsSpecOnSuccess from False to True in<br />

the \Cognos\c8\bin\dataimportserviceconfig.xml file.<br />

A generated file named dmresult_&lt;adminlink name&gt;_&lt;timestamp&gt;.xml can be used to see records<br />

that are inserted, updated, or rejected.<br />

<strong>Administration</strong> Links with IBM Cognos Package as Source<br />

<strong>Administration</strong> links that have an IBM Cognos package as the source can move data only into<br />

Development, not Production.<br />

Tip: Use Macros to run <strong>Administration</strong> links and add a Go to Production step to move the data<br />

into the Production application automatically.<br />

<strong>Administration</strong> Links and Marked Data<br />

When two or more numeric Query Items are marked as data and mapped to the same target, the<br />

data is not aggregated; instead, two rows are delivered to the import table.<br />

The import table itself takes the last entry as the loaded entry, so if there are two deliveries into the<br />

import table for the same target cell, then the last entry is used. This is normal behavior for the<br />

import table.<br />

Tip: Remodel the data in the source or in Framework Manager to avoid this scenario.<br />

Non-Numerics Marked as Data Mapping<br />

Non-numeric Query Items, such as text or dates, when marked as data in planning links, can be<br />

mapped only 1-to-1 in the manual mapping part of the links (both <strong>Administration</strong> links and D-Links).<br />

A link cannot contain non-numeric Query Items marked as data if it has any 1-to-many mappings<br />

of marked data Query Items, even if only numerics are actually mapped 1-to-many. Only numerics<br />

can be in such links.<br />

Tip: Create separate links for the numerics and non-numerics. If non-numerics need to be mapped<br />

1-to-many, then adapt the IBM Cognos <strong>Planning</strong> model to run the value in once and perform<br />

the 1-to-many mapping using D-Links. You can create multiple <strong>Administration</strong> links if the model<br />

cannot be changed.<br />

Framework Manager Model and SAP BW Usage Limitation<br />

When using the SAP BW feature in IBM Cognos <strong>Planning</strong>, you can use only Single-Provider InfoCube<br />

objects, not BEx Queries or InfoQueries.<br />

Tip: There is currently no workaround for this except to use the SAP BW OLAP interface.<br />

Framework Manager Expressions or Concatenations<br />

Framework Manager Expressions such as concatenations or IF THEN ELSE statements do not<br />

work across the OLAP and relational elements of the Framework Manager model, so a statement<br />

cannot include references to both the relational and the OLAP parts of the Framework Manager<br />

model.


Using BW OLAP Queries<br />

If you use BW OLAP queries for IF THEN ELSE statements, then wrap string values in quotes and<br />

do not use a mixture of string and numeric values in the same expression.<br />

Multi-Provider SAP BW Cubes Not Supported<br />

Multi-provider SAP BW cubes are not supported in the new SAP BW data access method for<br />

<strong>Planning</strong>.<br />

Tips:<br />

● Use the InfoCubes that underpin the multi-provider<br />

● Create an InfoCube specifically for IBM Cognos <strong>Planning</strong><br />

● Use the IBM Cognos OLAP interface to SAP BW<br />

<strong>Administration</strong> Link and Data Movement<br />

Because of a limitation in the Data Movement Service, an individual <strong>Administration</strong> link element<br />

(one query) can only run against one processor, so additional processors do not make an individual<br />

link element perform better.<br />

Tip: To improve performance, you can create separate link elements within a single link so that when<br />

the link executes, the link elements will be executed in parallel. Or you can create separate links<br />

and run them in parallel.<br />

Model Properties Not Automatically Updated<br />

Model properties are not automatically updated when new objects are imported into the Framework<br />

Manager model.<br />

For example, if the model builder imports only some of the dimensions from a cube and then creates<br />

a detailed fact query, and then imports additional dimensions, the new dimensions will not have<br />

an important property set that is required by IBM Cognos <strong>Planning</strong>. The detailed fact query subject<br />

does not have the correct properties set for the dimensions added after the fact query was created.<br />

Tip: The workaround is to delete the Detailed Fact Query Subject and recreate it.<br />

Publishing Multiple InfoCubes in One Package<br />

It is possible to have more than one InfoCube in a Framework Manager model and to run the<br />

Detailed Fact Query Subject feature against each of them. It is then possible to publish the entire<br />

model as a Package; however, IBM Cognos <strong>Planning</strong> can use the Package only if the IBM Cognos <strong>Planning</strong><br />

user uses metadata from one InfoCube per link.<br />

Tip: Publish each InfoCube within a model in its own Package.<br />

IBM Cognos Package Security<br />

Only IBM Cognos Packages with an assigned database signon can be used with IBM Cognos<br />

<strong>Planning</strong> since no log-on dialog box is displayed during import.<br />


SAP BW Hierarchies<br />

Members in SAP BW hierarchies must have unique business keys, as assigned to the Business Key<br />

role in Framework Manager, across all levels.<br />

When working with SAP BW data, the Dimension Key field for any dimension should be hidden<br />

in the Model (not the Package), for both the OLAP and Detailed Fact Query Subject access, before<br />

the Package is published. It is not intended for direct use from within IBM Cognos <strong>Planning</strong>.<br />

Query Prompts<br />

Query Prompts defined in Framework Manager are not supported in any IBM Cognos <strong>Planning</strong><br />

links.<br />

Tip: Change the model in Framework Manager.<br />

SAP Variables Not Supported<br />

SAP variables that generate Prompts in IBM Cognos 8 Business Intelligence are not supported by<br />

IBM Cognos <strong>Planning</strong>.<br />

Tip: Do not use the SAP variables when the package will be consumed by IBM Cognos <strong>Planning</strong>.<br />

Troubleshooting Modeled Data Import<br />

Use this troubleshooting information to help solve problems you may encounter when importing<br />

data from an IBM Cognos Package.<br />

You can troubleshoot modeled data import functionality in the following ways:<br />

● Viewing generated files that contain information about errors<br />

● Using error messages to troubleshoot<br />

● Using techniques to troubleshoot problems with an import<br />

Viewing Generated Files<br />

There are several files available to help troubleshoot the Modeled Data Import functionality. All<br />

of the files are written to your temp directory, usually: C:\Windows\Temp.<br />

Some of the files are generated automatically, but for others you must activate the generation of<br />

the files.<br />

Files Generated Automatically<br />

● &lt;link name&gt;_Result.xml. On link error it is not produced. On link success it is always<br />

produced.<br />

● &lt;link name&gt;_ImportFile.xml. Deleted by <strong>Contributor</strong>/Analyst if link is successful.<br />

● &lt;link name&gt;.cmd. Deleted by <strong>Contributor</strong>/Analyst if link is successful.


Files Manually Activated<br />

● contribXml_&lt;timestamp&gt;.xml. On link error: optionally by dataimportserviceconfig.xml; on<br />

link success: optionally by dataimportserviceconfig.xml.<br />

● dmspec_&lt;adminlink name&gt;_&lt;timestamp&gt;.xml. On link error: always; on link success:<br />

optionally by dataimportserviceconfig.xml.<br />

● dmresult_&lt;adminlink name&gt;_&lt;timestamp&gt;.xml. On link error: always; on link success:<br />

optionally by dataimportserviceconfig.xml.<br />

● dmrejected_&lt;adminlink name&gt;_&lt;timestamp&gt;.xml. On link error: optionally by<br />

dataimportserviceconfig.xml; on link success: optionally by dataimportserviceconfig.xml. In both cases, the<br />

file is written only if the switch is on and the link rejects rows.<br />

Steps to Activate the Files<br />

1. Open the dataimportserviceconfig.xml file in the bin directory. This file contains parameters<br />

for the Modeled Data Import component.<br />

2. Switch the flag from False to True for the following tags:<br />

● write<strong>Contributor</strong>XML<br />

● writeDmsSpecOnSuccess<br />

Setting the value to true will cause the file to be written when the import succeeds.<br />

Note: For writeRejectedRowsToFile, if you set the flag to True, rejected rows will be written upon<br />

a success or failure. If you set the flag to False, then no rejected rows will be written, whether there<br />

is a success or failure.<br />
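
The steps above can be sketched as follows, assuming each flag is an element whose text holds False or True; verify the actual layout of your dataimportserviceconfig.xml before scripting against it:<br />

```python
import xml.etree.ElementTree as ET

def enable_flags(xml_text, flags=("writeContributorXML",
                                  "writeDmsSpecOnSuccess")):
    """Set the named flag elements to True and return the new XML."""
    root = ET.fromstring(xml_text)
    for name in flags:
        for elem in root.iter(name):
            elem.text = "True"
    return ET.tostring(root, encoding="unicode")

# Assumed element-per-flag layout, for illustration only.
sample = ("<config><writeContributorXML>False</writeContributorXML>"
          "<writeDmsSpecOnSuccess>False</writeDmsSpecOnSuccess></config>")
print(enable_flags(sample))
```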

By default, this file is saved in the temporary directory where Data Manager is installed. You can<br />

change the location by changing the rejectedRowsFilePath value, using the syntax C:\\&lt;directory&gt;\\&lt;directory&gt;.<br />

The following files are generated automatically to help you troubleshoot the import functionality.<br />

&lt;link name&gt;.cmd<br />

Used with &lt;link name&gt;_ImportFile.xml, this file can be used to rerun the import outside of<br />

<strong>Contributor</strong>. This can be useful if it is unclear whether the Modeled Data Import is actually being<br />

executed. Problems might be uncovered by running the import outside of <strong>Contributor</strong>.<br />

Double-clicking the file will execute it. By adding a pause command after the Java command inside the file,<br />

the command file window will stay open until a key is pressed.<br />

Example: testAnalystFileWrite1.cmd<br />

This file is created by <strong>Contributor</strong> and Analyst, and is deleted after the Modeled Data Import<br />

completes successfully. If the Modeled Data Import data fails, the file remains. The file contains<br />

the commands and parameters to run the Modeled Data Import. It also contains a valid passport<br />

that is only good until it expires. If the valid passport is copied along with &lt;link name&gt;_ImportFile.xml<br />

while the import is occurring, a copy of the file can be used later. If the passport in a cmd file has<br />

expired, the expired passport can be replaced with a valid one. A valid passport can be taken from<br />

a recently created cmd file.<br />

&lt;link name&gt;_Result.xml<br />

This file is used by <strong>Contributor</strong> to determine the success or failure of the import.<br />

Example: TestCaseSap2_Result.xml<br />

If the link executed successfully, this file contains a subset of the results contained in the<br />

dmresult_&lt;adminlink name&gt;_&lt;timestamp&gt;.xml file. Also, if the dmrejected file (described below) is created,<br />

this file will contain a message that lets you know where the dmrejected file can be found.<br />

<strong>Contributor</strong> reads this file to get the resulting status of the import. This file name doesn't contain a<br />

timestamp. If the link failed, this file will contain a portion of the exception message that can be<br />

found in the <strong>Planning</strong> error log.<br />

&lt;link name&gt;_ImportFile.xml<br />

Used with &lt;link name&gt;.cmd, this file can be used to rerun the import outside of <strong>Contributor</strong>.<br />

This file is created by <strong>Contributor</strong> and Analyst, and is deleted after the Modeled Data Import<br />

completes successfully. If the Modeled Data Import data fails, the file remains.<br />

This file contains the commands and parameters to run the Modeled Data Import, information<br />

about the matched dimensions, unmatched dimensions, data dimensions, and import table connection<br />

info and column names.<br />

The following files must be manually generated to help you troubleshoot the import functionality.<br />

contribXml_&lt;timestamp&gt;.xml<br />

This file can be used to check if <strong>Contributor</strong> or Analyst is producing a valid file for Modeled Data<br />

Import. You can use this file to step through the Modeled Data Import code, if the model and cube<br />

can be reproduced, or access to model and cube are provided.<br />

Example: contribXml__Wed Jan 10 12_29_13 CST 2007.xml<br />

This file contains the adminlink xml that the <strong>Contributor</strong> Application or Analyst model has sent<br />

to the Modeled Data Import. It also contains information about the matched dimensions, unmatched<br />

dimensions, data dimensions, and import table connection info and column names.<br />

dmspec_&lt;adminlink name&gt;_&lt;timestamp&gt;.xml<br />

This file can be validated against Data Manager's Data Movement Service schema for<br />

well-formedness and proper content. It can also be used to create a Data Manager package. Packages<br />

can be imported into the Data Manager user interface to be inspected and executed. Problems with<br />

the spec can be discovered when creating the package and executing the package within the Data<br />

Manager user interface.<br />

Example: dmspec_TestCaseSap4_Wed Jan 10 12_29_32 CST 2007.xml. This file contains the Data<br />

Manager's Data Movement Service spec file. These are the commands sent to Data Manager to<br />

import the data from a source to the target.


dmresult_&lt;adminlink name&gt;_&lt;timestamp&gt;.xml<br />

This result information, before it's written out, is used to create the &lt;link name&gt;_Result.xml<br />

file. The &lt;link name&gt;_Result.xml file is a subset of the information in this file.<br />

Example: dmresult_TestCaseSap4_Wed Jan 10 12_29_32 CST 2007.xml<br />

This file contains the result of executing the spec in the Data Movement Service. If the import was<br />

successful, the dmresult file will contain 'T' for the componentSuccess element, the number of<br />

rows read from the datasource, the number of rows rejected, and the number of rows inserted into the<br />

import table or written to the Analyst output file.<br />

If unsuccessful, the dmresult file will contain either 'F' for the componentSuccess element and<br />

useful error message information, or no information at all.<br />
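
Based on this description, a success check of a dmresult file might look like the sketch below; the componentSuccess element name and the file layout are assumptions taken from the text, so inspect a real dmresult file before relying on it:<br />

```python
import xml.etree.ElementTree as ET

def import_succeeded(dmresult_text):
    """True only if a componentSuccess element exists and holds 'T'."""
    node = ET.fromstring(dmresult_text).find(".//componentSuccess")
    return node is not None and (node.text or "").strip() == "T"
```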

dmrejected_&lt;adminlink name&gt;_&lt;timestamp&gt;.xml<br />

This file is a raw output of rows that were rejected in processing the link within Data Manager.<br />

These rows come directly from the datasource that the link reads. Rows are rejected when data<br />

from the query items do not match expected target allocations in the target cube. For most data<br />

sources, the rejected rows will contain data from the query items the link references, making<br />

troubleshooting easier because, for example, the value in the rejected file would contain the<br />

descriptions. However, for SAP <strong>Administration</strong> links where the Detailed Key Figures performance<br />

enhancement is being used, the rejected rows will contain key values, not descriptions.<br />

Example: dmrejected_TestCaseSap4_Wed Jan 10 12_29_32 CST 2007.xml<br />

Using Error Messages to Troubleshoot<br />

Occasionally import links will fail. Likely causes of these failures are problems in interpreting<br />

the model metadata when determining what should be imported, use of fields that aren't allowed in<br />

links, or certain limits in XML parsing or data retrieval/filtering being exceeded.<br />

When a failure occurs, error information is written to the <strong>Planning</strong>ErrorLog.csv file. Error messages<br />

specific to the processing of the link itself will have "Admin Links Import" in the Component<br />

column (regardless of whether Analyst or <strong>Contributor</strong> ran the link), "Data Import Service" in the<br />

Source column, and the error message in the Error Description column.<br />

The following error messages may occur when importing data from an IBM Cognos Package. View<br />

the description and fix options for each error message to help you determine how to make the<br />

link run.<br />

Added Query Items with Concatenation in the Expression<br />

Error Message: DS-DBMS-E402: COGQF driver reported the following:~~COGQF failed to execute<br />

query - check logon / credential path~~DS-DBMS-E402: COGQF driver reported the following:<br />

~~GEN-ERR-0015 Initially, in data source type(s) 'BW', function 'ces_concatenate' is not<br />

supported in 'OlapQueryProvider'. After decomposition, in data source type(s) 'BW',<br />

Description: Query Items added to a Framework Manager model under a Query Subject that use<br />

the concatenation functions ('||' or '+') in the Query Item's expression are not supported in this<br />

release. The query engine used to retrieve data when processing the link is not able to handle these<br />

query items. It does not matter if the link is SAP or not.<br />

Fix: Added Query Items with concatenation in the expression can be used within Analyst to build<br />

D-Lists that can be included in a cube. However, these same query items cannot be used as a source<br />

query item within a link in Analyst or <strong>Contributor</strong>. To get the appropriate mapping to occur when<br />

choosing the source query items for the link, pick one of the query items used in the concatenation<br />

expression. Then, when mapping to the target dimension that includes the concatenated values,<br />

map the single source query item to the target dimension and use a sub-string on the target dimension<br />

to achieve the appropriate mapping.<br />

Expected ConformanceRef Not Found<br />

Error Message: Error encountered in Modeled Data Import of the Data Import Service.~~Caused<br />

by: java.lang.RuntimeException: Processing of this link has been aborted because the link contains<br />

a mixture of query items with and without the metadata conformanceRef attribute. The<br />

conformanceRef attribute for query item [SottBwp1NikeBw].[Orders].[New Query Subject].[TestQueryItem]<br />

cannot be determined. This link will have to be run using a package that doesn't contain the Detailed<br />

Fact Query Subject (aka Detailed_Key_Figures).~~~at<br />

Description: ConformanceRef is a hidden Framework Manager attribute used to link the OLAP<br />

query items to the relational query items and exists when the Detailed Fact Query Subject is created<br />

for a SAP model. When processing a link with a SAP model that has the Detailed Fact Query Subject<br />

created and a query item is discovered in the link that cannot be linked to the Detailed Fact Query<br />

Subject, an exception occurs. Examples are a query item in a dimension or the Key<br />

Figures that has been added to the model since the Detailed Fact Query Subject was created, or<br />

a query item added under a query subject folder.<br />

Query items like this have an expression that pulls values from one or more dimension or Key Figure<br />

values. These can never be linked to the Detailed Fact Query Subject. If the first query item of the<br />

link can't be referenced to the Detailed Fact Query Subject, then the Detailed Fact Query Subject<br />

won't be used for the entire link, and the link should run successfully. But, if a query item that can't<br />

be linked to the Detailed Fact Query Subject is processed after one or more query items that can<br />

be linked, then the link will fail.<br />

Fix: Deleting and regenerating the Detailed Fact Query Subject and republishing the package will<br />

fix query items added to the Key Figures or a dimension.<br />

For query items added to a query subject, deleting but not regenerating the Detailed<br />

Fact Query Subject, and then republishing the package, will allow the link to run. However, the added<br />

query item can't contain an expression using concatenation.<br />

Entity Expansion Limit exceeded<br />

Error Message: com.cognos.ep.dataimportservice.modeleddataimport.ModeledDataImport.main<br />

(Unknown Source)~~Caused by: org.apache.axis.AxisFault: ; nested exception is: ~~org.xml.sax.<br />

SAXParseException: Parser has reached the entity expansion limit "64,000" set by the<br />

Application.~~~at org.apache.axis.AxisFault.makeFault(AxisFault.java:129)


Note: The above error message occurred when the expansion limit was set to 64,000. The error<br />

message would refer to 200,000 if a client were to encounter the error with the IBM Cognos default<br />

setting.<br />

Appendix C: Limitations and Troubleshooting when Importing IBM Cognos Packages<br />

Description: Framework Manager import uses the Java coding language when processing the link<br />

information from Analyst or <strong>Contributor</strong>. Java XML parsers have a built-in default limitation on<br />

how large an XML document can be. Exceeding this limitation will cause an exception to be thrown<br />

and the link will fail. The Framework Manager import code provides a configurable override to<br />

this limit. The Java limit of 64,000 has been increased to 200,000 by the override value. However,<br />

it might be possible to build a link that exceeds this increased limit.<br />

Fix: Open the dataimportserviceconfig.xml file located in the bin folder of the IBM Cognos installation directory. Find the parameter named "EntityExpansionLimit" and increase its value; a suggested value is 300,000. Increasing the expansion limit may mean that more internal memory is needed, causing an out of memory problem if the maximum heap size is not also increased. Find the JavaMaximumHeapSize parameter and increase that as well. Doubling the value to 256M should be safe for 300,000, but memory limitations on the machine may still cause out of memory issues.<br />
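As an illustrative sketch only (the element names and layout shown here are assumptions; the exact syntax may differ by release, so match the structure already present in your copy of dataimportserviceconfig.xml), the two edited parameters might look like:<br />

```xml
<!-- Hypothetical fragment of dataimportserviceconfig.xml; follow the
     element structure in your installed file. -->
<parameter name="EntityExpansionLimit" value="300000"/>
<parameter name="JavaMaximumHeapSize" value="256M"/>
```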

Too Many Filter Values<br />

Error Message: ~DS-DBMS-E400: COGQF driver reported the following on connection &apos;3&apos;:~~Unhandled Exception~~databuild -- failed<br />

Description: There is a limitation on the number of filter values in the IBM Cognos query engine; exceeding that limit by building a query with a very large number of filter values causes the link to fail. This can happen in Analyst when building a D-Link with one or more matched dimensions mapping to large cube dimensions. It happens in Analyst or Contributor when the link contains one or more unmatched source dimensions that are filtered with a large number of values. It happens in Contributor when the link contains one or more matched dimensions manually mapped with a large number of values. Look at the link itself to determine whether the quantity of filters might be causing the problem; problems begin at around 300 total filter values. If this error message is encountered and the link deals with a large number of filter values, the fix described below is the best way to get the link to run.<br />

Fix: Open the qfs_config.xml file in the configuration folder under the IBM Cognos 8 installation directory. In the provider element named "OlapQueryProvider", add the following:<br />

<br />

Save the file and restart the IBM Cognos 8 service.<br />

Note: Turning this parameter on affects all queries, not just queries in Framework Manager links. Performance may suffer while it is on, but it can be turned off or removed from the configuration file after the link has executed.<br />

Comparing _businesskey With Random Values<br />

Error Message: OP-ERR-0070 'Customer' can only be compared to actual values of the query item.<br />

Filters referring to arbitrary string constants should be applied to attribute query items.<br />

<strong>Administration</strong> <strong>Guide</strong> 371



Description: In the SAP models, each level in a hierarchy contains a query item that has a role of<br />

_businessKey. This query item is not intended for use in links, and therefore should not be used.<br />

This query item is a special field that contains specific key values. If the query item is compared to<br />

values that are not in the field's domain of values, an exception is thrown.<br />

Fix: Since these query items are not intended for use in links, they should be hidden from view in<br />

the model (not the package) - both for the OLAP and Detailed Fact Query Subject access before<br />

the package is published.<br />

Techniques to Troubleshoot Problems with an Import<br />

Use the following techniques to troubleshoot problems with the modeled data import functionality.<br />

Using Data Manager User Interface to Troubleshoot an Import Problem<br />


Perform the following steps to create a Data Manager package and import the package into the<br />

Data Manager user interface.<br />

Steps<br />

1. Create a .bat file to convert the dmspec__.xml file into a package file.<br />

● The command in the .bat file is:<br />

"Cognos Installation Directory\bin\CatAdapterTest" -x "D:\DMSpec\generatePkg\TestCaseSap4.xml" -p "D:\DMSpec\generatePkg\TestCaseSap4.pkg"<br />

where "D:\DMSpec\generatePkg\TestCaseSap4.xml" is the dmspec file to process and "D:\DMSpec\generatePkg\TestCaseSap4.pkg" is the resulting package file. In this example the files have been copied or renamed to names that are easier to type. The paths to the .xml and .pkg files must match your computer's directory structure.<br />

● Adding a pause command on the second line keeps the command window open after the package creation finishes. This is useful if the package creation fails, so that you can see the messages.<br />

2. Run the .bat file. If successful, a .pkg file is created. If unsuccessful, an error message explaining why the package could not be created is displayed.<br />

3. Open the Data Manager user interface, then open an existing catalog or create a new catalog.<br />

4. Import the package file to create a build. From the File menu, click Import Package and navigate to the package file that you just created. It is not necessary to back up the catalog.<br />

From the package file, a new build appears under the Builds and JobStreams node. You can click the new build to see a graphical representation of it.<br />

5. Fix the Connection. The build will not run until the Framework Manager package has been<br />

associated with the build's source connection and the target connection is correct.


● Click the new build and note the number of the source connection that appears on the very<br />

left. Also, note the name of the target connection, on the far right of the build. This is<br />

usually CON1, or something similar.<br />

● Expand the Library/Connections e.List and find the corresponding connections.<br />

● Right-click the source connection and click Properties. Click the Connection Details tab.<br />

● Connection types will be selected on the left. Click Published Framework Manager Package.<br />

On the right, the Package box will be empty.<br />

● Click the … button. An IBM Cognos 8 logon window appears, followed by a list of published Framework Manager packages. Select the appropriate package and click OK.<br />

● From the Connection Properties window, click Test Connection to verify that the connection<br />

is now good. Click OK.<br />

● Right-click the target connection and click Properties, and then click Connection Details.<br />

● Verify that the Connection Details are correct. Click Test Connection. Click OK to save<br />

any changes.<br />

● Click Save.<br />

6. Highlight the build and click Execute. Even if the build is highlighted, it won't execute unless<br />

it was the last thing clicked.<br />

A command window will open showing the status and results of the build execution.<br />

7. To diagnose problems with the V5 query, right-click the datasource icon in the build and click Properties. Select one row, click the Query tab, and click Run. Problems with the V5 query are displayed as it runs.<br />

Rerunning an Import Outside of <strong>Contributor</strong><br />

If the import fails, the cmd and _Importfile.xml files remain in the directory. If the import succeeds, the two files are deleted. If you want to re-run a successful import, you have to copy the two files after the import starts but before it ends. Remember that the cmd file contains a passport; once the passport expires or the IBM Cognos 8 server is restarted, the passport is useless.<br />

Note: Check the status of an INTER_APP_LINKS job in the Administration Console. If the cmd file has been deleted, its contents can be copied from the Failure Information dialog box.<br />

Steps<br />


1. Edit the cmd file and add a line at the end of the file that contains pause. This will keep the<br />

command window open after the import finishes.<br />

Another option is to redirect the cmd file output to a text file. Add >> c:\windows\temp\linkoutput.txt to the end of the first line. This redirects the output to the linkoutput.txt file, which can be viewed after the cmd file finishes executing.<br />




2. Save the changes.<br />

3. Locate and double-click the cmd file to view the output from the import. The dmspec, dmresult, and _Result files are also created, just as when the import runs within Contributor.<br />

Keeping the Analyst Export File Created by Data Manager<br />

After a run loads data into an Analyst cube, the temp file created by Data Manager is typically deleted. If you wish to keep that file for debugging purposes, add a registry key named DropExportFile in the Analyst settings registry folder, and set the value to 0 (zero).<br />

Job Timeouts<br />

An Administration link that executes for more than 30 minutes may appear to have timed out, showing up as Cancelled in the Monitor Links section of the Administration Console.<br />

Even though the link is marked as failed or cancelled, it may still be running. Look in Task Manager for dmrunspec processes; there is one for each link element in the link.<br />

To increase the timeout, edit epJobExecutorResources.xml, located at \cognos\c8\bin, and increase the value for Wait this long to see if RUNNING Job Items complete. The default setting is 1800 seconds (30 minutes). The file is installed as read-only, so we recommend that you back up the file and reset the read-only flag to writeable. After changing this setting, stop and restart the Planning service on the machine that is executing the link.<br />


Appendix D: Customizing IBM Cognos 8 <strong>Planning</strong><br />

- <strong>Contributor</strong> Help<br />

This section provides extra information about creating information for planners and reviewers.<br />

Creating Cube Help<br />

Detailed Cube Help<br />

You can create help for each cube. There are two types of help:<br />

● Simple cube help: This is one line (the limit is 200 characters) of plain text only. This appears<br />

at the top of the grid, below the tabs. For more information, see "Creating General Messages<br />

and Cube Instructions" (p. 78).<br />

● Detailed cube help. This appears as a separate web page when the user clicks the Help button<br />

at the top of the grid. This is described in the following sections.<br />

The administrator writes the help text either in plain text, or using HTML formatting.<br />

You can customize the format of the detailed cube help using HTML text formatting (p. 375). If<br />

you choose to use no formatting, the help will display in a standard format.<br />

After the Contributor application has been made live and the user has opened the application and loaded the grid, the user accesses the cube help by clicking the Help button at the top of the grid.<br />

If you do not write any help for the cube, the default Contributor browser help is displayed.<br />

Using HTML Formatting<br />

Sample HTML text<br />

Use basic HTML tags to apply text formats to text in Instructions. In addition to formatting text,<br />

tags can be used to include hypertext links and images.<br />

Some basic rules for using HTML text tags:<br />

● Text tags are used in pairs, with the text they alter between them. For example, &lt;p&gt;my text here&lt;/p&gt;.<br />

● A start tag consists of a left and a right angle bracket with a tag name in between. For example, &lt;p&gt;.<br />

● An end tag consists of a left and a right angle bracket with a forward slash followed by the tag name in between. For example, &lt;/p&gt;.<br />

Sample Help Text<br />

&lt;h1&gt;Sample Help Text&lt;/h1&gt;<br />

&lt;h2&gt;2nd heading level&lt;/h2&gt;<br />

&lt;h3&gt;3rd heading level&lt;/h3&gt;<br />

&lt;p&gt;This is a paragraph.&lt;/p&gt;<br />

&lt;p&gt;This is a paragraph with a hyperlink to the &lt;a href="http://www.ibm.com"&gt;IBM web site&lt;/a&gt;.&lt;/p&gt;<br />

&lt;p&gt;This is a paragraph with a sample e-mail link to &lt;a href="mailto:support@example.com"&gt;E-mail technical support&lt;/a&gt;.&lt;/p&gt;<br />

&lt;ol&gt;<br />

&lt;li&gt;This is the first numbered list item.&lt;/li&gt;<br />

&lt;li&gt;This is the second numbered list item.&lt;/li&gt;<br />

&lt;li&gt;This is the third numbered list item.&lt;/li&gt;<br />

&lt;/ol&gt;<br />

&lt;p&gt;This is another paragraph.&lt;/p&gt;<br />

&lt;ul&gt;<br />

&lt;li&gt;This is a bulleted list item.&lt;/li&gt;<br />

&lt;li&gt;This is another bulleted list item.&lt;/li&gt;<br />

&lt;/ul&gt;<br />

Help text and description:<br />

&lt;h1&gt;Sample Help Text&lt;/h1&gt; : &lt;h1&gt; indicates the start of text that is displayed in heading 1 style, Sample Help Text is the text that is displayed in heading 1 style, and &lt;/h1&gt; indicates the end of heading 1.<br />

&lt;p&gt;This is a paragraph.&lt;/p&gt; : &lt;p&gt; indicates the start of a paragraph and &lt;/p&gt; indicates the end of a paragraph.<br />

IBM web site : This is a hypertext link. For more information, see "Creating Hypertext Links" (p. 377).<br />

E-mail technical support : This is an e-mail link. For more information, see "E-mail Link Example" (p. 378).<br />


Numbered list (&lt;ol&gt; ... &lt;/ol&gt;) : This is a numbered list. The &lt;ol&gt; tag indicates the start of an ordered list and &lt;/ol&gt; indicates the end of the ordered list. &lt;li&gt; indicates the start of a list item and &lt;/li&gt; indicates the end of a list item. You can have as many list items as needed between the &lt;ol&gt; and &lt;/ol&gt; tags.<br />

Bulleted list (&lt;ul&gt; ... &lt;/ul&gt;) : This is a bulleted list. The &lt;ul&gt; tag indicates the start of an unordered list and &lt;/ul&gt; indicates the end of the unordered list. &lt;li&gt; indicates the start of a list item and &lt;/li&gt; indicates the end of a list item.<br />

Using Images, Hypertext Links, and E-Mail Links in <strong>Contributor</strong> Applications<br />

In <strong>Contributor</strong> Help Text, you can enter instructions that appear to users in the <strong>Contributor</strong><br />

application.<br />

Adding images to instructions<br />

You can add images in .jpg or .gif format to instructions, for example, a company logo.<br />

Steps<br />

1. Create a folder for images in the same directory that you have used for the web site.<br />

2. Reference the graphic in the following way:<br />

&lt;img src="http://servername/images/logo.gif"&gt;<br />

3. You must use the full path to reference the image; otherwise it will not display to all users.<br />

Creating Hypertext Links<br />

You can use hypertext links to allow users to jump from a Contributor application to other web pages. You can put links in Instructions and Cube Instructions.<br />

Example<br />

To link to a file named File.html located in the subdirectory Path found on the server www.ibm.com, you enter the following:<br />

&lt;a href="http://www.ibm.com/Path/File.html"&gt;text or image&lt;/a&gt;<br />




E-mail Link Example<br />


You can use e-mail links in Planning Instructions and Cube Instructions to allow users to e-mail someone directly from the Contributor application. When they click the e-mail link, their default e-mail tool is launched.<br />

Example<br />

To add a link to your technical support contact, you could use:<br />

&lt;a href="mailto:support@example.com"&gt;E-mail technical support&lt;/a&gt;<br />

This appears in the Web browser similar to the following:<br />

E-mail technical support


Appendix E: Error Handling<br />

This section covers the following areas:<br />

● How IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong> tracks problems, and information on the files you<br />

may be asked to provide to IBM Cognos Resource Center.<br />

● How to use the epLogfetcher to locate and retrieve different error logging files.<br />

● Timing and logging: provides timing for processes. This is written to a file named PlanningTraceLog.csv, which is in the same location as the PlanningErrorLog.csv file and is useful for troubleshooting.<br />

Error Logs and History Tracking<br />

This section describes the different ways in which <strong>Contributor</strong> tracks problems, and describes the<br />

files you may be asked to provide in order to resolve them.<br />

You may be asked to supply a number of different files to IBM Cognos Resource Center, depending<br />

on the nature of the problem.<br />

Application XML: Contains details about the Contributor application; can represent the Development or Production model. Location: user defined. File name: applicationname.xml.<br />

History tracking: Tracks actions performed by users. Location: database table named history. File name: not applicable.<br />

Administration Console History tracking: Tracks actions performed via the Administration Console/Job system. Location: database table named P_ADMINHISTORY. File name: not applicable.<br />

JCE error logs: Errors with the calculation engine. Location: the local Temp folder (%TEMP%) on the job server, administration machine, or client machine. File name: jce*.tmp.<br />




General Error logs: Errors in the Administration Console. Location: the local Temp folder (%TEMP%) on the administration machine, web server, or client. File name: PlanningErrorLog.csv.<br />

Logging and timing: Provides timing for processes, and verbose logging. Location: the local Temp folder (%TEMP%) on the administration machine, web server, or client. File name: PlanningTraceLog.csv.<br />

These files are described in more detail in the following sections.<br />

Application XML issues<br />

Details about the Contributor application are held in XML format. If there are problems with a Contributor application, you may be asked by IBM Cognos Resource Center to save the XML as it is at that particular state and send the XML file. You can save the state of both the development application and the current production application.<br />

See "Save Application XML for Support" (p. 79) for more information.<br />

Timeout Errors<br />

If you are experiencing timeout errors on long-running server calls, change the default remote service call timeout value (default 480 minutes) to allow for longer calls.<br />

Steps<br />

1. On the System Settings page, click the System Settings tab.<br />

2. Change the default call timeout to allow for longer calls.<br />

For information about the Maximum e.List items to display as hierarchy options, see "Import e.List<br />

and Rights" (p. 96).<br />

History Tracking<br />


The history tracking feature in Application Options (p. 74) tracks the actions performed by users.<br />

When you have Actions timestamps and errors, or Full debug information with data selected,<br />

information is recorded in the database in a table named history.<br />

You can use history tracking if you have problems with, for example:<br />

● Workflow: set history tracking to Actions timestamps and errors.<br />

● Aggregation (if it appears that you have incorrect aggregation): set history tracking to Full debug information with data.<br />
information with data.<br />

The actionid contained in the history table is made up of two codes: a result code and an action<br />

code. The first 2 digits are the result code and the rest make up the action code.<br />

The following table shows the result codes (hexadecimal) and their meanings.<br />

00 Success<br />

01 Not Owner<br />

02 Being Edited<br />

03 Data Changed<br />

04 Annotation Changed<br />

05 Locked<br />

06 Not Locked<br />

07 Not All Children Locked<br />

08 Annotation Backup Failed<br />

09 Data Backup Failed<br />

0A Grantor Locked<br />

0B Already Reconciled<br />

0C Not Initialized<br />

The following table shows the Actionid from the history table and the action that it refers to. Note that actionids are sometimes combined; for example, if a user has made a change and submitted, you might get the Actionid 0x04A0.<br />

0x0000 None<br />

0x0001 Get Data<br />

0x0002 Get Annotations<br />

0x0004 Get Import Block<br />

0x0008 Annotate<br />

0x0010 Edit<br />

0x0020 Save<br />

0x0040 Start<br />

0x0080 Submit<br />

0x0100 Reject<br />

0x0200 Reconcile<br />

0x0400 Release<br />

0x0800 Take Offline<br />

0x1000 Bring Online<br />

0x2000 Check Data Up To Date<br />

0x4000 Update<br />

0x8000 Edit If Owner<br />

See "Change Application Options" (p. 74) for further information.<br />
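The actionid composition described above (the first two hex digits are the result code, the remaining digits are a bitmask of action codes) can be decoded mechanically. The following Python sketch mirrors the two tables; treating all trailing digits as a single bitmask is an assumption based on the description above, not an explicit statement in this guide:<br />

```python
# Decode a Contributor history-table actionid: result code first,
# then a bitmask of action codes (sketch based on the tables above).
RESULTS = {
    0x00: "Success", 0x01: "Not Owner", 0x02: "Being Edited",
    0x03: "Data Changed", 0x04: "Annotation Changed", 0x05: "Locked",
    0x06: "Not Locked", 0x07: "Not All Children Locked",
    0x08: "Annotation Backup Failed", 0x09: "Data Backup Failed",
    0x0A: "Grantor Locked", 0x0B: "Already Reconciled",
    0x0C: "Not Initialized",
}

ACTIONS = {
    0x0001: "Get Data", 0x0002: "Get Annotations", 0x0004: "Get Import Block",
    0x0008: "Annotate", 0x0010: "Edit", 0x0020: "Save", 0x0040: "Start",
    0x0080: "Submit", 0x0100: "Reject", 0x0200: "Reconcile",
    0x0400: "Release", 0x0800: "Take Offline", 0x1000: "Bring Online",
    0x2000: "Check Data Up To Date", 0x4000: "Update", 0x8000: "Edit If Owner",
}

def decode_actionid(actionid: str):
    """Split an actionid string into (result name, list of action names)."""
    digits = actionid.lower().removeprefix("0x")
    result = RESULTS[int(digits[:2], 16)]
    bits = int(digits[2:], 16)
    actions = [name for bit, name in sorted(ACTIONS.items()) if bits & bit]
    return result, actions
```

For example, decode_actionid("0x04A0") yields ("Annotation Changed", ["Save", "Submit"]).<br />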

Calculation Engine (JCE) error logs<br />


JCE error logs can be found either on the Web server, the <strong>Administration</strong> Console machine,<br />

administration server, the job server, or the client. They only appear on the client when there have<br />

been problems using the Web browser.<br />

You may get a server-side JCE error log in the following circumstances:<br />

● the Administration Console crashes<br />

● problems occur when importing the e.List<br />

● problems occur during import<br />

● problems occur during the Go to Production process<br />

● problems occur during synchronize<br />

● problems occur during publish<br />

This is not an exhaustive list, and you may get an error message telling you that a log was created.<br />

To search for a JCE error log, search for JCE*.tmp.<br />

These log files are stored in hidden folders.<br />

General Error Logging<br />

Most areas of the application log their errors to a file named PlanningErrorLog.csv. In Windows NT4, this is stored in the Temp directory. In Windows 2000, it is normally found in:<br />

Documents and Settings\user name\Local Settings\Temp\<br />

These logs can be created on the client, the <strong>Administration</strong> Console machine, administration server,<br />

web server, or the job server. Most components and applications log to this file including IBM<br />

Cognos 8 <strong>Planning</strong> - Analyst.<br />

How Errors are Logged in the <strong>Administration</strong> Console<br />

Errors that occur in the Administration Console are logged to help with bug tracking.<br />

Due to the distributed nature of the execution of <strong>Contributor</strong>, it is also necessary to distribute the<br />

error handling/logging. For example, it does not make sense to log all the errors on the users’ web<br />

client for errors that happen on the web server.<br />

The logs created by components are put on the machine on which they execute.<br />

The log file created is in tab-separated format, but it has a .csv extension so that it automatically loads into Excel when it is double-clicked (if Excel is installed). Excel 97 gives a warning that the format is not recognized, but opens the file without any problems. Opening the file in Notepad makes it difficult to read because the columns do not align, due to the varying length of the log entries. If the log cannot be written, for example because the file is locked by another process or is read-only, the log entry is written to the NT event log (if it exists) and the application fails with a message saying that logs cannot be written. If no log is written for a long time, the application can continue to execute even if logging would cause problems in the future; this is normal "on demand" resource usage as recommended by Microsoft.<br />

During execution you may get an error that passes through multiple components as it works its way up the call stack. In this case there will be entries in the log, with a line number, for each procedure that is affected. For errors to be traced correctly, we need the log for each machine and user context affected by the error. This can be difficult if you do not know where all the code is executing, so it is advisable to send technical support all the logs you can find. We can usually tell if a log entry is missing from the identity information that is associated with each error.<br />

An error log contains the following fields:<br />


GUID: Each distinct error has a unique GUID (a unique identifier) by which it is identified. By looking at entries in the log with matching GUIDs, it is possible to group log entries by particular errors. It is also possible to cross-reference errors between different log files if the error stack spans server-side and client-side components.<br />

Stack: Used in conjunction with the GUID to determine the source of the error and the call stack that the error was passed through before being reported. A value of 1 indicates the source of the error, and the highest value is the point at which it was reported to the user. Again, these sequences can span log files.<br />

Date Time: The date and time at which the error occurred. The time is taken from the machine where the component represented in the current log entry is running.<br />

Component: The name of the component represented in the current log entry.<br />

Module: The code module in which the error occurred.<br />

File: The source file name.<br />

Version Information: The version of the component represented in the current log entry.<br />

Procedure: The procedure within the file where the error occurred.<br />

Line Number: The line number within the procedure where the error occurred. This enables a developer to trace exactly which call caused the error and, in conjunction with the error number and error message, gives a high degree of detail about the problem.<br />

Source: The origin of the error. May or may not be within IBM Cognos components.<br />

Error Number: The identifier for the error condition.<br />

Error Description: A description of the error which has occurred.<br />


User Domain/User Name: The user domain and user name under which the component represented in the current log entry was executing.<br />

Machine Domain/Machine Name: The machine domain and machine name on which the component represented in the current log entry was executing.<br />

Previous User Domain/Previous User Name: The domain that the previous user was logged on to, and the previous user name.<br />

Previous Machine Domain/Previous Machine Name: The machine from which the call was made to the current component. This is the indicator to go and look for error logs on that machine, where it may be possible to find corresponding entries (matched on GUID) lower down the call stack. It may also provide clues to other errors that occurred prior to the issue being investigated.<br />

Process ID: The current process ID.<br />

Thread ID: The current thread ID.<br />

It is imperative that these logs are provided to development when reporting problems.<br />
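Because entries for one error share a GUID and are ordered by their Stack value, logs gathered from several machines can be merged and grouped programmatically. This Python sketch assumes the field order listed above (GUID first, Stack second), tab separation, and no header row:<br />

```python
import csv
from collections import defaultdict

def group_log_entries(path):
    """Group PlanningErrorLog.csv entries by GUID and order each group
    by its Stack value (1 = where the error originated). Assumes a
    tab-separated file with GUID and Stack as the first two fields."""
    groups = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) < 2:
                continue
            groups[row[0]].append((int(row[1]), row))
    # Sort each GUID's entries so the originating entry comes first.
    return {guid: [row for _, row in sorted(entries)]
            for guid, entries in groups.items()}
```

Running this over the logs collected from each affected machine groups the call-stack entries for one error together, which is the manual cross-referencing process described above.<br />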

Using the LogFetcher Utility<br />

The LogFetcher utility should only be used on the advice of IBM Cognos Resource Center.<br />

The LogFetcher utility retrieves planning log files from multiple machines. It retrieves the following<br />

logs:<br />

● PlanningErrorLog: general errors. This is typically the first log to look at if you have a problem.<br />

● PlanningTimer: timer files. These files are present if timing has been enabled.<br />

● AnalystLog: Analyst errors.<br />

● IISLog: Web connectivity or download problems.<br />

● JLog: J server errors, including errors relating to data, links, and calculations.<br />

Steps<br />

1. Run epLogFetcher.exe from installation_location\Cognos\c8\bin\<br />

2. Right-click in the top panel and click Add.<br />

3. Enter the search criteria for the log files:<br />


Machine to Search (or IP Address): Enter the machine name or IP address of the machine with the log files. To search the local machine, enter localhost.<br />

Select Protocol: Select HTTP if you are looking for components on the Administration server. Select COM if your Administration Console is on a separate machine from the Administration server and you are searching locally.<br />

File to retrieve: Select one of PlanningErrorLog, PlanningTimer, AnalystLog, IISLog, or JLog.<br />

Working Folder: Enter or browse for a folder on your local machine to retrieve the files to.<br />

4. Click Add. This adds the search criteria to the top panel. Repeat steps 2 to 4 until you have<br />

added all the log files you need.<br />

5. To start the search, select the lines containing the search criteria and click View File(s).<br />

Tip: You can do this one line at a time, or you can select multiple lines by holding down CTRL<br />

and clicking. The results of the search are displayed in the lower panel.<br />

6. In the lower panel select the files you want to bring into the working folder and click Get File(s).<br />

If you retrieve two or more files with the same name into the same working folder, a number is appended to the file names.<br />

This tool finds IIS logs only if they are in the default path. It cannot retrieve logs from remote client machines.
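Conceptually, the retrieval step copies matching log files from each machine into the working folder, appending a number when two retrieved files share the same name. The sketch below illustrates that behavior only; the function and paths are hypothetical and are not part of the LogFetcher utility:

```python
import shutil
from pathlib import Path

def fetch_logs(sources: list[Path], working_folder: Path) -> list[Path]:
    """Copy each source log into the working folder, appending a
    number when two retrieved files would share the same name."""
    working_folder.mkdir(parents=True, exist_ok=True)
    fetched = []
    for src in sources:
        dest = working_folder / src.name
        n = 1
        # e.g. a second PlanningErrorLog.csv becomes PlanningErrorLog1.csv
        while dest.exists():
            dest = working_folder / f"{src.stem}{n}{src.suffix}"
            n += 1
        fetched.append(Path(shutil.copy(src, dest)))
    return fetched
```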


Appendix F: Illegal Characters<br />

The following ASCII characters are not allowed in e.List item and user names, e.List item and user<br />

captions, user logons, and user email addresses.<br />

They are also not allowed in dimension names or in namespace names.<br />

These are the non-printing characters below ASCII code 32.<br />

Decimal / Char / Description:<br />

● 0 NUL: Null<br />
● 1 SOH: Start of heading<br />
● 2 STX: Start of text<br />
● 3 ETX: End of text<br />
● 4 EOT: End of transmission<br />
● 5 ENQ: Enquiry<br />
● 6 ACK: Acknowledge<br />
● 7 BEL: Bell<br />
● 8 BS: Backspace<br />
● 9 TAB: Horizontal tab<br />
● 10 LF: NL line feed, new line<br />
● 11 VT: Vertical tab<br />
● 12 FF: NP form feed, new page<br />
● 13 CR: Carriage return<br />
● 14 SO: Shift out<br />
● 15 SI: Shift in<br />
● 16 DLE: Data link escape<br />
● 17 DC1: Device control 1<br />
● 18 DC2: Device control 2<br />
● 19 DC3: Device control 3<br />
● 20 DC4: Device control 4<br />
● 21 NAK: Negative acknowledge<br />
● 22 SYN: Synchronous idle<br />
● 23 ETB: End of transmission block<br />
● 24 CAN: Cancel<br />
● 25 EM: End of medium<br />
● 26 SUB: Substitute<br />
● 27 ESC: Escape<br />
● 28 FS: File separator<br />
● 29 GS: Group separator<br />
● 30 RS: Record separator<br />
● 31 US: Unit separator
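The check for these characters can be sketched as follows (a hypothetical helper for illustration, not part of the product):

```python
def has_illegal_control_chars(name: str) -> bool:
    """Return True if the name contains any non-printing
    character below ASCII code 32 (NUL through US)."""
    return any(ord(ch) < 32 for ch in name)

# Names containing tabs, line feeds, and so on are rejected.
print(has_illegal_control_chars("Cost Centre\t01"))  # True
print(has_illegal_control_chars("Cost Centre 01"))   # False
```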


Appendix G: Default Options<br />

The following sections describe the default options for a <strong>Contributor</strong> application.<br />

Grid Options<br />

In the Grid Options, you can set the following options (the default value is shown after each option):<br />

● Set breakback option: All cubes on<br />
● Saved data: Black<br />
● Typed data not entered: Green<br />
● Data entered but not saved: Blue<br />
● Allow multi e.List item views: Off<br />
● Allow slice and dice: On<br />
● Recalculate after every cell change: Off<br />

Application Options<br />

In Application Options you can set the following options:<br />

● History tracking: Action time stamps and errors<br />
● Cut-down models: No cut-down models<br />
● Allow reviewer edit: Off<br />
● Allow bouncing: On<br />
● Prompt to send email when user takes ownership: Off<br />
● Use client-side cache: On<br />



● Prevent off-line working: Off<br />
● Prompt to send email on reject: On<br />
● Prompt to send email on save: Off<br />
● Prompt to send email on submit: Off<br />
● Web client status refresh rate: 5 minutes<br />
● Record Audit Annotations: Off<br />
● Annotations Import Threshold: 0 (all rows imported in a single transaction are recorded as a single entry)<br />
● Annotations Paste Threshold: 0 (all rows pasted in a single transaction are recorded as a single entry)<br />
● Display Audit Annotations in Web Client<br />
● XML Location and Filename: Defaults to the local temporary directory and the name of the Analyst model used to create the application.<br />

Admin Options<br />

You can configure the import and publish actions using the following options:<br />

● Datastore Version Number (DB_VERSION): -1 (All)<br />
● Import Block Size (IMPORT_BLOCK_SIZE): Blank<br />
● Import Location (IMPORT_LOCATION): Blank<br />
● Import Options (IMPORT_OPTIONS): Blank<br />
● Publish Options (PUBLISH_OPTIONS): Manage Indexing<br />
● Optimize Publish Performance: Off<br />
● Generate Scripts (GEN_SCRIPTS): No (for DBA). Note that if the DBAuthority key in the datastore is set to USER, GEN_SCRIPTS is set to true.<br />
● Table Only Publish Post-GTP (POST_GTP_TABLE_PUBLISH): No<br />
● Act as System Link Source (LINK_SOURCE): No<br />
● Display warning message on Zero Data: No<br />
● Base Language: EN<br />
● Scripts Creation Path: Blank<br />

Note that Admin Options are not visible to users when the DBAuthority key in the registry is not set to DBA.<br />

Go to Production Options<br />

You can set the following options prior to creating the production application:<br />

● Prevent Client-side reconciliation: Off<br />
● Copy development e.List item publish setting to production application: On<br />
● <strong>Planning</strong> Package Name: Name of the Package<br />
● Screen tip: Blank<br />
● Description: Blank<br />
● Overwrite the package access rights at the next Go To Production: On<br />

Go to Production Wizard Options<br />

You can set the following Go to Production Wizard options:<br />

● Back-up Datastore: On<br />
● Display invalid owners and editors: Off<br />
● Create <strong>Planning</strong> Package: On<br />
● Workflow States: Leave: On<br />
● Workflow States: Reset: Off<br />

Publish Options-View Layout<br />

You can set the following options to set the View Layout publish options and configure the publish<br />

datastore connection:<br />

● Publish Datastore: No Container Set<br />
● Do Not Populate Zero/Null/Empty Data: On<br />
● Publish only cells with writable access: Off<br />
● Use plain number formats: On<br />
● Remove all data before publishing new data: On<br />
● Include user annotations: On<br />
● Include audit annotations: Off<br />

Publish Options-Table Only Layout<br />

You can set the following options to set the Table-Only Layout publish options and configure the<br />

publish datastore connection:<br />

● Publish Datastore: name of publish datastore<br />
● Create columns with data types based on the ‘dimension for publish’: On<br />
● Only create the following columns: Off<br />
● Include Rollups: On<br />
● Include zero or blank values: Off<br />
● Prefix column names with data type: On<br />
● Include User Annotations: On<br />
● Include Audit Annotations: Off<br />
● Include Attached Documents: Off<br />

e.List<br />

When importing the e.List with just the compulsory columns in the file, you get the following<br />

defaults:<br />

● Publish: No<br />
● View Depth: All<br />
● Review Depth: All<br />

Rights<br />

When importing rights with just the compulsory columns in the file, you get a default of Submit.<br />

Access Tables<br />

If no access levels are set, the following defaults apply:<br />

● All cubes apart from assumption cubes have a global access level of Write.<br />

● Assumption cubes (cubes used to bring data into an application) have a global access level of Read.<br />

The following rules apply for an imported access table:<br />

● Name of e.List: Applies to the whole e.List if omitted.<br />

● AccessLevel: No Data.<br />

The base access level for rule-based access tables is Write.<br />

Delete Commentary<br />

You can set the following options:<br />

● Delete user annotations: Off<br />
● Delete audit annotations: Off<br />
● Delete attached documents: Off<br />
● Delete annotations before: Off<br />
● Delete any annotations containing text: Off


Appendix H: Data Entry Input Limits<br />

The data entry limits for IBM Cognos 8 <strong>Planning</strong> - Analyst and IBM Cognos 8 <strong>Planning</strong> - <strong>Contributor</strong><br />

are affected by a number of factors, such as the operating system, the datastore provider, and the computer hardware.<br />

Note: The limits described here are guidelines, and are not hard and fast rules.<br />

Limits For Text Formatted Cells<br />

The data entry limit for text formatted cells in Analyst, the <strong>Contributor</strong> Browser, and Analyst<br />

for Excel is 32K (32,767 characters). Note, however, that in some cases further limits are imposed<br />

by the datastore provider.<br />

For <strong>Contributor</strong> for Excel, the maximum number of characters is 911. Special characters, such as<br />

returns and tabs, are not supported. If you need to copy and paste multiple paragraphs of text into<br />

a cell from another document, remove the returns after each paragraph before you copy and paste<br />

the text. Otherwise, <strong>Contributor</strong> for Excel truncates the incoming text after the first paragraph.<br />

For annotations, the maximum number of characters is 3844. For attached documents, the<br />

maximum number of characters in the comments section is 50.<br />
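The preparation step for <strong>Contributor</strong> for Excel described above can be sketched as follows (the helper name is illustrative, not part of the product; the 911-character limit is from the text above):

```python
def prepare_text_for_excel_cell(text: str, limit: int = 911) -> str:
    """Collapse returns, tabs, and runs of whitespace to single
    spaces, then enforce the 911-character limit, so pasted text
    is not truncated after the first paragraph."""
    flattened = " ".join(text.split())
    return flattened[:limit]
```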

View Publish<br />

● SQL Server = unlimited<br />

● UDB = unlimited<br />

Note that the publish views cast text down to a varchar: SQL Server = 8000, UDB = 1500 (that is,<br />

you see only the first 8000 characters in the SQL Server view).<br />

● Oracle = 4000 characters<br />

Table-only Publish<br />

Table-only publish limits vary by format.<br />

Text fields (epReportingText)<br />

● SQL Server = 8000<br />

● UDB = unlimited<br />

● Oracle = 4000<br />
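The provider limits listed above can be summarized in a small sketch (the limit values are from the lists above; the dictionary and helper themselves are hypothetical):

```python
# Maximum characters for a published text field, by datastore provider,
# per the limits listed above. None means no practical limit.
TEXT_FIELD_LIMITS = {
    "sqlserver": 8000,  # epReportingText on SQL Server
    "udb": None,        # DB2 UDB: unlimited
    "oracle": 4000,     # Oracle: 4000 characters
}

def truncate_for_publish(text: str, provider: str) -> str:
    """Trim text to fit the target provider's text field."""
    limit = TEXT_FIELD_LIMITS[provider]
    return text if limit is None else text[:limit]
```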

Limits for Numerical Cells<br />

The following limits are for Analyst and <strong>Contributor</strong> numerical cells.<br />


Numerical Cells in Analyst<br />

There is no limit to the number of characters that you can enter in a numeric cell in Analyst.<br />

However, when a number is too large to fit in the cell, it is displayed in scientific format, for<br />

example, 2345628363E205.<br />

Numerical Cells in <strong>Contributor</strong><br />

You can enter up to 60 characters in a numerical formatted cell in the <strong>Contributor</strong> Web client.


Glossary<br />

access tables<br />

In <strong>Contributor</strong>, controls access to cells in cubes, whole cubes, and assumption cubes.<br />

accumulation D-links<br />

D-links that consolidate data from a source D-cube to a D-cube based on text data.<br />

administration job<br />

An administration task that runs on job servers and is monitored by the <strong>Contributor</strong> <strong>Administration</strong><br />

Console. These tasks are commonly referred to as jobs. Some examples of jobs are reconcile, publish,<br />

cut-down models, links.<br />

administration link<br />

A link that enables an administrator to move data between <strong>Contributor</strong> applications. An administration link can contain multiple applications and cubes as the sources and targets of the link. A<br />

link can contain multiple elements which target either the development or the production application.<br />

<strong>Administration</strong> links run using the job architecture and so are scalable.<br />

administration machine<br />

In IBM Cognos <strong>Planning</strong>, the computer that is used to operate the <strong>Contributor</strong> <strong>Administration</strong><br />

Console.<br />

administration server<br />

In IBM Cognos <strong>Planning</strong>, the server that contains the planning components package (COM+<br />

package) and where control of the online application is maintained. You connect to this machine<br />

when you first run the <strong>Contributor</strong> <strong>Administration</strong> Console.<br />

application<br />

In IBM Cognos <strong>Planning</strong>, a <strong>Contributor</strong> application. <strong>Contributor</strong> applications are used for the<br />

collection and review of data from hundreds or thousands of users over the Web. One application can be<br />

used by many users in different locations at the same time.<br />

Application server<br />

See Job Server.<br />

assumption cube<br />

In IBM Cognos <strong>Planning</strong>, a cube that contains data that is moved into the <strong>Contributor</strong> application<br />

when the application is created or synchronized. It does not contain the e.List. Therefore, data<br />

applies to all e.List items, and is not writable. The data it contains is often named "assumption<br />

data."<br />


A-table<br />

In Analyst, an allocation table that shows how two lists correspond. It is useful for transferring<br />

data when no character matches are possible between lists of items.<br />

BiF<br />

Built in Function. In IBM Cognos <strong>Planning</strong> a BiF is a special calculation formula that was set up<br />

specifically for planning. For example, depreciation, discounted cashflow, forecasting using different<br />

drivers, and stock purchase prediction based on future sales.<br />

bounce<br />

In IBM Cognos <strong>Planning</strong>, a term used to refer to the removal of the currently editing owner of an<br />

e.List item in the <strong>Contributor</strong> Web client. A planner or reviewer may "bounce" the owner.<br />

commentary<br />

In IBM Cognos <strong>Planning</strong>, commentary represents any additional information attached to <strong>Contributor</strong><br />

cells, tabs, or e.List items, including both user annotations and attached files. You can use adminis-<br />

tration links, system links and local links to copy commentary.<br />

contribution<br />

In IBM Cognos <strong>Planning</strong>, data that is entered into an e.List in the <strong>Contributor</strong> application.<br />

<strong>Contributor</strong> <strong>Administration</strong> Console<br />

A tool which enables administrators to publish an Analyst business model to the Web, manage<br />

access settings and model distribution, and configure the user's view of the model.<br />

cube<br />

A physical data source containing a multidimensional representation of data. A cube contains<br />

information organized into dimensions and optimized to provide faster retrieval and navigation in<br />

reports. In IBM Cognos <strong>Planning</strong>, a cube (see also D-Cube) corresponds to a tab on <strong>Contributor</strong><br />

client user interface.<br />

current owner<br />

In <strong>Contributor</strong>, the person who is editing, or who last opened, an e.List item for edit.<br />

cut-down models<br />

In IBM Cognos <strong>Planning</strong>, customized copies of the master model definition that have been cut down<br />

to include only the specific elements required for a particular e.List item.<br />

datastore<br />

In IBM Cognos <strong>Planning</strong>, the location where one or more <strong>Contributor</strong> applications are stored. A<br />

datastore contains the information needed to connect to a database supporting the <strong>Contributor</strong><br />

applications.


D-cube<br />

In IBM Cognos <strong>Planning</strong>, a multi-page spreadsheet made up of two or more dimensions. A D-cube<br />

must contain at least two dimensions. In <strong>Contributor</strong> a D-cube is referred to as a cube.<br />

dimension<br />

In IBM Cognos <strong>Planning</strong>, the rows, columns, and pages of a cube are created from dimensions.<br />

Dimensions are lists of related items such as Profit and Loss items, months, products, customers,<br />

and cost centers. Dimensions also contain all the calculations. One dimension can be used by many<br />

cubes.<br />

In IBM Cognos 8 BI, a dimension is a broad grouping of descriptive data about a major aspect of<br />

a business, such as products, dates, or locations. Each dimension includes different levels of members<br />

in one or more hierarchies and an optional set of calculated members or special categories.<br />

D-link<br />

In Analyst, a link that copies information in and out of cubes, and sometimes to and from text or<br />

ASCII files.<br />

D-list<br />

An alternative term for dimension.<br />

D-list format<br />

Lets you enter text from another D-List in a row or a column. The format may be used in database-type functions to consolidate data in a similar manner to query-style reports.<br />

drill down<br />

In IBM Cognos <strong>Planning</strong>, drill down is a technique used to analyze D-Cube data that was imported<br />

by a D-Link. You can drill down on any single cell in a D-Cube. If the cell contains data transferred<br />

by a D-Link, drill down opens a view of the source data. If the data was imported from another D-Cube, drill down opens the appropriate selection from the source D-Cube. If the data was imported<br />

from an external source (a mapped ASCII file or an ODBC database), drill down extracts the relevant<br />

data from the source file and displays it in a special drill-down results dialog box.<br />

In IBM Cognos 8 BI, drill down refers to the act of navigating from one level of data to a more<br />

detailed level. The levels are set by the structure of the data. See also drill up.<br />

e.List<br />

The basis for the structure of a <strong>Contributor</strong> application. An e.List is a hierarchical dimension which<br />

typically reflects the structure of the organization (for example, cost centers and profit centers).<br />

editor<br />

In IBM Cognos <strong>Planning</strong>, a planner or reviewer who is editing a contribution.<br />

extensions<br />

In IBM Cognos <strong>Planning</strong>, features that extend the functionality of the <strong>Contributor</strong> <strong>Administration</strong> Console and<br />

Classic Web Client. There are two types of extensions: Admin Extensions and Client Extensions.<br />


Admin Extensions run in the <strong>Administration</strong> Console. Client Extensions are activated from the tool<br />

options on the Classic <strong>Contributor</strong> Grid.<br />

file map<br />

In Analyst, a file map tells the program how to split an ASCII or text file into columns of data. A<br />

file map puts in the divisions, or breaks, between one column of numbers and another. It defines<br />

the start point and width of each column of data within an ASCII file, and denotes whether the<br />

column is a numeric, text, or date field. If there is only one column, a file map is superfluous. File<br />

maps are always necessary when using an ASCII file as the source for a D-Link.<br />

Get Data<br />

In IBM Cognos <strong>Planning</strong>, a command in the Web client that loads the screen that displays local<br />

links and system links.<br />

go to production<br />

In IBM Cognos <strong>Planning</strong>, a process in the <strong>Contributor</strong> <strong>Administration</strong> Console that takes the<br />

development application and creates the live production application.<br />

grid<br />

In IBM Cognos <strong>Planning</strong>, a tabular form for viewing and entering data.<br />

GUID<br />

Global Unique Identifier. A unique internal reference for items in a model. For example, when you<br />

add a dimension item, this item is assigned a GUID.<br />

hold<br />

In IBM Cognos <strong>Planning</strong>, a function that protects a cell against breakback.<br />

import block<br />

In IBM Cognos <strong>Planning</strong>, a package of data from Analyst or an external system that is validated<br />

and prepared for import into a <strong>Contributor</strong> application. The import block is imported into the<br />

<strong>Contributor</strong> application datastore via a reconcile job.<br />

import link<br />

A function used in Analyst to update the items in a dimension on a regular basis from a source file<br />

or database.<br />

job server<br />

In IBM Cognos <strong>Planning</strong>, a machine that runs the administration jobs. There may be multiple job<br />

servers. A job server is sometimes referred to as an application server.<br />

library<br />

In IBM Cognos <strong>Planning</strong>, the storage location of the model. The library includes a group of connected<br />

Analyst objects: macros, reports, D-Links, selections, D-Cubes, maps, A-Tables, D-Lists, and formats.<br />

A library is similar to a Windows directory.


local links<br />

In IBM Cognos <strong>Planning</strong>, a link defined and run by a user in the Web client.<br />

lock<br />

In IBM Cognos <strong>Planning</strong>, a function that prevents data being entered into cells whether by typing<br />

or via a D-Link.<br />

lookup D-links<br />

In IBM Cognos <strong>Planning</strong>, D-Links that look up data from a source D-Cube based on text data. A<br />

lookup D-Link uses a database D-Cube as a target.<br />

macros<br />

In IBM Cognos <strong>Planning</strong>, a single object defined by an administrator to automate a series of<br />

<strong>Administration</strong> tasks in <strong>Contributor</strong>. Each task is known as a step. In Analyst, a set of commands<br />

that have been recorded and grouped together as a single command, which is used to automatically<br />

complete a list of instructions in one step.<br />

match descriptions<br />

In IBM Cognos <strong>Planning</strong>, used to automatically match source and target dimension items with the<br />

same name. In addition, match descriptions can be used to perform an allocation by date.<br />

maximum workspace<br />

(MAXWS) The amount of memory reserved for Analyst. May be changed to allow larger models<br />

to run more effectively.<br />

model<br />

A physical or business representation of the structure of the data from one or more data sources.<br />

A model describes data objects, structure, and grouping, as well as relationships and security.<br />

In IBM Cognos 8 BI, a design model is created and maintained in Framework Manager. The design<br />

model or a subset of the design model must be published to the IBM Cognos 8 server as a package<br />

for users to create and run reports.<br />

In IBM Cognos <strong>Planning</strong>, a model is a group of D-Cubes, D-Lists, D-Links, and other objects stored<br />

in a library. A model may reside in one or more libraries, with a maximum of two for <strong>Contributor</strong>.<br />

namespace<br />

For authentication and access control, a configured instance of an authentication provider. Allows<br />

access to user and group information.<br />

In XML, a collection of names, identified by a URI reference, which are used in XML documents<br />

as element types and attribute names.<br />

In Framework Manager, namespaces uniquely identify query items, query subjects, and so on. You<br />

import different databases into separate namespaces to avoid duplicate names.<br />


offline grid<br />

In IBM Cognos <strong>Planning</strong>, the application that is used to access a section of an offline <strong>Contributor</strong> application. The purpose is to enable users to enter or view data while there is no network connection.<br />

owner<br />

In <strong>Contributor</strong>, a user who is assigned to an e.List item through the Rights screen and is permitted<br />

to edit or review it. These rights may be directly assigned, or may be inherited.<br />

planner<br />

In IBM Cognos <strong>Planning</strong>, a person who enters data in the <strong>Contributor</strong> application in the Web client.<br />

production application<br />

In IBM Cognos <strong>Planning</strong>, the version of the <strong>Contributor</strong> application seen by the Web-client user.<br />

The version of the <strong>Contributor</strong> application that is seen in the <strong>Contributor</strong> <strong>Administration</strong> Console<br />

is the development application.<br />

protect<br />

In IBM Cognos <strong>Planning</strong>, a function that is used to prevent data from being typed into a cell.<br />

However, data can still be transferred into a protected cell via a D-Link.<br />

publish<br />

In IBM Cognos 8 BI, refers to exposing all or part of a Framework Manager model or Transformer<br />

PowerCube, via a package, to the IBM Cognos 8 server, so that it can be used to create reports and<br />

other content.<br />

In IBM Cognos <strong>Planning</strong>, refers to a function that is used to copy the data from <strong>Contributor</strong> or<br />

Analyst to a datastore, typically so it can be used for reporting purposes.<br />

publish container<br />

In IBM Cognos <strong>Planning</strong>, a datastore container created specifically to publish data to.<br />

reconciliation<br />

In IBM Cognos <strong>Planning</strong>, a process that ensures that the copy of the <strong>Contributor</strong> application that<br />

the user accesses on the Web is up to date, for example, all data is imported. Reconciliation takes<br />

place after Go to Production has run and a new production application is created.<br />

reviewer<br />

In IBM Cognos <strong>Planning</strong>, a person who reviews the submissions of reviewers or planners.<br />

rights<br />

In <strong>Contributor</strong>, assigning rights enables administrators to determine what users can do in a <strong>Contributor</strong> application. Rights determine whether a user can view, edit, review, and submit data.


saved selections<br />

In <strong>Contributor</strong>, dynamic groups of items from a dimension or e.List. When used in conjunction<br />

with access tables, saved selections provide a high level of control over access to cells.<br />

In Extensions, sets of data configured during an export or refresh. A user can choose a saved<br />

selection and update just the data without reconfiguring the report or export criteria.<br />

In Analyst, sets of data used to save a specific D-Cube orientation, including a selection of rows,<br />

columns, and pages for later use. The selected items, sort order, and slice of the D-Cube are all<br />

saved in a named selection.<br />

synchronize<br />

In <strong>Contributor</strong>, a function used to update all cubes, links, and so on in an application when the<br />

underlying objects in Analyst change. Changes include renaming dimensions, adding, deleting, or<br />

renaming dimension items.<br />

system links<br />

In <strong>Contributor</strong>, a link that is defined by the <strong>Contributor</strong> administrator and run by a user in the<br />

Web client. This is part of the Get Data functionality in the Web client.<br />

table-only layout<br />

In IBM Cognos <strong>Planning</strong>, a publish schema that consists of a table-only layout, and is particularly<br />

suitable for the Generate Framework Manager Model extension.<br />

view layout<br />


In IBM Cognos <strong>Planning</strong>, a publish schema that consists of a layout of views over text values.<br />



Index<br />

Symbols<br />

.cpf, 303<br />

A<br />

access<br />

<strong>Contributor</strong>, 88<br />

access control, 29<br />

access levels, 123<br />

definition, 121<br />

hidden, 121<br />

loss of access to e.List items, 255<br />

no data, 121<br />

no data settings and block sizes, 123<br />

planner data entries, 137<br />

reading, 121<br />

updating no data settings, 122<br />

writing, 121<br />

access permissions<br />

users, 32<br />

access rights<br />

go to production options, 82<br />

granting, 39<br />

access tables, 22, 23, 119<br />

changing, 126, 136<br />

cut-down models, 130, 140<br />

default, 393<br />

definition, 397<br />

editing, 125<br />

exporting, 129<br />

formatting, 127<br />

importing, 126<br />

importing data, 137<br />

large, 129<br />

memory usage, 130<br />

performance issues, 129<br />

rules, 120<br />

viewing imported, 128<br />

accumulation D-links<br />

definition, 397<br />

act as system link source, 79<br />

adding<br />

applications, 60<br />

e.List items, 102<br />

Admin extensions, 301, 329<br />

Generate Framework Manager Model, 306<br />

running, 301<br />

<strong>Administration</strong> Console<br />

actions that run jobs, 51<br />

performance issues, 129<br />

administration jobs<br />

definition, 397<br />

administration links, 147<br />

allocation table, 155<br />

<strong>Contributor</strong>, 22<br />

CPU usage, 159<br />

creating, 149<br />

definition, 397<br />

exporting, 157<br />

importing, 157<br />

macros, 23<br />

model changes, 160<br />

moving commentary, 149<br />

rights, 39<br />

running, 157<br />

setting source batch size, 160<br />

setting target batch size, 161<br />

synchronize, 156<br />

troubleshooting memory issues with detailed fact query subject, 169<br />

troubleshooting tuning settings, 161, 162<br />

tuning, 158<br />

upgrading, 329<br />

using existing, 161<br />

validate, 156<br />

administration machines<br />

definition, 397<br />

administration servers<br />

definition, 397<br />

administrators, 25<br />

multiple, 22<br />

admin options, 79<br />


allocation table in <strong>Administration</strong> Link, 155<br />

allow automatic cab downloads and installations, 71<br />

allow bouncing, 74<br />

allow bouncing example, 77<br />

allow multi-e.List item views, 72<br />

allow reviewer edit, 74<br />

allow slice and dice, 72<br />

Analyst<br />

Generate Framework Manager Model wizard, 306<br />

Analyst - Contributor links, 347<br />

upgrading, 329<br />

annotations<br />

deleting, 289<br />

display audit annotations in Web client, 74<br />

annotations import threshold, 74<br />

annotations paste threshold, 74<br />

anonymous access, 35<br />

application containers<br />

rights, 39<br />

application details<br />

viewing, 79<br />

application folders, 68<br />

monitoring, 59<br />

application options, 74<br />

applications<br />

adding, 60<br />

creating, 24, 60, 65<br />

definition, 397<br />

information, 70<br />

linking, 89<br />

monitored, 58<br />

synchronizing, 179<br />

upgrading, 60<br />

application tabs<br />

translating, 187<br />

application XML, 79<br />

Application XML<br />

issues, 380<br />

assign access rights, 39<br />

assigning rights, 24<br />

assumption cube<br />

definition, 397<br />

assumption cubes, 119, 123, 126<br />

A-table<br />

definition, 397<br />

attach documents, 290<br />

attached documents, 290<br />

configuration, 71<br />

configuring the properties, 290<br />

maximum number, 290<br />

publishing, 291<br />

audit annotations, 289<br />

recording, 74<br />

authentication, 29<br />

authentication providers, 29, 35<br />

automation, 23<br />

B<br />

backups, 357<br />

base language, 79<br />

base models, 306<br />

best practices, 13<br />

BiFs<br />

definition, 398<br />

supported in Contributor, 343<br />

binary large objects, See large objects (LOBs, BLOBs, CLOBs)<br />

BLOBs, See large objects (LOBs, BLOBs, CLOBs)<br />

block sizes and no data access settings, 123<br />

BMTReport.log, 361<br />

bounce<br />

definition, 398<br />

bouncing<br />

allowing, 74<br />

example, 77<br />

breakback<br />

setting, 72<br />

business cases, 315<br />

business logic<br />

defining, 231<br />

business rules<br />

defining, 237<br />

enforcing, 231<br />

planning for, 232<br />

Business Viewpoint Client<br />

managing Contributor master dimensions, 314<br />

C<br />

cab downloads<br />

allowing, 71<br />

caching<br />

Contributor data for IBM Cognos 8, 306


Calculation Engine (JCE)<br />

error logs, 382<br />

capabilities, 34<br />

capacity planning, 359<br />

cascaded models, 146<br />

cascade rights, 38<br />

changing<br />

applications and translations, 185<br />

e.List, 136<br />

character large objects, See large objects (LOBs, BLOBs, CLOBs)<br />

client-executed links, 144<br />

client extensions, 300<br />

configuring, 301<br />

extension groups, 300<br />

client-side cache, 74<br />

client-side reconciliation, 54<br />

CLOBs, See large objects (LOBs, BLOBs, CLOBs)<br />

CM-REQ-4159, 362<br />

code pages, 191<br />

Cognos 8, See IBM Cognos 8<br />

Cognos namespace, 29<br />

color<br />

selecting for changed values, 72<br />

column headings<br />

EListItemCaption, 100<br />

ELIstItemIsPublished, 101<br />

EListItemName, 100<br />

EListItemOrder, 100<br />

EListItemParentName, 100<br />

EListItemReviewDepth, 101<br />

EListItemViewDepth, 100<br />

commentaries<br />

deleting, 289<br />

commentary<br />

breakback considerations, 291<br />

copy, 291<br />

cumulative, 291<br />

definition, 398<br />

deleting, 290<br />

moving with administration links, 149<br />

moving with system links, 149<br />

components tabs<br />

translation, 187<br />

concurrency, 359<br />

condition<br />

specifying for event, 226<br />

configure application, 70<br />

configuring attached document properties, 290<br />

configuring the Web client rights, 39<br />

contribution e.List items, 109<br />

contributions, 26, 87<br />

definition, 398<br />

Contributor add-ins<br />

Microsoft Excel, 91<br />

Contributor Administration Console<br />

definition, 398<br />

contributor-only cubes, 78<br />

copy<br />

import, 175<br />

copy commentary, 291<br />

copy development e.List item publish setting to production application, 82<br />

copying<br />

Analyst Contributor links, 350<br />

copyright material<br />

printing, 15<br />

creating<br />

application, 65<br />

applications, 39, 60<br />

applications using a script, 39<br />

connections from IBM Cognos 8 BI products, 306<br />

cube help, 375<br />

datasource connections, 303<br />

detailed fact query subject, 167<br />

Framework Manager projects, 165, 303<br />

planning packages, 244<br />

Planning tables, 47<br />

PowerCubes, 309<br />

production applications, 25<br />

publish containers, 39<br />

scripts, 39<br />

source files, 173<br />

system links, 163<br />

Web sites, 25<br />

credentials, 44<br />

cube dimension order<br />

setting, 72<br />

cube instructions, 78<br />

cube order<br />

setting, 71<br />

cubes, 21, 119<br />

access tables, 124<br />

changing, 250<br />

creating help, 375<br />

definition, 398<br />

detailed help, 375<br />

importing data, 173<br />

no access tables, 126<br />

types, 119<br />

cumulative commentary, 291<br />

current owners, 95<br />

definition, 398<br />

cut-down models, 74, 138<br />

access tables, 140<br />

cutting down a D-List to no data does not affect dimension size, 140<br />

definition, 398<br />

examples, 142<br />

Go to Production process, 256<br />

impact from access tables, 130<br />

languages, 246<br />

limitations, 138<br />

options, 139<br />

processes, 138<br />

restrictions to cutting down dimensions, 140<br />

translations, 139<br />

cut down to no data does not affect dimension size, 140<br />

D<br />

data<br />

loss from changes, 179<br />

moving using links, 145<br />

database object names, 266<br />

databases<br />

backing up, 357<br />

object names, 280<br />

privileges, 356<br />

data blocks, 245<br />

dataCacheExpirationThreshold parameter, 306<br />

data dimensions for publish, 263<br />

data entry<br />

validating, 231<br />

data entry limits, 395<br />

numerical cells, 395<br />

data loads, 360<br />

data source connections, 303<br />

creating, 306<br />

data sources<br />

TM1, 147<br />

datastores<br />

definition, 398<br />

rights, 39<br />

datastore servers, 48<br />

information, 49<br />

data validation, 231<br />

and e.List items, 240<br />

defining business rules, 237<br />

defining fail actions, 239<br />

impact of aggregations, 233<br />

setting up, 232<br />

setting up D-Cubes in Analyst, 233<br />

DB2 import utility, 359<br />

D-Cubes<br />

definition, 398<br />

setting up pre- and post-aggregation ordering, 233<br />

DDL scripts, 356<br />

delete<br />

job server cluster, 56<br />

delete annotations<br />

rights, 39<br />

Delete Commentary, 394<br />

deleting<br />

annotations, 289<br />

annotations for e.List items, 289<br />

commentaries, 289<br />

commentary, 290<br />

Contributor applications, 60<br />

e.List items, 104<br />

import queues, 176<br />

jobs, 55<br />

namespaces, 31<br />

server definitions, 60<br />

undefined items, 98<br />

deployment<br />

macro, 210<br />

deployment status, 172<br />

designing e.Lists, 24<br />

detailed fact query subject, 167<br />

memory usage, 169<br />

developing plans, 24


development applications, 247<br />

rights, 39<br />

development environment, 170<br />

dimensions<br />

changing, 252<br />

definition, 399<br />

D-Links, 341<br />

editing selections, 116<br />

saving selections, 115<br />

dimensions for publish<br />

selecting, 82<br />

dimensions for publishing<br />

rules for non-defined, 303<br />

disable job processing, 56<br />

display audit annotations in Web client, 74<br />

display warning message on zero data, 79<br />

D-Links, 22<br />

definition, 399<br />

designing Contributor model in Analyst, 339<br />

dimensions, 341<br />

D-List aggregations<br />

impact on data validation, 233<br />

D-List format<br />

definition, 399<br />

D-Lists<br />

definition, 399<br />

importing using IQDs, 312<br />

drill down<br />

definition, 399<br />

dynamic objects, 357<br />

E<br />

e.List item properties<br />

previewing, 293<br />

e.List items<br />

adding, 102<br />

configuring display number, 96<br />

deleting, 104<br />

multiple owners, 95<br />

reconciliation, 256<br />

reordering, 103<br />

e.Lists, 21, 93, 98, 106<br />

aggregation and data validation, 233<br />

associating validation rules, 240<br />

changes, 136<br />

default options, 393<br />

definition, 399<br />

designing, 24<br />

importing file examples, 99<br />

importing from Performance Applications, 312<br />

reconciliation, 104<br />

editor lagging, 255<br />

editors<br />

definition, 399<br />

EListItemCaption column heading, 100<br />

ELIstItemIsPublished column heading, 101<br />

EListItemName column heading, 100<br />

EListItemOrder column heading, 100<br />

EListItemParentName column heading, 100<br />

EListItemReviewDepth column heading, 101<br />

EListItemViewDepth column heading, 100<br />

email<br />

sending on save, 74<br />

sending on submit, 74<br />

e-mail, 63<br />

links, 378<br />

email character separator, 71<br />

E-mail function, 27<br />

environments, 170<br />

error messages<br />

errors<br />

importing the e.List and rights, 97<br />

out of memory when exporting during deployment, 173<br />

handling, 379<br />

logging, 383<br />

estimating model and data block size, 141<br />

eTrust SiteMinder namespace, 29<br />

event<br />

condition, 226<br />

event condition<br />

specifying, 226<br />

Everyone group, 37<br />

examples<br />

e.List files, 99<br />

importing data source files, 174<br />

rights file, 111<br />

Export for Excel extension, 312<br />

exporting<br />

access tables, 129<br />

Analyst library, 170<br />

application links, 170<br />

e.Lists, 101<br />

macros, 170<br />

model, 170<br />

rights, 101<br />

export tables, 270<br />

expression<br />

specifying for event condition, 226<br />

extending functionality, 21<br />

extensions<br />

definition, 399<br />

group, 300<br />

external namespaces<br />

eTrust SiteMinder, 29<br />

IBM Cognos Series 7, 29<br />

LDAP, 29<br />

Microsoft Active Directory, 29<br />

NTLM, 29<br />

SAP, 29<br />

F<br />

failure of reconciliation, 54<br />

file formats, 99<br />

file maps<br />

definition, 400<br />

filesys.ini, 47<br />

file types<br />

IQDs, 312<br />

fill and substitute mode, 353<br />

Filters<br />

importing from SAP BW, 167<br />

financial planning, 300<br />

finding, 98<br />

e.Lists, 98<br />

information, 14<br />

rights, 98<br />

finish screen, 256<br />

folders, 68<br />

fonts, 192<br />

force to zero option, 137<br />

formatting imported access tables, 127<br />

Framework Manager, 302<br />

creating and publishing a Framework Manager package, 166<br />

creating a project and import metadata, 165<br />

creating projects, 303<br />

model troubleshooting, 361<br />

model updating, 309<br />

G<br />

Generate Framework Manager Model extension<br />

accessing published data from IBM Cognos 8, 306<br />

generate scripts, 79<br />

generating<br />

Transformer models, 309<br />

Get Data<br />

definition, 400<br />

global administration<br />

rights, 39<br />

go to production<br />

definition, 400<br />

go to production options, 82<br />

access rights, 82<br />

planning package setting, 82<br />

Go to Production process, 243<br />

buttons, 27<br />

e.List items to be reconciled, 256<br />

finishing, 256<br />

importing data details, 254<br />

importing process, 177<br />

invalid owners and editors, 254<br />

model changes, 250<br />

options, 248<br />

rights, 39<br />

running, 248<br />

show changes screen, 249<br />

grid options, 72<br />

grids<br />

default, 389<br />

definition, 400<br />

group extensions, 300<br />

groups, 31<br />

GUID<br />

definition, 400<br />

validate, 113<br />

H<br />

help, 27<br />

adding, 78<br />

getting, 14<br />

translating, 191<br />

hidden items, 120<br />

history tracking, 74


hold<br />

definition, 400<br />

HTML formatting, 375<br />

hypertext links, 377<br />

I<br />

IBM Cognos 8, 302<br />

caching <strong>Contributor</strong> unpublished data, 306<br />

IBM Cognos 8 Business Intelligence studios<br />

connecting to data sources, 306<br />

IBM Cognos Performance Applications, 312<br />

IBM Cognos Resource Center, 14<br />

IBM Cognos Series 7 namespace, 29, 335<br />

illegal characters, 387<br />

images, 375<br />

import blocks<br />

definition, 400<br />

import block size, 79<br />

Import data details tab, 254<br />

import files<br />

prepared data blocks, 176<br />

preparing, 177<br />

testing, 177<br />

Import from IQD wizard<br />

Performance Applications, 312<br />

importing<br />

access tables, 126<br />

Analyst library, 170<br />

application links, 170<br />

copy process, 175<br />

data, 39, 143<br />

data into cubes, 173<br />

e.Lists, 39, 96, 101<br />

example, 174<br />

loading, 175<br />

macros, 23, 170<br />

models, 170<br />

multiple times, 149<br />

preparing, 176<br />

rights, 96<br />

rights file formats, 110<br />

SAP BW data, 167<br />

translated files, 190<br />

import links<br />

definition, 400<br />

import location, 79<br />

import options, 79<br />

import queues, 176<br />

incremental, 279<br />

incremental publish, 279<br />

information<br />

finding, 14<br />

inherited rights, 108<br />

integration, 299, 302<br />

IBM Cognos Contributor and IBM Cognos Controller, 312<br />

Invalid owners and editors tab, 254<br />

IQD files<br />

importing D-Lists and e.Lists, 312<br />

items tables for table-only layout, 267<br />

J<br />

jobs<br />

architecture, 358<br />

canceling, 54<br />

deleting, 55<br />

managing, 49, 52<br />

pausing, 54<br />

preparing import jobs, 359<br />

publishing, 53, 359, 360<br />

running, 25<br />

run order, 50<br />

securing, 51<br />

securing scheduled, 44<br />

types, 49<br />

job server clusters<br />

adding, 56<br />

rights, 39<br />

job servers, 56<br />

adding, 56<br />

adding objects, 58<br />

definition, 400<br />

rights, 39<br />

L<br />

lagging<br />

editor, 255<br />

large access tables, 129<br />

large objects (LOBs, BLOBs, CLOBs), 358<br />

LDAP namespace, 29<br />

libraries<br />

Analyst guidelines, 337<br />

definition, 400<br />

limitations, 363<br />

limitations when importing IBM Cognos packages, 363<br />

limit document size, 71<br />

limits for data entries, 395<br />

link access<br />

rights, 39<br />

linking<br />

to existing applications, 39<br />

linking to applications, 89<br />

linking to a publish container<br />

rights, 39<br />

link modes, 149<br />

links<br />

administration, 147, 149, 157<br />

Analyst - Contributor, 347<br />

changing, 252<br />

client run, 144<br />

local, 145<br />

memory usage, 351<br />

model designs, 146<br />

order, 148<br />

system, 144, 163<br />

using to move data, 145<br />

loading data, 175, 360<br />

LOBs, See large objects (LOBs, BLOBs, CLOBs)<br />

local links, 145<br />

lock<br />

definition, 400<br />

lock escalation, 358<br />

LOCKLIST setting, 357<br />

LogFetcher utility, 385<br />

lookup d-links<br />

definition, 401<br />

M<br />

macro<br />

definition, 401<br />

deployment, 210<br />

macros, 23<br />

administrator links, 219<br />

authentication, 43<br />

automating tasks, 193<br />

creating, 194<br />

definition, 401<br />

deleting commentary, 217<br />

development, 202<br />

executing a command line, 222<br />

executing an Admin extension, 218<br />

IBM Cognos Connection, 43<br />

importing access tables, 23, 206<br />

importing e.List and rights, 23<br />

importing e.Lists and rights, 208<br />

managing job servers, 199<br />

production, 211<br />

publishing, 23, 211<br />

rights, 42<br />

rights needed to transfer, 42<br />

running, 224<br />

securing scheduled, 44<br />

synchronizing, 23<br />

troubleshooting, 229<br />

upgrading, 329<br />

upload development model, 210<br />

maintaining the application<br />

rights, 39<br />

manage extensions<br />

rights, 39<br />

managing<br />

jobs, 49, 52<br />

sessions, 61<br />

managing Contributor master dimensions<br />

Business Viewpoint Client, 314<br />

match descriptions<br />

definition, 401<br />

matrix management, 147<br />

maximum number of attached documents, 290<br />

maximum workspace<br />

definition, 401<br />

MAXLOCKS setting, 357<br />

measures dimension, 303<br />

metadata, 357<br />

organizing data using Framework Manager, 303<br />

Microsoft Active Directory, 29<br />

Microsoft Excel<br />

design considerations, 311<br />

migrating applications, 170<br />

model<br />

designing Analyst model for Contributor, 337<br />

model and data block sizes, 141<br />

model changes screen, 250


model designs<br />

models<br />

using links, 146<br />

advanced changes, 182<br />

changes that impact publish tables, 262<br />

creating in Framework Manager, 303<br />

definition, 245, 401<br />

details, 68<br />

using Generate Framework Manager Model functionality, 306<br />

modify datastore connection details<br />

rights, 39<br />

modifying rights manually, 111<br />

monitored applications, 58<br />

Monitoring Console, 61, 172<br />

moving data using links, 145<br />

multi-administration roles, 22<br />

multi e.List item views, 72<br />

multiple access tables, 135<br />

multiple administrators, 22<br />

multiple owners<br />

e.List items, 95<br />

N<br />

namespace<br />

validate users, 113<br />

namespaces, 29<br />

definition, 401<br />

deleting, 31<br />

multiple, 29<br />

restoring, 31<br />

upgrade, 335<br />

See Also authentication providers<br />

naming conventions, 357<br />

navigation, 71<br />

No Data access level, 120<br />

NTLM namespace, 29<br />

numerical cells<br />

limits, 395<br />

O<br />

objects, 357<br />

offline<br />

store, 90<br />

offline grids<br />

definition, 401<br />

offline working<br />

preventing, 74<br />

Oracle, 48<br />

orientation, 72<br />

out of memory error when exporting during deployment, 173<br />

owners, 95<br />

definition, 402<br />

ownership<br />

e.List items, 112<br />

P<br />

parameters<br />

dataCacheExpirationThreshold, 306<br />

LOCKLIST, 357<br />

MAXLOCKS, 357<br />

percentages, 312<br />

performance<br />

CPU usage, 159<br />

model changes, 160<br />

tuning settings, 161, 162<br />

variables, 159<br />

permissions, 105, 356<br />

planner, 26<br />

rights, 109<br />

planner-only cubes, 78<br />

planners<br />

definition, 402<br />

Planning Administration Domain<br />

upgrading, 329<br />

Planning Contributor Users, 33<br />

Planning Data Service, 302<br />

planning package, 244, 248<br />

Framework Manager, 302<br />

Go to Production, 302<br />

Planning Rights Administrator, 33<br />

Planning tables<br />

creating, 47<br />

plans<br />

developing, 24<br />

post production tasks, 256<br />

precalculated summaries, 261<br />

prepared data blocks, 176<br />

preparing data, 176<br />

preproduction process, 256<br />

prevent client-side reconciliation, 82<br />

preventing<br />

client-side reconciliation, 55<br />

prevent offline working, 74<br />

preview data<br />

rights, 39<br />

previewing<br />

e.List item properties, 293<br />

e.List items, 104<br />

production workflow, 293<br />

properties, 293<br />

previewing the production workflow, 293<br />

printing copyright material, 15<br />

Print to Excel extension, 312<br />

privileges, 356<br />

production application<br />

definition, 402<br />

production<br />

tasks, 246<br />

production applications, 245<br />

rights, 39<br />

production environment, 170<br />

production workflow<br />

previewing, 293<br />

projects<br />

creating in Framework Manager, 303<br />

prompt to send email on reject, 74<br />

prompt to send email on save, 74<br />

prompt to send email on submit, 74<br />

prompt to send email when user takes ownership, 74<br />

properties<br />

previewing, 293<br />

protect<br />

definition, 402<br />

providers<br />

security, 29<br />

publish, 259, 279<br />

access rights, 260<br />

data dimensions, 263<br />

data types, 271<br />

export tables, 270<br />

hierarchy tables, 267<br />

items tables, 267<br />

layouts, 259<br />

scripts, 260<br />

select e.List items, 261<br />

table-only layout, 265<br />

publish containers<br />

definition, 402<br />

rights, 39<br />

publish data<br />

rights, 39<br />

publishing<br />

definition, 402<br />

macros, 23<br />

publishing attached documents, 291<br />

publish options, 79<br />

R<br />

read access, 120<br />

recalculate after every cell change, 72<br />

reconciliation, 54<br />

changes to access tables, 136<br />

definition, 402<br />

e.Lists, 104<br />

prevent client-side, 82<br />

record audit annotations, 74<br />

reject depth, 105<br />

related documentation, 13<br />

removing applications, 60<br />

rights, 39<br />

reordering<br />

e.List items, 103<br />

reporting directly from publish tables, 261<br />

reporting on live data, 302<br />

resetting development to production, 27<br />

restoring<br />

namespaces, 31<br />

review depth, 105<br />

review e.List items, 108<br />

reviewer edit<br />

allowing, 74<br />

reviewers, 26<br />

access levels, 137<br />

definition, 402<br />

rights, 108<br />

reviews, 87<br />

rights, 107<br />

assigning, 24<br />

default, 393<br />

definition, 402<br />

e.List items, 105<br />

file formats, 110


inherited, 108<br />

modifying, 111, 112<br />

reordering, 112<br />

submitting, 108<br />

summary, 107, 113<br />

user, 107<br />

roles, 31<br />

validate, 113<br />

row-level locking, 358<br />

rule sets<br />

associating to e.Lists, 240<br />

defining fail actions, 239<br />

running jobs, 25<br />

run order of jobs, 50<br />

S<br />

sample HTML text, 375<br />

SAP BW<br />

importing data, 167<br />

limitations, 363<br />

SAP namespace, 29<br />

saved selections, 22, 115<br />

definition, 402<br />

editing, 116<br />

Save function, 27<br />

saving application XML for support, 79<br />

scenario dimensions, 261<br />

scheduler credentials, 44<br />

jobs, 51<br />

script.sql, 69<br />

scripts creation path, 79<br />

searching<br />

e.Lists, 98<br />

rights, 98<br />

translating, 191<br />

securing jobs, 51<br />

security, 356<br />

access control, 29<br />

authentication, 29<br />

providers, 29<br />

upgrade, 335<br />

validate users, 113<br />

Web client settings, 89<br />

security overview, 34<br />

select color for changed values, 72<br />

send email on reject<br />

prompt, 74<br />

server call timeout, 380<br />

servers for datastores, 48<br />

server-side reconciliation, 54<br />

sessions<br />

managing, 61<br />

set an application on or offline<br />

rights, 39<br />

Set Offline function, 27<br />

Set Online function, 27<br />

setting data cache expiration, 306<br />

simplifying models, 312<br />

size of models, 68<br />

size of models and data blocks, 141<br />

slice and dice<br />

allowing, 72<br />

source files<br />

creating, 173<br />

SQL, 357<br />

SQL Server, 48<br />

static objects, 357<br />

stop job processing, 56<br />

submitting rights, 108<br />

synchronize<br />

definition, 403<br />

synchronize administration link, 156<br />

synchronizing<br />

advanced model changes, 182<br />

avoiding data loss, 180<br />

data loss, 179<br />

examples, 181<br />

Generate Scripts option, 180<br />

macros, 23<br />

rights, 39<br />

system administrator, 29<br />

system link, 79<br />

system links, 144<br />

creating, 163<br />

definition, 403<br />

using to move commentary, 149<br />

system locale, 191<br />

system settings, 44<br />

T<br />

table-level locking, 358<br />

table-only layouts<br />

definition, 403<br />

table-only publish layout, 265<br />

table-only publish post GTP, 79<br />

take ownership<br />

send email, 74<br />

test environment, 170<br />

text formatted cells<br />

data entry limits, 395<br />

timeout, 380<br />

TM1<br />

data sources, 147<br />

translating<br />

help, 191<br />

searches, 191<br />

strings, 187<br />

translation<br />

application tabs, 187<br />

assigning to users, 185<br />

changes, 185<br />

cycles, 185<br />

exporting files, 190<br />

importing and exporting files, 189<br />

importing files, 190<br />

rights, 39<br />

trees, 87<br />

troubleshooting<br />

Generate Framework Manager Model extension, 361<br />

importing from IBM Cognos package, 366<br />

importing IBM Cognos packages, 363<br />

macros, 229<br />

modeled data import, 366<br />

unable to change model design language, 362<br />

unable to connect to Oracle database, 361<br />

unable to create framework manager model, 361<br />

unable to retrieve session namespace, 362<br />

U<br />

underlying values, 312<br />

unowned items, 95<br />

unpublished Contributor data, 306<br />

unregistering<br />

namespaces, 31<br />

upgrading<br />

Admin extensions, 329<br />

administration links and macros, 329<br />

Analyst - <strong>Contributor</strong> links, 329<br />

applications, 60, 329<br />

Contributor web site, 336<br />

planning administration domain, 329<br />

rights, 39<br />

Web sites, 336<br />

what is not upgraded in Contributor, 329<br />

wizards, 332<br />

use client-side cache, 74<br />

user annotations, 289<br />

user models, 306<br />

users, 21, 31, 101<br />

V<br />

validate<br />

classes and permissions, 32<br />

loss of access to e.List items, 255<br />

validate, 113<br />

users, 113<br />

validate administration link, 156<br />

validation methods, 231<br />

version dimensions, 261<br />

view application details, 79<br />

view depth, 101, 106<br />

viewing<br />

imported access tables, 128<br />

view layout<br />

definition, 403<br />

view rights, 113<br />

W<br />

warning messages<br />

importing the e.List and rights, 97<br />

Web clients<br />

settings, 89<br />

web client settings, 71<br />

web client status refresh rate, 74<br />

Web sites, 87<br />

creating, 25<br />

upgrading, 336<br />

wizards<br />

Import from IQD wizard, 312<br />

workflow state definition, 295<br />

write access, 120


X<br />

XML<br />

default locations and filenames, 390<br />
