
Cognos TM1 Performance Review – on a budget!

Often I am asked to conduct a “performance review” of an implemented Cognos TM1 application “rather quickly” when, realistically, a detailed architectural review must be extensive and takes time. Generally, if time is limited, you can use the following suggestions as appropriate areas to focus on (until a formal review is possible):

  1. Locking and Concurrency. Ensure that there are no restrictions or limitations that prevent users from performing tasks in one part of the application because other users are “busy” exercising the application somewhere else; specifically, limit the exposure that would allow one user or feature to adversely affect another.
  2. Batch (TurboIntegrator) Processing Time. This is the time it takes for critical TurboIntegrator scripts to complete.
  3. Application Size. The overall size of the application should be reviewed to determine if it is “of a reasonable size” based upon the current and future expectations of the application (many factors can cause memory consumption to be less than optimal, but even a cursory check is recommended to identify any potential offenders).

Specific Review Areas

Based upon the time available and the objectives outlined above, you might proceed by looking at the following:

  • The TM1 server configuration settings
  • The number and dimensionality of the implemented cubes – specifically, the cubes currently consuming the largest amounts of memory
  • Implemented security (at a high level)
  • All TurboIntegrator processes that are part of critical application processing

Start with the Server Configuration

Take a quick look at the server configuration settings (tm1s.cfg) and follow up on any non-default settings – who changed them, and why? Do they make sense? Were they validated as having the expected effects?
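A handful of tm1s.cfg parameters come up again and again in these reviews. As a hedged illustration (the values below are examples only, not recommendations), entries like these are worth validating against the host hardware and the application’s workload:

# tm1s.cfg – illustrative entries worth validating (values are examples only)
[TM1S]
ServerName=FinanceModel
DatabaseDirectory=E:\TM1\FinanceModel\Data
PortNumber=12345
# Multi-threaded queries – was this sized deliberately for the host's CPU count?
MTQ=8
# Persistent feeders trade disk space for faster server start-up
PersistentFeeders=T
# Parallel interaction reduces read/write contention (locking)
ParallelInteraction=T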

Cubes and dimensions should be reviewed. Are there excessive views or subsets? Are there any cubes with an extraordinary number of dimensions? Has the dimension order been optimized? Are any dimensions particularly large?
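As a sketch of how such an inventory might be scripted, the Prolog below walks the }Cubes control dimension and logs each cube’s dimension count (this assumes TabDim returns an empty string once the index passes a cube’s last dimension; the other names are standard TM1 control objects):

# Prolog: log the dimensionality of every non-control cube
i = 1;
WHILE(i <= DimSiz('}Cubes'));
  sCube = DIMNM('}Cubes', i);
  # Skip control cubes (names beginning with '}')
  IF(SUBST(sCube, 1, 1) @<> '}');
    j = 1;
    WHILE(TabDim(sCube, j) @<> '');
      j = j + 1;
    END;
    LogOutput('INFO', 'Cube ' | sCube | ' has ' | NumberToString(j - 1) | ' dimensions');
  ENDIF;
  i = i + 1;
END;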

Security can be complex and compounded, or simple and straightforward. Given limited time, I usually check the number of roles (groups) versus the number of users (clients) – hint: never have more groups than clients – along with things like naming conventions and how security is maintained. In addition, I am always uneasy when I see cell-level security implemented.
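The groups-versus-clients check itself is only a few lines of TurboIntegrator script against the standard control dimensions:

# Prolog: compare the number of groups to the number of clients
nGroups = DimSiz('}Groups');
nClients = DimSiz('}Clients');
IF(nGroups > nClients);
  LogOutput('WARN', 'More groups (' | NumberToString(nGroups) | ') than clients (' | NumberToString(nClients) | ') - review the security design');
ENDIF;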

More considerations:

As part of a “quick review”, I recommend leveraging a file-search tool such as grep (or similar) to examine all of the application’s TurboIntegrator script files for a specific function or logic pattern that you may want to examine more closely. For example:
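TurboIntegrator processes are stored as .pro files in the server’s data directory, so a search such as the following (GNU grep shown; the path is a placeholder) lists every process that calls a function of interest:

# List the TI processes that call SaveDataAll (case-insensitive, file names only)
grep -il "SaveDataAll" /path/to/tm1/data/*.pro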

Saving Changed Data Methodology

Quickly scan the TI processes – do any call the SaveDataAll function to force in-memory changes to be committed to disk? SaveDataAll commits the in-memory changes of every cube in the server instance, which creates a period of time during which the TM1 server is locked to all users. In addition, depending on the volume of changes made since the last save, the time the function requires to complete will vary and will increase processing time. Rather than using SaveDataAll, CubeSaveData should be used to serialize (commit) in-memory changes for only a specific cube:

CubeSaveData(Cube);
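For instance, the Epilog of a load process can scope the commit to just the cube that changed (the cube name here is hypothetical):

# Epilog: commit only the cube this process changed
CubeSaveData('Sales');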

Logging Options

By default, TM1 logs all changes to all cubes as transactions. Transaction logging may be used to recover in the event of a server crash or other abnormal error or shutdown. A typical application does not need to log all transactions for all cubes. Transaction logging impacts overall server performance and increases processing time when changes are made in a “batch”. The best practice recommendation is to turn off logging for all cubes that do not require TM1 to recover lost changes. In other situations, it may be preferable to leave cube logging on for a particular cube but temporarily turn it off at the start of a (TurboIntegrator) process and then turn it back on after the process completes successfully:

CubeGetLogChanges(CubeName);

CubeSetLogChanges(Cube, LogChanges);
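Put together, the usual pattern is to capture the current logging state in the Prolog, switch it off for the batch, and restore it in the Epilog (a sketch; the cube name is hypothetical):

# Prolog: remember the current logging state, then disable logging for the batch
sCube = 'Sales';
nLogging = CubeGetLogChanges(sCube);
CubeSetLogChanges(sCube, 0);

# Epilog: restore the original logging state once the process has succeeded
CubeSetLogChanges(sCube, nLogging);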

Subset and View Maintenance

It is a best practice recommendation to avoid using the ViewDestroy and SubsetDestroy functions. These functions are memory intensive, cause potential locking/rollback situations, and impact performance. The appropriate approach is to use ViewExists and SubsetExists and, if the view or subset already exists, update it as required for the processing effort.

An additional good practice is to modify the view in the Epilog section, inserting a single leaf element into each of its subsets to reduce the view’s overall size in case a user accidentally opens these “not for users” views (see the sketch below).
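A minimal sketch of this create-once, update-always pattern follows (the cube, view, dimension, and element names are hypothetical):

# Prolog: reuse the process view/subset if present, otherwise create them
sCube = 'Sales';
sView = 'zSYS_Load';
sDim = 'Period';
sSub = 'zSYS_Load';
IF(SubsetExists(sDim, sSub) = 0);
  SubsetCreate(sDim, sSub);
ENDIF;
IF(ViewExists(sCube, sView) = 0);
  ViewCreate(sCube, sView);
ENDIF;
# Re-point the subset at the current slice and attach it to the view
SubsetDeleteAllElements(sDim, sSub);
SubsetElementInsert(sDim, sSub, '2024-Jan', 1);
ViewSubsetAssign(sCube, sView, sDim, sSub);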

CellIsUpdateable

CellIsUpdateable is a TM1 function that lets you determine whether a particular cube cell can be written to. The function is useful but impacts performance, since it uses the same logic and internal resources as an actual cell write (CellPutN or CellPutS). It is usually used as a defensive measure to avoid write errors or to simplify processing logic flow. It is a best practice recommendation to restrict or filter the data view being processed (eliminating the need for recurring CellIsUpdateable calls) if possible; this approach also decreases the volume of data transactions to be processed.
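When filtering the source view is not possible, the defensive check looks like this in the Data tab (a sketch for a hypothetical cube with Period and Product dimensions and an Amount measure):

# Data: write only if the target cell accepts input (not rule-derived or consolidated)
IF(CellIsUpdateable('Sales', vPeriod, vProduct, 'Amount') = 1);
  CellPutN(vValue, 'Sales', vPeriod, vProduct, 'Amount');
ENDIF;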

Span of Data

Even the best TM1 model designs have limitations on the volume of data that can be loaded into a cube. An optimal approach is to limit a cube to three years of “active” data, with prior years available in archival cubes. This reduces the size of the entire TM1 application and improves performance. Older data remains available for processing (sourced from one or more archival cubes). Keep the “active” cube loaded with only those years of data that have the highest chance of being required by a “typical” business process. Additionally, a formal process for both archiving and removing years of data should be put in place.
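One hedged way to script the archival step is a TurboIntegrator process whose data source is a view of the active cube filtered to the year being retired, with a Data tab that simply re-targets the write to an archive twin (the cube and variable names are hypothetical):

# Data: copy the retiring year from the active cube into its archive twin
CellPutN(vValue, 'Sales_Archive', vYear, vPeriod, vProduct, 'Amount');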

View Caching

TM1 caches views of data in memory for faster access (increasing performance). Once data changes, cached views become invalid and must be re-cached for viewing or processing. It may be beneficial to use the TM1 function ViewConstruct to “force” TM1 to pre-cache updated views before processing them. This function is useful for pre-calculating and storing large views so they can be quickly accessed after a data load or update:

ViewConstruct(CubeName, ViewName);
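For example, the Epilog of a nightly load might re-warm the heaviest reporting view (the cube and view names are hypothetical):

# Epilog: pre-calculate and cache the large reporting view after the load
ViewConstruct('Sales', 'Management Summary');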

Real-time (RT) to Near-real-time (NRT)

Generally speaking, consolidation is one of TM1’s greatest features. The TM1 engine will perform in-memory consolidations and drill-downs faster than any rule or TI process.

All data that is not loaded from external sources should be maintained using the most efficient means available: a consolidation, a TurboIntegrator process, or a rule. From a performance and resource-usage perspective, consolidations are the fastest and require the least memory, while rules are the slowest and require the most. Simply put, any data that cannot be calculated by a TM1 consolidation should be seriously evaluated to determine its change tempo (slow moving or fast moving). For example, data that changes little, changes only during a set period of time, or changes “on demand” is a good candidate for TI processing (near-real-time maintenance) rather than a rule. Wherever possible, rule calculation logic should be moved into a TurboIntegrator process so that as much data as possible is maintained that way. This reduces the size of the overall TM1 application and improves overall processing performance as well.
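As an illustration of the trade-off, a slow-moving calculation such as revenue = units x price could live in a rule, or it could be computed once per batch load in a TI Data tab (the cube, dimension, and measure names are hypothetical):

# Rule version (recalculated on demand; costs memory via feeders):
#   ['Revenue'] = N: ['Units'] * ['Price'];

# TI version - Data tab: computed once per batch load instead
CellPutN(vUnits * vPrice, 'Sales', vPeriod, vProduct, 'Revenue');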

Architectural Approach

It is an architectural best practice to organize an application’s logical components as separate and distinct from one another. This is known as “encapsulation” of business purpose. Each component should be purpose-based and optimized to best solve its individual purpose or need. For example, the part of the application that performs calculation and processing of information should be separated from the part that supports the consumption of, or reporting on, that information.

Applications whose architecture does not separate components by purpose are more difficult (more costly) to maintain and typically develop performance issues over time.

Architectural components can be separated within a single TM1 server instance or across multiple server instances. The “multiple servers” approach can be ideal in some cases and may:

  • Improve processing time as the processing instance would be a “system only” instance optimized for batch processing.
  • Eliminate locking situations as no client/user would have access to the “system only” instance.
  • Reduce the overall size of the “client/user” instance.
  • Improve maintainability by reducing the amount of code per instance.
  • Support scalability – the “system instance” can serve numerous future instances that may require profile and work plan generation.

Performance Profile

Profiling is the extrapolation of information about something, based on known qualities (baselines), to determine its behavior (performance) patterns. In practice, performance profiling means determining the average time and/or resources required to perform a particular task within an application. During performance testing, profiles are collected for selected application events that have established baselines. It is absolutely critical to follow the same procedure used to establish an event’s baseline when creating its profile. Application profiles are extremely valuable in:

  • Validation of application architecture
  • Application optimization and tuning
  • Predicting cost of application service

It is strongly recommended that a performance/stress test be performed, with the goal of establishing an application profile, as part of any “application performance” review.
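A lightweight way to begin collecting such profiles is to have each critical process log its own elapsed time (a sketch; it assumes NOW returns a serial date where one unit equals one day, so the difference is scaled to seconds):

# Prolog: record the start time
nStart = NOW;

# Epilog: log the elapsed time for this run against the established baseline
nElapsed = (NOW - nStart) * 86400;
LogOutput('INFO', 'Process completed in ' | NumberToString(nElapsed) | ' seconds');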

Conclusion

Overall, most TM1 applications will have various “to-dos”, such as fine-tuning of specific feeders and other optimizations, which may already be in process or scheduled to occur as part of the project plan. If an extended performance review is not feasible at this time, at least each of these suggestions should be reviewed and discussed to determine its individual feasibility, expected level of effort to implement, and effect on the overall design approach (keeping in mind these are only high-level – but important – suggestions).

Good Luck!

