Automation of Cognos TM1 Data Loading

In an earlier post I recommended an approach for loading large amounts of data into TM1 applications and gave a high-level explanation of that approach, which I call the “File Route” solution.

Some More Thoughts

I like this load strategy because, in following best-practice guidelines, it leverages TM1’s proven ETL tool, TurboIntegrator, to load the data. TurboIntegrator (or TI) is a fast and efficient method for loading row after row of data and can be programmed to handle most anticipated exception conditions. The following are a few more thoughts on implementing such a solution.
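To give a feel for how little code the loading itself requires, here is a minimal sketch of the Data tab of a TI process reading a delimited file. The cube and variable names are hypothetical; vVersion, vCostCenter, vAccount, and vValue would be defined (as strings) on the process’s Variables tab:

   # Data tab: executes once for every record in the source file.
   # Skip records that carry no value to load.
   IF( vValue @= '' );
      ItemSkip;
   ENDIF;
   # Write the record into the (hypothetical) input cube.
   CellPutN( StringToNumber( vValue ), 'Actuals Input',
             vVersion, vCostCenter, vAccount );

Everything else (exception handling, logging, validation) is layered around this core.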

Generally, “bulk data load” does not refer to the process of receiving formatted data files from source systems or directly querying a ledger for data. Rather, it refers to the need for a simple, robust process that lets business users enter large amounts of data into TM1 without having to type it in record by record. For example, when making top-side adjustments to a forecast it may be reasonable to expect a user to key in or edit certain cells in a cube (using a formatted cube view or WebSheet, for example), but if a user must provide actuals for hundreds of accounts across dozens of cost centers, then manual data entry isn’t going to work. This is where a “bulk load” solution comes in handy.

Think of a bulk load process as an assembly line that continuously moves from point A (the user’s desktop) to point B (a location within TM1). Users “drop” chunks of data onto a “receiving point” or “drop box,” where an automated, intelligent process receives the “package,” logs it as a request, performs some verification, and then signals to an individual loader process that there is work to be done. Once a loader process has processed the request (loaded the data), another process validates the results and notifies the requestor (the user). During each of these steps, the process keeps a status object (perhaps a cube) updated with the latest processing information, which can be viewed by the user.
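To make the “receiving point” concrete, here is a minimal sketch of what the Prolog tab of a watcher/dispatcher TI process might look like. The folder path and the process and parameter names (Bulk.Load.File, pFileName) are hypothetical:

   # Prolog tab: scan the drop box and dispatch a loader process
   # for each file found (all names are illustrative).
   sDropBox = '\\tm1server\dropbox\';
   sFile    = WildcardFileSearch( sDropBox | '*.csv', '' );
   WHILE( sFile @<> '' );
      # Hand the file off to a dedicated loader process.
      nResult = ExecuteProcess( 'Bulk.Load.File', 'pFileName', sDropBox | sFile );
      IF( nResult <> ProcessExitNormal() );
         # Record the failure for follow-up.
         ASCIIOutput( sDropBox | 'load_errors.log', sFile );
      ENDIF;
      # Fetch the next matching file, if any.
      sFile = WildcardFileSearch( sDropBox | '*.csv', sFile );
   END;

Scheduled as a TM1 chore at whatever cadence you choose, this loop gives you the “assembly line” described above.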

Keep the following architectural considerations in mind when designing your file route solution:

  • Users will need to adopt a somewhat standard format for submitting their data. “Somewhat” means there can be some flexibility in the format, but where formats differ, “logical rules” must be strictly enforced so that program logic can identify and understand the data (as the TM1 architect, you’ll need a good understanding of all data requirements for this!).
  • You will need to determine the cadence the solution uses to check for new data. How often? Every 30 minutes? Every hour? On demand? Settling on this will take some experimentation with average file sizes, data “overlaps,” security, and so on.
  • Security requirements. For example, how will you prevent users from submitting data on behalf of others? Will you need to restrict when, or how often, a user can submit data for load? What about maximum file sizes?
  • User feedback. It is absolutely imperative that you give each user a near real-time method for monitoring and auditing the status of their load requests (see the status-cube sketch after this list).
  • Expirations. You will need to consider the idea of “expiring” requests. Based upon a variety of factors, will a load request ever become “stale” and no longer need to be loaded?
  • Your solution must always load to absorption or input cubes, never to any key application calculation or business process cube.
  • After processing data files, always move them from the drop box folder to an offline location for archival and, if required, later audit (see the archive sketch after this list).
  • Build your solution to be as “generic” as possible! Do not “hard code” or “custom link” anything. The process should easily manage multiple load requests and, in a perfect world, be able to identify a file’s format (based upon its destination and perhaps other business logic), apply the appropriate business logic, and then load the data.
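On the user-feedback point above, near real-time status can be as simple as having each step of the pipeline write its state into a status cube. Here is a minimal sketch, assuming a hypothetical “Load Status” cube dimensioned by request ID and measure (all object names are illustrative):

   # Update the status cube as a load request moves through
   # the pipeline (pRequestID is a hypothetical process parameter).
   CellPutS( 'Loading', 'Load Status', pRequestID, 'Status' );
   CellPutS( TIMST( NOW, '\Y-\m-\d \h:\i:\s' ),
             'Load Status', pRequestID, 'Last Updated' );

Each process in the chain (receiver, loader, validator) would write its own status value, so the user can watch a request move from “Received” to “Loading” to “Validated” (or “Failed”).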
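And on the archival point, a TI process can shell out to the operating system from its Epilog tab to move a processed file out of the drop box. A sketch, assuming Windows and hypothetical folder paths:

   # Epilog tab: move the processed file to an archive folder
   # for later audit (paths and parameter names are illustrative).
   sSource  = '\\tm1server\dropbox\' | pFileName;
   sArchive = '\\tm1server\archive\';
   ExecuteCommand( 'cmd /c move "' | sSource | '" "' | sArchive | '"', 1 );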

Conclusion

There is more to think about when designing and implementing a usable, scalable bulk load solution, of course – but this approach works, and works well. I’ve used it (or something similar) throughout my career with great success. Don’t believe me? Need some specifics? Give me a call.

jm
