
Loading Cubes – with Cognos TM1 Performance Modeler

So we’ve created a simple sample cube in Modeler (this cube has only 2 dimensions) and we want to load some data into it. To do that, you can (just as in TM1 Architect) right-click on the “Processes” folder:

[Screenshot: new process menu option]

And then provide a name for the new process:

[Screenshot: naming the process]

Performance Modeler now presents the “edit process” panel with the “Data Source” tab open. Here you can browse to select the file to be imported and fill in the “Source Details” describing it. These details include:

  • Field Delimiter
  • Starting Row
  • Column Headers Included (yes or no)
  • Quote Character
  • Decimal and Thousands Separators
  • Server File Location

Once you fill in this information, Modeler shows you the columns (fields) of data in your file and a “Data Preview”:

[Screenshot: file and format settings]

In this step you need to check the columns you want to import and also (very important) indicate which column will be treated as the “measure” (in my file it was column 3).

Important Step!

The next step is critical. You need to click on the “Maps” tab and then click “Show Properties” (in the upper right of the tab):

[Screenshot: process properties]

The properties tab opens and you must select the:

  • Process Input Type
  • Process Target Name
  • Process Update Behavior

These properties are important, since the default behaviors of a new process created in Performance Modeler might not be what you expect. For my example, I chose “Import to existing cube”, named my (target) cube, and chose “Add to existing values”.

Mapping

Performance Modeler requires you to indicate what it should do with each column (field) in your input file. My file has only 3 columns:

[Screenshot: input file]
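As an illustration, a comma-delimited input file like the one described here might look something like the following (the column names and values are hypothetical, not taken from the article). With “Column Headers Included” set to yes and the field delimiter set to a comma, the first row names the columns and column 3 holds the measure:

```
Region,Month,Amount
North,Jan,1500
South,Jan,900
North,Feb,1750
```

Columns 1 and 2 would then map to the cube’s two dimensions, and column 3 to “Value for”.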

When I designated my target cube, the mapping tab changed.

On the left you see the 3 fields in the input file; on the right, the 2 dimensions in the cube, plus a “Value for” entry.

I needed to drag and drop each input column (one at a time) onto the dimension it corresponds, or “maps”, to. Column 3 is the actual numeric value to be loaded and should be mapped to the “Value for” indicator:

[Screenshot: mappings]

Note: It’s always a good idea to SAVE your process after each change you make to its definition.

At this point, notice that you could click on the Advanced tab and add custom scripting to the TurboIntegrator Prolog, Metadata, Data, or Epilog tabs, but for this simple example, let’s just execute it.
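For a flavor of what such custom scripting can look like, here is a minimal sketch of a Data-tab snippet in TurboIntegrator script. This is only an illustration: the variable names (vDim1, vDim2, vValue) and the cube name ('MyCube') are assumptions, not from this walkthrough. Note that the “Add to existing values” behavior chosen above corresponds to CellIncrementN, which accumulates into the cell, rather than CellPutN, which overwrites it.

```
# Hypothetical Data-tab snippet. vDim1, vDim2, vValue and the
# cube name 'MyCube' are assumed names for this sketch.

# Skip rows that carry no value.
IF ( vValue @= '' );
   ItemSkip;
ENDIF;

# 'Add to existing values': accumulate into the target cell.
CellIncrementN ( StringToNumber ( vValue ), 'MyCube', vDim1, vDim2 );
```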

Right-click (on the process name) and select Execute.

[Screenshot: successful execution]

Now when we look at the cube, we will see the results of our data load:

[Screenshot: results]

Terrific. It’s slightly different than building a load process in Architect, but it works. Next time I’ll show how I added custom scripting to this process, plus some other cool stuff.

Cheers!


Jim Miller
