Cognos TM1 Server – On Start Up!

It is almost always advantageous to be able to “make sure” your Cognos TM1 environment is “ready for use” after a server restart. For example, you may want to:

  • Create a backup
  • Load the latest sales data
  • Initialize security
  • Etc.

Hopefully, you know what a TM1 chore is (a chore is a set of tasks, typically TurboIntegrator processes, that can be executed in sequence) and understand that, as an administrator, you could log in to TM1 and manually execute a chore or process – but there is a better way.

Let TM1 Server do it!

To have the TM1 server execute a chore immediately after it starts up (every time), you can leverage a TM1 configuration file parameter to designate a chore as a “startup chore”. This is similar to an MS Windows service that is set to “automatic” (which is most likely how your machine’s TM1 services are configured):

[Image: start1]

To indicate that your chore should be run when the server starts up, open the TM1 configuration file (TM1s.cfg) and add the parameter StartupChores.

You simply list your chore (or chores, separated by a colon), for example:

StartupChores=ChoreName1:ChoreName2:ChoreName3:ChoreNameN

Don’t worry too much about adding this to the configuration. If this parameter is not specified, then no chores will be run. If a chore name specified does not match an existing chore, an error is written to the server log and TM1 tries to execute the next chore indicated (if no valid chores are found, the server will simply start and become available as normal).

These chores run before the server becomes available (technically, the server is “up”, just not yet “available” to any user):

Startup chores run before user logins are accepted and before any other chores begin processing.
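For context, a minimal sketch of a TM1s.cfg containing the parameter might look like this (the server name, directories and chore name here are hypothetical):

[TM1S]
ServerName=Forecasting
DataBaseDirectory=D:\TM1\Forecasting\Data
LoggingDirectory=D:\TM1\Forecasting\Logs
StartupChores=Backup TM1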

Here is my example:

[Image: start2]

Once I restarted my server, I checked my server log and verified that the chore (Backup TM1) did in fact execute:

[Image: start3]

Note:

Since startup chores are run before any logins are allowed, you won’t be able to monitor them with tools like TM1Top or even Operations Console – and therefore there is no way to cancel a startup chore, with the exception of killing the server process.

Startup!

Rounding out Cognos TM1

Currently there are 4 options for rounding numbers in Cognos TM1. They are:

  • Rounding in reports.
  • Rounding during loads.
  • Rounding with the Cube Viewer.
  • Rounding with rules.

Rounding in Reports

The most popular method to apply rounding in TM1 is in reporting. Cognos TM1 leverages MS Excel for reporting and supports all of the formatting and calculations available within Excel. Typically, “templates” are created that apply the organization’s (or individual’s) desired formatting and/or rounding in a consistent way. In addition, Excel workbooks can be published to TM1Web for viewing by wider audiences (other reporting options, such as Cognos BI, also support formatting/rounding in report presentation).

The following is a simple illustration of using Excel formatting on TM1 data:

[Image: round1]

Rounding during loads

Another popular method for rounding numbers in TM1 is to round as data is being loaded (into TM1). This allows information to always be presented in the expected format (or precision) throughout the TM1 application. Based upon specific requirements, it is also common to model a TM1 application with reporting cubes to isolate the calculation and transactional processing from the reporting and presentation (of specific information). In this case, data may be transferred from a source cube to various reporting cubes and during that transfer process logic can be applied to round to the desired precision. Simply put, you may have a summary cube specifically for reporting that holds dollars rounded up to the nearest thousand.
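A hedged sketch of what that transfer logic might look like on the Data tab of a TurboIntegrator process (the cube, element and variable names here are hypothetical; ROUND rounds to the nearest integer, so dividing and multiplying by 1000 yields the nearest thousand):

# Data tab (sketch): round the incoming value and write it to a hypothetical reporting cube
nRounded = ROUND(nSourceValue / 1000) * 1000;
CellPutN(nRounded, 'SalesReporting', vLocation, vMonth, 'Dollars');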

Rounding with the Cube Viewer

The Cognos TM1 Cube Viewer does support some formatting options for viewing data in TM1 cubes. Although this method is somewhat elementary, a precision can be set which will invoke some level of rounding for display.

The format dialog in Cognos TM1:

[Image: round2]

Rounding with Rules

Finally, Cognos TM1 supports the ability to create cube rules that apply business logic to data in TM1. This business logic can include rounding. Generally speaking, as a user navigates through a cube, TM1 executes the rule (in real time) applying the logic to certain data intersection points within the cube. The result of the rule can be based upon just about any algorithm. Below is an example of very simple rounding logic.

The user-entered value is “Valueof” and there are 2 TM1 cube rules applied:

  • RoundedValue shows the “Valueof” with basic rounding logic applied (rounds up or down to the nearest thousand).
  • RoundedUp shows the “Valueof” with basic rounding-up logic applied (rounds up to the nearest thousand).
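A hedged sketch of what those two rules might look like (element names as shown above; the round-up arithmetic is one simple approach and assumes positive values):

['RoundedValue'] = N: ROUND(['Valueof'] / 1000) * 1000;
['RoundedUp'] = N: INT((['Valueof'] + 999) / 1000) * 1000;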

[Image: round3]

Recommendations

So what do I recommend? A best practice recommendation would be to evaluate your requirements and take the approach that best serves the model’s needs. From an enterprise perspective, the option that may serve best is to maximize flexibility by storing the raw numbers in TM1 and then either:

  • Using a reporting tool (such as Excel with defined formatting templates) to apply rounding
  • Creating reporting cubes within the model and loading data into those cubes at the appropriate precision and format for reporting

To be clear, it is important to understand that programmatically introducing rounding (either during a load/transfer of data or by TM1 cube rule) can introduce material differences in some consolidation situations (shown in the cube view image above as “All Locations”).

Simple Cognos TM1 Backup Best Practices

How do you create a recoverable backup for a TM1 server instance (TM1 service)? What is best practice? Here is some advice.

Note: as with any guideline or recommendation, there will be circumstances that support deviating from accepted best practice. In these instances, it is recommended that all key stakeholders involved agree that:

  • The reason for deviation is reasonable and appropriate
  • The alternative approach or practice being implemented is reasonable and appropriate

Definition of a Backup

“In information technology, a backup, or the process of backing up, refers to the copying and archiving of computer data so it may be used to restore the original after a data loss event. The verb form is to back up in two words, whereas the noun is backup” (http://en.wikipedia.org/wiki/Backup).

To be clear, what I mean to refer to here is the creation of an archived copy or image of a specified Cognos TM1 server instance at a specified moment in time that can be used to completely restore that TM1 server to the state it was in when the archive was created.

Procedure

The following outlines the steps recommended for creating a valid backup:

  1. Verify the current size of the TM1 server logs and database folders. Note that the location of these folders is specified in the TM1s.cfg file; look for “DataBaseDirectory” and “LoggingDirectory”. Should you restore from this backup, you should compare these sizes to the size totals after you complete the restore.
  2. Verify that there is available disk space to perform compression of the server logs and database folders and to save the resulting compressed file(s).
  3. Verify that you have appropriate access rights to:
    1. Stop and start TM1 services
    2. Create, save and move files on the appropriate file systems and servers
  4. Notify all TM1 users that the server will be shut down at a specified time
  5. Login to TM1 as a TM1 Admin (preferably the Admin ID, not a client ID granted admin access).
  6. Verify that all TM1 users have exited. (One way to do this is to right-click on the TM1 server (in TM1 Server Explorer) and select Server Manager…).
  7. Deactivate (turn off) any active or scheduled TM1 chores (Note: it is important to verify that you have available, up-to-date documentation on chore schedules before deactivating so that you can restore the correct chore schedule after the backup is complete).
  8. Make sure that any software that may have access to the TM1 logs and database folders (for example, virus scanning or automated backups) is temporarily disabled or not scheduled to run during the period of time that you will be creating a backup to avoid the chance of file lock conflicts.
  9. Perform a TM1 SaveDataAll.
  10. Logout of TM1.
  11. Stop the machine service for the TM1 server instance. Note: be sure that the service is not configured to “auto start”. Some environments may have services configured to startup automatically after a period of down time. It is imperative that the TM1 service does not start while a backup is being created.
  12. Verify that the service has stopped.
  13. Using a simple text editor such as MS Windows notepad, open and review the TM1 server log to verify that the TM1 service did stop and no errors occurred during shutdown.
  14. Using certified compression software such as 7-Zip, create a compressed file of the TM1 server logs folder (see the command-line sketch after this list)
  15. Using certified compression software such as 7-Zip, create a compressed file of the TM1 server database folder
  16. Rename the compressed files, typically adding a “_date” to the file name for later reference. For example “Forecasting_2014_09_11.zip”.
  17. Move the compressed files to a “work area” and verify that the files can be uncompressed.
  18. Move the compressed files to an area specified for archiving backups, typically one that is subject to an automated network backup. These files should be saved for an appropriate amount of time.
  19. Restart the machine service for the TM1 server instance.
  20. When the TM1 server is available again, login as a TM1 Admin verifying the server is accessible.
  21. Using a simple text editor such as MS Windows notepad, open and review the TM1 server log to verify that the TM1 service did start successfully and no errors occurred during startup.
  22. Reactivate the appropriate TM1 chores (based upon available documentation).
  23. Notify all TM1 users that the server is now available.
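To illustrate steps 14 through 16, here is a hedged Windows command-line sketch using 7-Zip (all paths and file names are hypothetical – substitute your own):

rem Compress the TM1 logging and database folders, naming the archives with a date
"C:\Program Files\7-Zip\7z.exe" a "E:\Backups\Forecasting_Logs_2014_09_11.zip" "D:\TM1\Forecasting\Logs\*"
"C:\Program Files\7-Zip\7z.exe" a "E:\Backups\Forecasting_2014_09_11.zip" "D:\TM1\Forecasting\Data\*"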

Conclusion

Certainly, some of the above steps could be eliminated in the process of creating a backup; however, in an enterprise environment where business processes depend upon availability and correctness, it is highly recommended that the outlined steps become standard operating procedure for creating your Cognos TM1 backups.

Common sense, right? Let’s hope so.

An Architectural Approach to Cognos TM1 Design

Over time, I’ve written about keeping your TM1 model design “architecturally pure”. What this means is that you should strive to keep a model’s “areas of functionality” distinct within your design.

Common Components

I believe that all TM1 applications, for example, are made up of only four distinct “areas of functionality”. They are absorption (of key information from external data sources), configuration (of assumptions about the absorbed data), calculation (where the specific “magic” happens; i.e. business logic is applied to the source data using the set assumptions) and consumption (of the information processed by the application and ready to be reported on).

Some Advantages

Keeping functional areas distinct has many advantages:

  • Reduces complexity and increases sustainability within components
  • Reduces the possibility of one component negatively affecting another
  • Increases the probability of reuse of the particular (distinct) components
  • Promotes a technology independent design; meaning components can be built using the technology that best fits their particular objective
  • Allows components to be designed, developed and supported by independent groups
  • Diminishes duplication of code, logic, data, etc.
  • Etc.

Resist the Urge

There is always a tendency to “jump in” and “do it all” using a single tool or technology – or, in the case of Cognos TM1, a few enormous cubes – and today, with every release of software, there are new “package connectors” that allow you to directly connect (even external) system components. In addition, you may “understand the mechanics” of how a certain technology works, which will allow you to “build” something; but without comprehensive knowledge of architectural concepts, you may end up with something that does not scale, has unacceptable performance or is costly to sustain.

Final Thoughts

Some final thoughts:

  • Try white boarding the functional areas before writing any code
  • Once you have your “like areas” defined, search for already existing components that may meet your requirements
  • If you do decide to “build new”, try to find other potential users for the new functionality. Could you partner and co-produce (and thus share the costs) a component that you both can use?
  • Before building a new component, “try out” different technologies. Which best serves the need of these components objectives? (A rule of thumb, if you can find more than 3 other technologies or tools that better fit your requirements than the technology you planned to use, you’re in trouble!).

And finally:

Always remember, just because you “can” doesn’t mean you “should”.

A Practice Vision

Vision

Most organizations today have had successes implementing technology, and they are happy to tell you about it. From a tactical perspective, they understand how to install, configure and use whatever software you are interested in. They are “practitioners”. But how many can bring a “strategic vision” to a project, or to your organization in general?

An “enterprise” or “strategic” vision is based upon an “evolutionary roadmap” that starts with the initial “evaluation and implementation” (of a technology or tool), continues with “building and using” and finally (hopefully) arrives at the organization, optimization and management of all of the earned knowledge (with the tool or technology). You should expect that whoever you partner with can explain what their practice vision or methodology is or, at least, talk to the “phases” of the evolution process:

Evaluation and Implementation

The discovery and evaluation that takes place with any new tool or technology is the first phase of a practice’s evolution. A practice should be able to explain how testing is accomplished and what it covers. How did they determine whether the tool/technology to be used will meet or exceed your organization’s needs? Once a decision is made, are they practiced at the installation, configuration and everything that may be involved in deploying the new tool or technology for use?

Build, Use, Repeat

Once deployed, and “building and using” components with that tool or technology begins, the efficiency with which these components are developed, as well as the level of quality of those components, will depend upon the level of experience (with the technology) that a practice possesses. Typically, “building and using” is repeated with each successful “build” – so how many times has the practice successfully used this technology? By human nature, once a solution is “built” and seems correct and valuable, it will be saved and used again. Hopefully, this solution will have been shared as a “knowledge object” across the practice. Although most may actually reach this phase, it is not uncommon to find:

  • Objects with similar or duplicate functionality (they reinvented the wheel over and over).
  • Poor naming and filing of objects (no one but the creator knows it exists or perhaps what it does)
  • Objects not shared (objects visible only to specific groups or individuals, not the entire practice)
  • Objects that are obsolete or do not work properly or optimally are being used.
  • Etc.

Manage & Optimization

At some point, usually while (or after a certain number of) solutions have been developed, a practice will “mature its development or delivery process” to the point that it will begin investing time and perhaps dedicate resources to organize, manage and optimize its developed components (i.e. “organizational knowledge management”, sometimes known as IP or intellectual property).

You should expect a practice to have a recognized practice leader and a “governing committee” to help identify and manage knowledge developed by the practice and:

  • inventory and evaluate all known (and future) knowledge objects
  • establish appropriate naming standards and styles
  • establish appropriate development and delivery standards
  • create, implement and enforce a formal testing strategy
  • continually develop “the vision” for the practice (and perhaps the industry)

 

More

As I’ve mentioned, a practice needs to take a strategic or enterprise approach to how it develops and delivers, and to do this it must develop its “vision”. A vision will ensure that the practice is leveraging its resources (and methodologies) to achieve the highest rate of success today and over time. This is not simply “administrating the environment” or “managing the projects” but involves structured thought, best practices and a continued commitment to evolved improvement. What is your vision?

IBM OpenPages GRC Platform – modular methodology

The OpenPages GRC platform includes 5 main “operational modules”. These modules are each designed to address specific organizational needs around Governance, Risk, and Compliance.

Operational Risk Management module “ORM”

The Operational Risk Management module is a document and process management tool which includes a monitoring and decision support system, enabling an organization to analyze, manage, and mitigate risk simply and efficiently. The module automates the process of identifying, measuring, and monitoring operational risk by combining all risk data (such as risk and control self-assessments, loss events, scenario analysis, external losses, and key risk indicators (KRIs)) in a single place.

Financial Controls Management module “FCM”

The Financial Controls Management module reduces time and resource costs associated with compliance for financial reporting regulations. This module combines document and process management with awesome interactive reporting capabilities in a flexible, adaptable easy-to-use environment, enabling users to easily perform all the necessary activities for complying with financial reporting regulations.

Policy and Compliance Management module “PCM”

The Policy and Compliance Management module is an enterprise-level compliance management solution that reduces the cost and complexity of compliance with multiple regulatory mandates and corporate policies. This module enables companies to manage and monitor compliance activities through a full set of integrated functionality:

  • Regulatory Libraries & Change Management
  • Risk & Control Assessments
  • Policy Management, including Policy Creation, Review & Approval and Policy Awareness
  • Control Testing & Issue Remediation
  • Regulator Interaction Management
  • Incident Tracking
  • Key Performance Indicators
  • Reporting, monitoring, and analytics

IBM OpenPages IT Governance module “ITG”

This module aligns IT services, risks, and policies with corporate business initiatives, strategies, and operational standards, allowing the management of internal IT control and risk according to the business processes they support. In addition, this module unites “silos” of IT risk and compliance, delivering visibility, better decision support, and ultimately enhanced performance.

IBM OpenPages Internal Audit Management module “IAM”

This module provides internal auditors with a view into an organization’s governance, risk, and compliance, affording the chance to supplement and coexist with broader risk and compliance management activities throughout the organization.

One Solution

The IBM OpenPages GRC Platform modules (“ORM”, “FCM”, “PCM”, “ITG” and “IAM”) interactively deliver a superior solution for Governance, Risk, and Compliance. More to come!

The Installation Process – IBM OpenPages GRC Platform

When preparing to deploy the OpenPages platform, you’ll need to follow these steps:

  1. Determine which server environment you will deploy to – Windows or AIX.
  2. Determine your topology – how many servers will you include as part of the environment? Multiple application servers? 1 or more reporting servers?
  3. Perform the installation of the OpenPages prerequisite software for the chosen environment – and for each server’s designated purpose (database, application or reporting).
  4. Perform the OpenPages installation, being conscious of the software that is installed as part of that process.

Topology

Depending upon your needs, you may find that you’ll want to use separate servers for your application, database and reporting servers. In addition, you may want to add additional application or reporting servers to your topology.

[Image: topo]

After the topology is determined, you can use the following information to prepare your environment. I recommend clean installs, meaning starting with fresh or new machines – and VMs are just fine (“The VMWare performance on a virtualized system is comparable to native hardware. You can use the OpenPages hardware requirements for sizing VM environments” – IBM).

(Note – the following applies if you’ve chosen to go with Oracle rather than DB2):

MS Windows Servers

All servers that will be part of the OpenPages environment must have the following installed before proceeding:

  • Microsoft Windows Server 2008 R2 and later Service Packs (64-bit operating system)
  • Microsoft Internet Explorer 7.0 (or 8.0 in Compatibility View mode)
  • A file compression utility, such as WinZip
  • A PDF reader (such as Adobe Acrobat)

The Database Server

In addition to the above “all servers” software, your database server will require the following software:

  • Oracle 11gR2 (11.2.0.1) and any higher Patch Set – the minimum requirement is Oracle 11.2.0.1 October 2010 Critical Patch Update.

Application Server(s)

Again, in addition to the above “all servers” software, the server that hosts the OpenPages application modules should have the following software installed:

  • JDK 1.6 or greater, 64-bit. Note: this is a prerequisite only if your OpenPages product does not include WebLogic Server.
  • Application Server Software (one of the following two options)

o   IBM WebSphere Application Server ND 7.0.0.13 and any higher Fix Pack. Note: the minimum requirement is WebSphere 7.0.0.13.

o   Oracle WebLogic Server 10.3.2 and any higher Patch Set. Note: the minimum requirement is Oracle WebLogic Server 10.3.2. This is a prerequisite only if your OpenPages product does not include Oracle WebLogic Server.

  • Oracle Database Client 11gR2 (11.2.0.1) and any higher Patch Set

Reporting Server(s)

The server that you intend to host the OpenPages CommandCenter must have the following software installed (in addition to the above “all servers” software):

  • Microsoft Internet Information Services (IIS) 7.0 or Apache HTTP Server 2.2.14 or greater
  • Oracle Database Client 11g R2 (11.2.0.1) and any higher Patch Set

During the OpenPages Installation Process

As part of the OpenPages installation, the following is installed automatically:

 

For Oracle WebLogic Server & IBM WebSphere Application Server environments:

  • The OpenPages application
  • Fujitsu Interstage Business Process Manager (BPM) 10.1
  • IBM Cognos 10.2
  • OpenPages CommandCenter
  • JRE 1.6 or greater

If your OpenPages product includes the Oracle WebLogic Server:

  • Oracle WebLogic Server 10.3.2

If your OpenPages product includes the Oracle Database:

  • Oracle Database Server Oracle 11G Release 2 (11.2.0.1) Standard Edition with October 2010 CPU Patch (on a database server system)
  • Oracle Database Client 11g Release 2 (11.2.0.1) with October 2010 CPU Patch applied 64-bit (on an application server system)
  • Oracle Database Client 11g Release 2 (11.2.0.1) with October 2010 CPU Patch applied 32-bit (on a reporting server system)

 Thanks!

IBM OpenPages Start-up

In the beginning…

OpenPages was a company “born” in Massachusetts, providing Governance, Risk, and Compliance software and services to customers. Founded in 1996, OpenPages had more than 200 customers worldwide, including Barclays, Duke Energy, and TIAA-CREF. On October 21, 2010, OpenPages was officially acquired by IBM:

http://www-03.ibm.com/press/us/en/pressrelease/32808.wss

What is it?

OpenPages provides a technology driven way of understanding the full scope of risk an organization faces. In most cases, there is extreme fragmentation of a company’s risk information – like data collected and maintained in numerous disparate spreadsheets – making aggregation of the risks faced by a company extremely difficult and unmanageable.

Key Features

IBM’s OpenPages GRC Platform can help by providing many capabilities to simplify and centralize compliance and risk management activities. The key features include:

  • Provides a shared content repository that can (logically) present the processes, risks and controls in many-to-many and shared relationships.
  • Supports the import of corporate data and maintains an audit trail ensuring consistent regulatory enforcement and monitoring across multiple regulations.
  • Supports dynamic decision making with its CommandCenter interface, which provides interactive, real-time executive dashboards and reports with drill-down.
  • Is simple to configure and localize with detailed user-specific tasks and actions accessible from a personal browser based home page.
  • Provides for Automation of Workflow for management assessment, process design reviews, control testing, issue remediation and sign-offs and certifications.
  • Utilizes Web Services for Integration. OpenPages utilizes the OpenAccess API to interoperate with leading third-party applications to enhance policies and procedures with actual business data.

Understanding the Topology

The OpenPages GRC Platform consists of the following 3 components:

  • 1 database server
  • 1 or more application servers
  • 1 or more reporting servers

Database Server

The database is the centralized repository for metadata, (versions of) application data, and access control. OpenPages requires a set of database users and a tablespace (referred to as the “OpenPages database schema”). These database components install automatically during the OpenPages application installation, configuring all of the required elements. You can use either Oracle or DB2 for your OpenPages GRC Platform repository.

 Application Server(s)

The application server is required to host the OpenPages applications. The application server runs the application modules, and includes the definition and administration of business metadata, UI views, user profiles, and user authorization.

 Reporting Server

The OpenPages CommandCenter is installed on the same computer as IBM Cognos BI and acts as the reporting server.

Next Steps

An excellent next step would be to visit the IBM site and review the available slides and whitepapers. After that, stay tuned to this blog!

Configuring Cognos TM1 Web with Cognos Security

Recently I completed upgrading a client’s IBM Cognos environment – both TM1 and BI. It was a “jump” from Cognos 8 to version 10.2, and TM1 9.5 to version 10.2.2. In this environment, we had multiple virtual servers (Cognos lives on one, TM1 on one and the third is the gateway/webserver).

Once the software was all installed and configured (using IBM Cognos Configuration and, yes, you still need to edit the TM1 configuration cfg file), we started the services and (it appeared) everything looked good. I spun through the desktop applications (Perspectives, Architect, etc.) and then went to the Web browser, first to test TM1Web:

http://stingryweb:9510/tm1web/

The familiar page loads:

[Image: 01]

But when I enter my credentials, I get the following:

[Image: 02]

Go to Google

Since an installation and configuration is not something you do every day, a quick Google search reveals that there are evidently 2 files that the installation placed on the web server that belong on the Cognos BI server. These files need to be located, edited and then copied to the correct location for TM1Web to use IBM Cognos authentication security.

What files?

There are 2 files: an XML file (variables_TM1.xml.sample) and an HTML file (tm1web.html). These can be found on the server on which you installed TM1Web – or can they? It turns out they are not found individually but are included in zip files:

Tm1web_app.zip (that is where you’ll find the xml file) and tm1web_gateway.zip (and that is where you will find tm1web.html):

[Image: 03]

I found mine in:

Program Files\ibm\cognos\tm1_64\webapps\tm1web\bi_files

Make them your own

Once you unzip (the files) you need to rename the xml file (to drop the “.sample”) and place it onto the Cognos BI server in:

Program Files\ibm\cognos\c10_64\templates\ps\portal.

Next, edit the file (even though it’s an XML file, it’s small, so you can use Notepad). What you need to do is modify the URLs within the <urls> tags (the “localhost” string should be replaced with the name of the server running TM1Web). You’ll find three (one for TM1WebLogin.aspx, one for TM1WebLoginHandler.aspx and one for TM1WebMain.aspx).
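For example, the relevant section might end up looking something like this (a hedged sketch – the exact element names and attributes in your variables_TM1.xml may differ, but each of the three URLs should name your TM1Web server):

<urls>
    <url>http://stingryweb:9510/tm1web/TM1WebLogin.aspx</url>
    <url>http://stingryweb:9510/tm1web/TM1WebLoginHandler.aspx</url>
    <url>http://stingryweb:9510/tm1web/TM1WebMain.aspx</url>
</urls>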

Now, copy your tm1web.html file to (on the Cognos BI server)

Program Files\ibm\cognos\c10_64\webcontent\tm1\web and edit it (again, you can use notepad). One more thing, the folder “tm1” may need to be manually created.

The html file update is straightforward (you need to point to where Cognos TM1 Web is running) and there is only a single line in the file. You change:

var tm1webServices = ["http://localhost:8080"];

To:

var tm1webServices = ["http://stingryweb:9510"];

 

Now, after stopping and starting the server’s web services:

[Image: 04]

The above steps are simple; you just need to be aware of these extra, very manual steps.

Perficient takes Cognos TM1 to the Cloud

IBM Cognos TM1 is well-known as the planning, analysis, and forecasting software that delivers flexible solutions to address requirements across an enterprise, as well as providing real-time analytics, reporting, and what-if scenario modeling – and Perficient is well-known for delivering expertly designed TM1-based solutions.

Analytic Projects

Perhaps phase zero of a typical analytics project would involve our topology experts determining the exact server environment required to support the implementation of a number of TM1 servers (based upon not only industry-proven practices, but our own breadth of practical “in the field” experience). Next would be the procurement and configuration of said environment (and prerequisite software) and finally the installation of Cognos TM1.

It doesn’t stop there

As TM1 development begins, our engineers work closely with internal staff not only to outline processes for the (application and performance) testing and deployment (of developed TM1 models) but also to establish a maintainable support structure for after the “go live” date. “Support” includes not only the administration of the developed TM1 application but the “road map” to assign responsibilities such as:

  • Hardware monitoring and administration
  • Software upgrades
  • Expansion or reconfiguration based upon additional requirements (i.e. data or user base changes or additional functionality or enhancements to deployed models)
  • And so on…

Teaming Up

Earlier this year the Perficient analytics team teamed up with the IBM Cloud team to offer an interesting alternative to the “typical”: Cognos TM1 as a service in the cloud.

Using our internal TM1 models and colleagues literally all over the country, we evaluated and tested the viability of a fully cloud based TM1 solution.

What we found was, it works and works well, offering unique advantages to our customers:

  • Lowers the “cost of entry” (getting TM1 deployed)
  • Lowers the total cost of ownership (ongoing “care and feeding”)
  • Reduces the level of capital expenditures (doesn’t require the procurement of internal hardware)
  • Reduces IT involvement (and therefore expense)
  • Removes the need to plan for, manage and execute upgrades when newer releases are available (new features are available sooner)
  • (Licensed) users anywhere in the world have access from day 1 (regardless of internal constraints)
  • Provides for the availability of auxiliary environments for development and testing (without additional procurement and support)

In the field

Once we were intimate with all of the “ins and outs” of TM1 10.2 on a cloud platform, we were able to work directly with IBM to demonstrate how a cloud-based solution would address the specific needs of one of our larger customers. After that, the Perficient team “on the ground” developed and deployed a “proof of concept” using real customer data, and partnered with the customer for the “hands on” evaluation and testing. Once the results were in, it was unanimous: “full speed ahead!”.

A Versatile platform

During the project life-cycle, the cloud environment was seamless, allowing Perficient developers to work (at the client site or remotely) and complete all necessary tasks without issue. The IBM cloud team was available (24/7) to analyze any perceived bottlenecks and, when required, to “tweak” things per the Perficient team’s suggestions, ensuring an accurately configured cloud and a successful, on-time solution delivery.

Bottom Line

Built upon our internal team’s experience and IBM’s support, our delivered cloud-based solution is robust, cutting edge and infinitely scalable.

Major takeaways

Even given everyone’s extremely high expectations, the project team was delighted and reported back the following major takeaways from the experience:

  • There is no “hardware administration” to worry about
  • No software installation headaches to hold things up!
  • The cloud provided an accurately configured VM – including dedicated RAM and CPU based exactly upon the needs of the solution.
  • The application was easily accessible, yet also very secure.
  • Everything was “powerfully fast” – we did not experience any “WAN effects”.
  • 24/7 support provided by the IBM cloud team was “stellar”
  • The managed RAM and “no limits” CPUs set things up to take full advantage of features like TM1’s MTQ.
  • The users could choose a complete web based experience or install CAFÉ on their machines.

In addition, IBM Concert (provided as part of the cloud experience) is a (quote) “wonderful tool for our user community to combine both TM1 & BI to create intuitive workflows and custom dashboards”.

More to Come

To be sure, you’ll be hearing much more about Concert & Cognos in the cloud and when you do, you can count on the Perficient team for expert delivery.

Exercising IBM Cognos Framework Manager

In Framework Manager, an expression is any combination of operators, constants, functions, and other components that evaluates to a single value. You can build expressions to create calculation and filter definitions. A calculation is an expression that you use to create a new value from existing values contained within a data item. A filter is an expression that you use to retrieve a specific subset of records. Let’s walk through a few simple examples:

Using a Session Parameter

I’ve talked about session parameters in Framework Manager in a previous post (a session parameter is a variable that IBM Cognos Framework Manager associates with a session – for example, user ID and preferred language – and you can also create your own).

It doesn’t matter if you use a default session parameter or one you’ve created, it’s easy to include a session parameter in your Framework Manager Meta Model.

Here is an example.

In a Query Subject (a query subject is a set of query items that have a relationship and are used to optimize the data being received for reporting), you can click on the Calculations tab and then click Add.

Framework Manager shows the Calculation Definition dialog where you can view and select from the Available Components to create a new Calculation. The Components are separated into 3 types – Model, Functions and Parameters.

I clicked on Parameters and then expanded Session Parameters. Here FM lists all of the default parameters, and any I’ve created as well. I selected current_timestamp to add as my Expression definition (note – FM wraps the expression with the # character to indicate that it’s a macro that will be resolved at runtime).

During some additional experimentation I found:

  • You can add a reasonable name for your calculation
  • You may have to (or want to) nest functions within the expression statement (i.e. I’ve added the function “sq” as an example; this function wraps the returned value in single quotes). Hint: the more functions you nest, the slower the performance, so think it through.
  • If you’ve got the expression correct (the syntax anyway), the blue Run arrow lights up and you can test the expression and view the results the lower right hand pane of the dialog. Tips will show you errors/Results will show the runtime result of your expression.
  • Finally, you can click OK to save your calculation expression with your Query Subject.
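Putting those steps together, the finished expression definition for this example ends up as a one-line macro (based on the steps above; sq wraps the resolved timestamp in single quotes):

# sq( $current_timestamp ) #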

[Image: june1]

Filtering

Filtering works the same way as calculations. In my example I’m dealing with parts and inventories. If I’d like to create a query subject that perhaps lists only part numbers with a current inventory count of 5 or less, I can set a filter by clicking on the Filter tab and then Add (just like we just did for the calculation).

This time I can select the column InventoryCount from the Model tab and add it as my Expression definition. From there I can grab the “less than or equal to” operator (you can type it directly or select it from the Function list).
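The resulting filter expression would look something like this (the namespace and query subject path here are hypothetical – yours will reflect your own model):

[Database Layer].[PartInventory].[InventoryCount] <= 5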

[Image: june2]

Filter works the same as Calculation as far as syntax and tips (but it does not give you a chance to preview your result or the effect of your filter).

Click OK to save your filter.

JOIN ME

Finally, my inventory report is based upon the SQL table named PartInventory which only provides a part number and an inventory count. I’d like to add part descriptions (which are in a table named simply “Part”) to my report so I click on the SQL tab and create a simple join query (joining the tables using PartNo):
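A hedged sketch of what that join might look like (PartName is an assumed column name – the post only tells us that the Part table carries the part descriptions):

SELECT P.PartNo,
       P.PartName,
       I.InventoryCount
FROM Part P
INNER JOIN PartInventory I
        ON P.PartNo = I.PartNo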

[Image: june3]

To make sure everything looks right, I can click on the tab named Test and then click Test Sample.

You can see that you have a part name for each part number, the session parameter time stamp is displayed for each record, and only those parts in the database where the inventory count is 5 or less are returned:

[Image: june4]

By the way, back on the SQL tab, you can:

  • Clear everything (and start over)
  • Enter or Modify SQL directly (remember to click the Validate button to test your code)
  • Insert an additional data source into your Query subject to include data from another source, perhaps an entirely different SQL database.
  • Insert a macro. For example, you can add inline macro functions to your SQL query.

Here is an example:

#$Corvette_Year_Grouping{$CarYear}#

Notice the # character to indicate the code within is a function to be resolved within the SQL query.

This code uses a parameter map (I’ve blogged about PM’s in the past) to convert a session parameter (set to a particular vehicle model year) to the name of a particular SQL table column (and include that column of information in my query subject result). So in other words, the database table column included in the query result will be decided at run time.
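In context, the macro might be embedded in the query along these lines (a sketch – the table and the companion column are hypothetical):

SELECT PartNo,
       #$Corvette_Year_Grouping{$CarYear}#
FROM Part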

[Image: june5]

And our result:

[Image: june6]

You can see that these are simple but thought-provoking examples of the power of IBM Cognos Framework Manager.

Framework Manager is a metadata modeling tool that drives query generation for Cognos BI reporting. Every reporting project should begin with a solid meta model to ensure success. More to come…

Framework Manager – Creating a Parameter Map

A session parameter is a variable that IBM Cognos Framework Manager associates with a particular session. Examples include the current user name, the current active language, and the current date and time, among others. Parameter maps are a method for substituting different values for different keys.

A parameter map can be thought of as simple data “look-up table”.

Each parameter map has two columns:

  • a key column and
  • a value column (holding the value that the key represents).

In Cognos TM1, Lookup (or mapping) cubes (and dimensions) are common (and I’ve blogged on them before).

So let’s create a simple Framework Manager Parameter Map:

Well, to construct your map, you can:

  • enter the keys and values (for your map) manually,
  • import them from an external file, or
  • base them on query items in your Meta model

– it all depends upon the size and/or complexity of the parameter map you need to build.

Some helpful hints:

  • All parameter map keys must be unique so that Framework Manager can reliably obtain the correct value!
  • The value of one parameter can be the value of another parameter; in that case, you must enclose the entire value in number signs (#).
  • There is a limit of five levels when nesting parameters in this way.

So let’s look at an example exercise. I chose to use the “source file” method to create my map.

In Framework Manager, right-click on the Parameter Maps icon, then select Create and Parameter Map:

[Image: TPM1]

From there, you can enter a name for your parameter map.

Since I am converting (or mapping) (Corvette) part numbers into part descriptions, I’m naming my new parameter map:

“Keen Corvette Restoration Parts”,

and then selecting the option “Manually enter the parameter keys and/or import them from a file”:

[Image: tpm2]

On the Create Parameter Map Wizard dialog, I entered a default value (a value to be used if a key doesn’t have a value in your map) and then clicked on Import File…

[Image: tpm3]

I navigated to and selected my source file (to use a .txt file for import, the values must be separated by tabs and the file must be saved in UTF-8 or Unicode format; ANSI text files are not supported):
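For reference, a tab-separated source file for a map like this might look as follows (the part numbers and descriptions are made up for illustration):

110243	1969 Corvette Door Handle
110781	1969 Corvette Hood Emblem
112005	1969 Corvette Tail Lamp Bezel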

[Image: tpm4]

I clicked OK and Framework Manager created my parameter map. It looks good (it does!), so I clicked on Finish:

[Image: tpm5]

And you can see my new map now existing in my project:

[Image: tpm6]

Done!

If you double-click on your map, the Parameter Map dialog opens again, where you can clear your map, import a new source file (to overlay or add to your map), add new specific keys, export your map or edit it directly.

Next time I will illustrate how to use the new parameter map!
