Perficient Enterprise Information Solutions Blog


IBM OpenPages GRC Platform –modular methodology

The OpenPages GRC platform includes 5 main “operational modules”. These modules are each designed to address specific organizational needs around Governance, Risk, and Compliance.

Operational Risk Management module “ORM”

The Operational Risk Management module is a document and process management tool that includes a monitoring and decision support system, enabling an organization to analyze, manage, and mitigate risk simply and efficiently. The module automates the process of identifying, measuring, and monitoring operational risk by combining all risk data (such as risk and control self-assessments, loss events, scenario analysis, external losses, and key risk indicators (KRIs)) into a single place.

Financial Controls Management module “FCM”

The Financial Controls Management module reduces the time and resource costs associated with compliance for financial reporting regulations. This module combines document and process management with interactive reporting capabilities in a flexible, adaptable, easy-to-use environment, enabling users to easily perform all the activities necessary for complying with financial reporting regulations.

Policy and Compliance Management module “PCM”

The Policy and Compliance Management module is an enterprise-level compliance management solution that reduces the cost and complexity of compliance with multiple regulatory mandates and corporate policies. This module enables companies to manage and monitor compliance activities through a full set of integrated functionality:

  • Regulatory Libraries & Change Management
  • Risk & Control Assessments
  • Policy Management, including Policy Creation, Review & Approval and Policy Awareness
  • Control Testing & Issue Remediation
  • Regulator Interaction Management
  • Incident Tracking
  • Key Performance Indicators
  • Reporting, monitoring, and analytics

IBM OpenPages IT Governance module “ITG”

This module aligns IT services, risks, and policies with corporate business initiatives, strategies, and operational standards, allowing internal IT controls and risks to be managed according to the business processes they support. In addition, this module unites “silos” of IT risk and compliance, delivering visibility, better decision support, and ultimately enhanced performance.

IBM OpenPages Internal Audit Management module “IAM”

This module provides internal auditors with a view into an organization’s governance, risk, and compliance activities, allowing internal audit to supplement and coexist with broader risk and compliance management activities throughout the organization.

One Solution

The IBM OpenPages GRC Platform modules (“ORM”, “FCM”, “PCM”, “ITG” and “IAM”) share a common object model and together deliver a superior solution for Governance, Risk, and Compliance. More to come!

The installation Process – IBM OpenPages GRC Platform

When preparing to deploy the OpenPages platform, you’ll need to follow these steps:

  1. Determine which server environment you will deploy to – Windows or AIX.
  2. Determine your topology – how many servers will you include as part of the environment? Multiple application servers? 1 or more reporting servers?
  3. Perform the installation of the OpenPages prerequisite software for the chosen environment – and for each server’s designated purpose (database, application, or reporting).
  4. Perform the OpenPages installation, being conscious of the software that is installed as part of that process.

Topology

Depending upon your needs, you may find that you’ll want to use separate servers for the application, database, and reporting tiers. In addition, you may want to add extra application or reporting servers to your topology.

 

 

[Figure: example deployment topology]

After the topology is determined, you can use the following information to prepare your environment. I recommend clean installs, meaning starting with fresh or new machines – and VMs are just fine (“The VMWare performance on a virtualized system is comparable to native hardware. You can use the OpenPages hardware requirements for sizing VM environments” – IBM).

(Note: the following applies if you’ve chosen Oracle rather than DB2.)

MS Windows Servers

All servers that will be part of the OpenPages environment must have the following installed before proceeding:

  • Microsoft Windows Server 2008 R2 and later Service Packs (64-bit operating system)
  • Microsoft Internet Explorer 7.0 (or 8.0 in Compatibility View mode)
  • A file compression utility, such as WinZip
  • A PDF reader (such as Adobe Acrobat)

The Database Server

In addition to the above “all servers” software, your database server will require the following software:

  • Oracle 11gR2 (11.2.0.1) and any higher Patch Set – the minimum requirement is Oracle 11.2.0.1 October 2010 Critical Patch Update.

Application Server(s)

Again, in addition to the above “all servers” software, the server that hosts the OpenPages application modules should have the following software installed:

  • JDK 1.6 or greater, 64-bit (Note: this is a prerequisite only if your OpenPages product does not include WebLogic Server)
  • Application Server software (one of the following two options):
      o IBM WebSphere Application Server ND 7.0.0.13 and any higher Fix Pack (minimum requirement is WebSphere 7.0.0.13)
      o Oracle WebLogic Server 10.3.2 and any higher Patch Set (minimum requirement is Oracle WebLogic Server 10.3.2; this is a prerequisite only if your OpenPages product does not include Oracle WebLogic Server)
  • Oracle Database Client 11gR2 (11.2.0.1) and any higher Patch Set

Reporting Server(s)

The server that you intend to host the OpenPages CommandCenter must have the following software installed (in addition to the above “all servers” software):

  • Microsoft Internet Information Services (IIS) 7.0 or Apache HTTP Server 2.2.14 or greater
  • Oracle Database Client 11g R2 (11.2.0.1) and any higher Patch Set

During the OpenPages Installation Process

As part of the OpenPages installation, the following is installed automatically:

 

For Oracle WebLogic Server & IBM WebSphere Application Server environments:

  • The OpenPages application
  • Fujitsu Interstage Business Process Manager (BPM) 10.1
  • IBM Cognos 10.2
  • OpenPages CommandCenter
  • JRE 1.6 or greater

If your OpenPages product includes the Oracle WebLogic Server:

  • Oracle WebLogic Server 10.3.2

If your OpenPages product includes the Oracle Database:

  • Oracle Database Server 11g Release 2 (11.2.0.1) Standard Edition with the October 2010 CPU patch (on the database server system)
  • Oracle Database Client 11g Release 2 (11.2.0.1), 64-bit, with the October 2010 CPU patch applied (on an application server system)
  • Oracle Database Client 11g Release 2 (11.2.0.1), 32-bit, with the October 2010 CPU patch applied (on a reporting server system)

 Thanks!

IBM OpenPages Start-up

In the beginning…

OpenPages was a company “born” in Massachusetts, providing Governance, Risk, and Compliance software and services to customers. Founded in 1996, OpenPages had more than 200 customers worldwide, including Barclays, Duke Energy, and TIAA-CREF. On October 21, 2010, OpenPages was officially acquired by IBM:

http://www-03.ibm.com/press/us/en/pressrelease/32808.wss

What is it?

OpenPages provides a technology driven way of understanding the full scope of risk an organization faces. In most cases, there is extreme fragmentation of a company’s risk information – like data collected and maintained in numerous disparate spreadsheets – making aggregation of the risks faced by a company extremely difficult and unmanageable.

Key Features

IBM’s OpenPages GRC Platform can help by providing many capabilities to simplify and centralize compliance and risk management activities. The key features include:

  • Provides a shared content repository that can (logically) present processes, risks, and controls in many-to-many and shared relationships.
  • Supports the import of corporate data and maintains an audit trail, ensuring consistent regulatory enforcement and monitoring across multiple regulations.
  • Supports dynamic decision making with its CommandCenter interface, which provides interactive, real-time executive dashboards and reports with drill-down.
  • Is simple to configure and localize, with detailed user-specific tasks and actions accessible from a personal browser-based home page.
  • Automates workflow for management assessments, process design reviews, control testing, issue remediation, and sign-offs and certifications.
  • Utilizes web services for integration: the OpenPages OpenAccess API interoperates with leading third-party applications to enhance policies and procedures with actual business data.

Understanding the Topology

The OpenPages GRC Platform consists of the following 3 components:

  • 1 database server
  • 1 or more application servers
  • 1 or more reporting servers

Database Server

The database is the centralized repository for metadata, (versions of) application data, and access control. OpenPages requires a set of database users and a tablespace (referred to as the “OpenPages database schema”). These database components install automatically during the OpenPages application installation, configuring all of the required elements. You can use either Oracle or DB2 for your OpenPages GRC Platform repository.

 Application Server(s)

The application server is required to host the OpenPages applications. The application server runs the application modules, and includes the definition and administration of business metadata, UI views, user profiles, and user authorization.

 Reporting Server

The OpenPages CommandCenter is installed on the same computer as IBM Cognos BI and acts as the reporting server.

Next Steps

An excellent next step would be to visit the IBM site and review the available slides and whitepapers. After that, stay tuned to this blog!

Configuring Cognos TM1 Web with Cognos Security

Recently I completed upgrading a client’s IBM Cognos environment – both TM1 and BI. It was a “jump” from Cognos 8 to version 10.2, and TM1 9.5 to version 10.2.2. In this environment, we had multiple virtual servers (Cognos BI lives on one, TM1 on another, and the third is the gateway/web server).

Once the software was all installed and configured (using IBM Cognos Configuration and, yes, you still need to edit the TM1 configuration cfg file), we started the services and (it appeared) everything looked good. I spin through the desktop applications (Perspectives, Architect, etc.) and then go to the web browser, first to test TM1Web:

http://stingryweb:9510/tm1web/

The familiar page loads:

[Screenshot: TM1Web login page]

But when I enter my credentials, I get the following:

 

[Screenshot: TM1Web error message]

Go to Google

Since an installation and configuration is not something you do every day, a Google search reports that there are evidently two files that the installation placed on the web server that actually belong on the Cognos BI server. These files need to be located, edited, and then copied to the correct location for TM1Web to use IBM Cognos authentication security.

What files?

There are two files: an XML file (variables_TM1.xml.sample) and an HTML file (tm1web.html). These can be found on the server where you installed TM1Web – or can they? It turns out they are not found individually, but are included in zip files: Read the rest of this post »

Looping through files in a folder using ODI

On a recent project, I was faced with a requirement to scan the contents of a folder and load all the files into their respective staging tables. There were multiple file types – Customer file, Store file, Products file, Sales file, etc. Every day, we received zero to many files for each type of file. The information known at design time is the base file name for each file type and the format and frequency in which the files will arrive. The solution needed to be flexible so that it can handle multiple files in different formats arriving at any given frequency (daily/monthly/quarterly/yearly).

With that in mind, I created a master and detail table to store the file names and other information.

The FILE_MSTR table stores the metadata about the file type and has the following fields:

  • FILE_MSTR_ID: 1
  • FILE_BASE_NAME: data
  • FILE_EXT: .txt
  • FILE_FOLDER: /home/oracle/Desktop/aroy/files
  • FILE_FREQ: Daily
  • FILE_FORMAT: Text
  • FILE_TGT_NAME: SRC_DATA.txt
  • CREATE_DT: 7/30/2014
  • UPDATE_DT: 7/30/2014
  • FILE_ORDER: 1
  • FILE_SERVER: ftp.***.com
  • FTP_USER_ID: ftpuser
  • FTP_PWD: ****
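
As a rough sketch only (the column data types below are my assumptions; the original post does not list them), the master table could be created along these lines:

    CREATE TABLE FILE_MSTR (
        FILE_MSTR_ID    NUMBER         PRIMARY KEY,  -- surrogate key for the file type
        FILE_BASE_NAME  VARCHAR2(100),               -- base file name known at design time
        FILE_EXT        VARCHAR2(10),                -- e.g. .txt
        FILE_FOLDER     VARCHAR2(400),               -- folder scanned for incoming files
        FILE_FREQ       VARCHAR2(20),                -- Daily / Monthly / Quarterly / Yearly
        FILE_FORMAT     VARCHAR2(20),                -- e.g. Text
        FILE_TGT_NAME   VARCHAR2(100),               -- target file name used by the staging load
        CREATE_DT       DATE,
        UPDATE_DT       DATE,
        FILE_ORDER      NUMBER,                      -- load order across file types
        FILE_SERVER     VARCHAR2(100),               -- FTP server hosting the incoming files
        FTP_USER_ID     VARCHAR2(50),
        FTP_PWD         VARCHAR2(50)
    );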

Read the rest of this post »

Disruptive Scalability

The personal computer, the internet, digital music players (think iPods), smart phones, and tablets are just a few of the disruptive technologies that have become commonplace in our lifetime. What is consistent about these technology disruptions is that they all have changed the way we work, live, and play. Whole industries have grown up around these technologies. Can you imagine a major corporation being competitive in today’s world without personal computers?

Big Data is another disruptive technology. Big Data is spawning its own industry with hundreds of startups, and every major technology vendor seems to have a “Big Data offering.” Soon, companies will need to leverage Big Data to stay competitive. The Big Data technology disruption in an enterprise’s data architecture is significant. How we source, integrate, process, analyze, manage, and deliver data will evolve and change. Big Data truly is changing everything! Over the next few weeks I will be focusing my blogging on how Big Data is changing our enterprise information architecture. Big Data’s effect on MDM, data integration, analytics, and overall data architecture will be covered. Stay tuned!

Perficient takes Cognos TM1 to the Cloud

IBM Cognos TM1 is well-known as the planning, analysis, and forecasting software that delivers flexible solutions to address requirements across an enterprise, as well as provide real-time analytics, reporting, and what-if scenario modeling. Perficient, in turn, is well-known for delivering expertly designed TM1-based solutions.

Analytic Projects

Perhaps phase zero of a typical analytics project would involve our topology experts determining the exact server environment required to support the implementation of a number of TM1 servers (based not only upon industry-proven practices, but our own breadth of practical “in the field” experience). Next would be the procurement and configuration of said environment (and prerequisite software), and finally the installation of Cognos TM1.

It doesn’t stop there

As TM1 development begins, our engineers work closely with internal staff not only to outline processes for the (application and performance) testing and deployment of developed TM1 models, but also to establish a maintainable support structure for after the “go live” date. “Support” includes not only the administration of the developed TM1 application but also the “road map” for assigning responsibilities such as:

  • Hardware monitoring and administration
  • Software upgrades
  • Expansion or reconfiguration based upon additional requirements (e.g., data or user-base changes, or additional functionality or enhancements to deployed models)
  • And so on…

Teaming Up

Earlier this year the Perficient analytics team teamed up with the IBM Cloud team to offer an interesting alternative to the “typical”: Cognos TM1 as a service in the cloud.

Using our internal TM1 models and colleagues literally all over the country, we evaluated and tested the viability of a fully cloud based TM1 solution.

What we found was that it works, and works well, offering unique advantages to our customers:

  • Lowers the “cost of entry” (getting TM1 deployed)
  • Lowers the total cost of ownership (ongoing “care and feeding”)
  • Reduces the level of capital expenditures (doesn’t require the procurement of internal hardware)
  • Reduces IT involvement (and therefore expense)
  • Removes the need to plan for, manage and execute upgrades when newer releases are available (new features are available sooner)
  • (Licensed) users anywhere in the world have access from day 1 (regardless of internal constraints)
  • Provides for the availability of auxiliary environments for development and testing (without additional procurement and support)

In the field

Once we were intimate with all of the “ins and outs” of TM1 10.2 on a cloud platform, we were able to work directly with IBM to demonstrate how a cloud-based solution would address the specific needs of one of our larger customers. After that, the Perficient team “on the ground” developed and deployed a proof of concept using real customer data, and partnered with the customer for the “hands on” evaluation and testing. Once the results were in, it was unanimous: “full speed ahead!”

A Versatile platform

During the project life cycle, the cloud environment was seamless, allowing Perficient developers to work (at the client site or remotely) and complete all necessary tasks without issue. The IBM cloud team was available (24/7) to analyze any perceived bottlenecks and, when required, to “tweak” things per the Perficient team’s suggestions, ensuring an accurately configured cloud and a successful, on-time solution delivery.

Bottom Line

Built upon our internal team’s experience and IBM’s support, our delivered cloud-based solution is robust, cutting edge, and highly scalable.

Major takeaways

Even given everyone’s extremely high expectations, the project team was delighted and reported back the following major takeaways from the experience:

  • There is no “hardware administration” to worry about
  • No software installation headaches to hold things up!
  • The cloud provided an accurately configured VM – including dedicated RAM and CPU sized exactly upon the needs of the solution.
  • The application was easily accessible, yet also very secure.
  • Everything was “powerfully fast” – we did not experience any “WAN effects”.
  • 24/7 support provided by the IBM cloud team was “stellar”
  • The managed RAM and “no limits” CPUs set things up to take full advantage of features like TM1’s MTQ.
  • The users could choose a completely web-based experience or install CAFE on their machines.

In addition, IBM Concert (provided as part of the cloud experience) is, to quote the team, a “wonderful tool for our user community to combine both TM1 & BI to create intuitive workflows and custom dashboards”.

More to Come

To be sure, you’ll be hearing much more about Concert & Cognos in the cloud and when you do, you can count on the Perficient team for expert delivery.

How to Report on Employee Utilization in OBIEE?

One of the common HR reporting needs is to determine the Utilization and Availability of employees. These metrics may also be studied at a higher level. For example, checking Workforce Utilization Percentages across a company’s different organizations provides insight into how overstaffed or understaffed each organization is. This blog describes an OBIEE design methodology to support such reporting requirements.

A quick functional overview of how Utilization is calculated

While Utilization % tells how much actual work an employee has completed compared to their overall capacity, Availability indicates the remaining time during which an employee has been inactive or non-utilizable. For example, if someone’s Utilization is 80%, their Availability is 20% (100 – 80).

Utilization is defined as the ratio of Hours Worked over Capacity. Hours Worked is a function of the actual hours entered on a timecard throughout an employee’s workweek. And there may be several variations of what defines Hours Worked depending on the organization’s specific definition of the type of timecard hours that are utilizable. For instance, a consulting firm may include billable hours to a client as utilizable, but not hours spent on non-billable categories such as bench time and vacations. Capacity is typically a standard number of hours an employee is expected to work irrespective of what gets entered on timesheets. For example, an employee who works 8 hour workdays has a capacity of 40 hours a week, whereas a part-time employee who works 3 days a week has a capacity of 24. Capacity usually excludes standard holiday hours as such hours are not expected to be utilizable in the first place.

Following is a summary of the key metrics:

Utilization % = 100 x Hours Worked / Capacity

Availability % = 100 – Utilization %

Hours Worked: Timecard Hours that are considered utilizable

Capacity: Standard Work Schedule Hours – Standard Holiday Hours
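
As a quick worked example (the numbers are hypothetical): a full-time consultant on a 40-hour standard work schedule, with no standard holidays that week, who records 34 utilizable (billable) hours would have:

Capacity = 40 – 0 = 40 hours

Utilization % = 100 x 34 / 40 = 85%

Availability % = 100 – 85 = 15%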

 

Data Model

No matter what transactional system your data is sourced from, Hours Worked and Capacity are most likely going to be stored in different tables in that system. For example, in Oracle E-Business Suite, Hours Worked is sourced from the Oracle Time and Labor timecard tables, whereas Capacity is sourced from the HR assignment tables that associate employees with their corresponding work schedules and holiday calendars.

In my data warehouse model that supports Utilization calculations, I use two facts: a Timecard Fact and a Capacity Fact. Not all the dimensions in the two star schemas are conforming. For example, the Timecard Fact has dimensions that describe the type of hours – whether they are billable or not, vacation hours or project hours, work hours that were performed onsite or remote, etc. Such attributes of a timecard are not relevant when we talk about capacity facts. For this reason, if we were to store both metrics (Hours Worked and Capacity Hours) in the same fact table, we would end up with an incorrect capacity, as it doesn’t relate to all the timecard dimensions. Following is my schema for both stars, where Project, Task, and Time Entry Status are non-conforming dimensions:

[Figure: Timecard Fact and Capacity Fact star schemas]

 

OBIEE Design

In the RPD business model layer, I built three logical facts, and the same facts are made available in the Presentation layer:

  1. Timecard Fact: Sourced from the timecard OLAP fact table
  2. Capacity Fact: Sourced from the capacity OLAP fact table
  3. Utilization Fact: This fact has no physical data sources, as all of its metrics are derived from the other two logical facts (a sketch of such a derived expression follows this list).
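
As an illustration only (the business model, table, and column names here are my own placeholders, not the actual RPD names from the post), the Utilization % metric on the logical Utilization Fact could be defined as a derived logical column that references the other two logical facts:

    100 * "HR"."Timecard Fact"."Hours Worked" / "HR"."Capacity Fact"."Capacity Hours"

Because the expression is built from the two already-aggregated logical metrics, OBIEE can compute it at whatever conformed grain the report requests.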

[Figure: logical facts in the RPD Business Model and Mapping layer]

I am now able to build a simple trend report that shows utilization broken down by Organization. Such a report is straightforward to build since both the Time and Organization dimensions are conforming between both facts: Timecard and Capacity.

[Figure: utilization trend report by Organization]
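
To make the mechanics concrete, here is a rough sketch of the kind of SQL that answers the same question directly against the warehouse. All table and column names below are hypothetical; roughly speaking, OBIEE generates equivalent logic itself by aggregating each fact at the conformed Organization/Month grain and then stitching the two result sets together:

    SELECT tc.organization_name,
           tc.period_month,
           100 * tc.hours_worked / cap.capacity_hours          AS utilization_pct,
           100 - 100 * tc.hours_worked / cap.capacity_hours    AS availability_pct
    FROM (
           -- Hours Worked aggregated at the conformed grain
           SELECT o.organization_name, t.period_month,
                  SUM(f.hours_worked) AS hours_worked
           FROM   timecard_fact f
                  JOIN organization_dim o ON o.org_wid  = f.org_wid
                  JOIN time_dim         t ON t.time_wid = f.time_wid
           GROUP BY o.organization_name, t.period_month
         ) tc
    JOIN (
           -- Capacity aggregated at the same grain
           SELECT o.organization_name, t.period_month,
                  SUM(f.capacity_hours) AS capacity_hours
           FROM   capacity_fact f
                  JOIN organization_dim o ON o.org_wid  = f.org_wid
                  JOIN time_dim         t ON t.time_wid = f.time_wid
           GROUP BY o.organization_name, t.period_month
         ) cap
      ON  cap.organization_name = tc.organization_name
      AND cap.period_month      = tc.period_month
    ORDER BY tc.organization_name, tc.period_month;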

A more advanced reporting requirement may ask for utilization to be dynamically re-calculated in the report based on additional prompts on dimensions like Time Entry Status, Project or Task. These dimensions are not conforming and therefore cannot be added as prompts in the typical way. If interested in adding dynamic prompting on timecard-specific dimensions, you can see an example of how that is possible by referring to my other blog: OBIEE Prompting on Non-Conforming Dimensions.

Exercising IBM Cognos Framework Manager

In Framework Manager, an expression is any combination of operators, constants, functions, and other components that evaluates to a single value. You can build expressions to create calculation and filter definitions. A calculation is an expression that you use to create a new value from existing values contained within a data item. A filter is an expression that you use to retrieve a specific subset of records. Let’s walk through a few simple examples:

Using a Session Parameter

I’ve talked about session parameters in Framework Manager in a previous post (a session parameter is a variable that IBM Cognos Framework Manager associates with a session – for example, user ID and preferred language – and you can also create your own).

It doesn’t matter if you use a default session parameter or one you’ve created, it’s easy to include a session parameter in your Framework Manager Meta Model.

Here is an example.

In a Query Subject (a query subject is a set of query items that have a relationship and are used to optimize the data being retrieved for reporting), you can click on the Calculations tab and then click Add.

Framework Manager shows the Calculation Definition dialog where you can view and select from the Available Components to create a new Calculation. The Components are separated into 3 types – Model, Functions and Parameters.

I clicked on Parameters and then expanded Session Parameters. Here FM lists all of the default parameters as well as any I’ve created. I selected current_timestamp to add as my Expression definition (note: FM wraps the expression with the # character to indicate that it’s a macro that will be resolved at runtime).

During some additional experimentation I found:

  • You can add a reasonable name for your calculation.
  • You may have to (or want to) nest functions within the expression statement (e.g., I’ve added the function “sq” as an example; this function wraps the returned value in single quotes – see the example expression after this list). Hint: the more functions you nest, the slower the performance, so think it through.
  • If you’ve got the expression correct (the syntax, anyway), the blue Run arrow lights up and you can test the expression and view the results in the lower right-hand pane of the dialog. Tips will show you errors; Results will show the runtime result of your expression.
  • Finally, you can click OK to save your calculation expression with your Query Subject.
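
Putting the pieces together, the finished expression definition would look something like the following (my reconstruction based on the steps above, not a transcript of the screenshot):

    #sq($current_timestamp)#

The macro (everything between the # characters) is resolved first, so each row of the query subject returns the run-time timestamp as a quoted string.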

[Screenshot: Calculation Definition dialog]

Filtering

Filtering works the same way as calculations. In my example I’m dealing with parts and inventories. If I’d like to create a query subject that perhaps lists only part numbers with a current inventory count of 5 or less, I can set a filter by clicking on the Filter tab and then Add (just like we just did for the calculation).

This time I can select the column InventoryCount from the Model tab and add it as my Expression definition. From there I can grab the “less than or equal to” operator (you can type it directly or select it from the Function list).
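
The resulting filter expression would look something like the following; the namespace and query subject names in the path are illustrative, since they depend on your own model:

    [Inventory].[PartInventory].[InventoryCount] <= 5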

[Screenshot: Filter definition dialog]

Filter works the same as Calculation as far as syntax and tips (but it does not give you a chance to preview your result or the effect of your filter).

Click OK to save your filter.

JOIN ME

Finally, my inventory report is based upon the SQL table named PartInventory, which only provides a part number and an inventory count. I’d like to add part descriptions (which are in a table named simply “Part”) to my report, so I click on the SQL tab and create a simple join query (joining the tables using PartNo):
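
The join SQL would look roughly like this; the description column name (PartName) is an assumption on my part, since the post only tells us that the tables are joined on PartNo:

    SELECT pi.PartNo,
           p.PartName,          -- assumed name of the description column in Part
           pi.InventoryCount
    FROM   PartInventory pi
           JOIN Part p ON p.PartNo = pi.PartNo

Framework Manager layers the calculation and filter defined earlier on top of this query subject, which is why the test results below still show the timestamp column and only the low-inventory parts.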

[Screenshot: SQL join defined on the Query Subject’s SQL tab]

To make sure everything looks right, I can click on the tab named Test and then click Test Sample.

You can see that there is a part name for each part number, the Time Stamp session parameter calculation is displayed for each record, and only those parts in the database where the inventory count is 5 or less are returned:

[Screenshot: Test Sample results]

By the way, back on the SQL tab, you can:

  • Clear everything (and start over)
  • Enter or Modify SQL directly (remember to click the Validate button to test your code)
  • Insert an additional data source into your Query subject to include data from another source, perhaps an entirely different SQL database.
  • Insert a macro. For example, you can add inline macro functions to your SQL query.

Here is an example:

#$Corvette_Year_Grouping{$CarYear}#

Notice the # character to indicate the code within is a function to be resolved within the SQL query.

This code uses a parameter map (I’ve blogged about parameter maps in the past) to convert a session parameter (set to a particular vehicle model year) into the name of a particular SQL table column (and include that column of information in my query subject result). In other words, the database table column included in the query result will be decided at run time.
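
To illustrate how the resolution works (the map entries, table, and column names here are invented for the example): suppose the CarYear session parameter is set to '1967' and the Corvette_Year_Grouping parameter map contains the entry '1967' -> 'SALES_1967'. A query subject defined as

    SELECT PartNo,
           #$Corvette_Year_Grouping{$CarYear}#   -- macro resolved before the SQL is sent to the database
    FROM   CorvetteSales

would then be executed at run time as

    SELECT PartNo,
           SALES_1967
    FROM   CorvetteSales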

[Screenshot: query subject SQL using the parameter map macro]

And our result:

[Screenshot: query result with the run-time column]

You can see that these are simple but thought-provoking examples of the power of IBM Cognos Framework Manager.

Framework Manager is a metadata modeling tool that drives query generation for Cognos BI reporting. Every reporting project should begin with a solid meta model to ensure success. More to come…

OBIEE Prompting on Non-Conformed Dimensions

A report that uses multiple facts may be prompted on dimensions that are not necessarily conformed across all the facts. At first one may think such functionality is not valid. This posting demonstrates how such reporting requirements are common and are achievable in OBIEE, though not in a very straightforward manner.

It is a basic OBIEE reporting concept that a report using metrics from more than one fact requires all the dimensional columns to be conformed across the facts used in the report. In other words, it makes no sense to look at a side-by-side comparison of revenue and cost by product if the cost information is not available by product to start with. However, it is a valid question to ask how revenue generated from certain products compares to the overall cost. Requirements like this usually have us facing the problem of developing a report that sources data from two facts: a revenue fact supporting a product dimension, and a cost fact that does not support the product dimension. At first one may be tempted to respond to the requester that a report like this is not possible, since we are dealing with a “multiple facts and a non-conforming dimension” situation. But a closer look reveals that such requirements are completely valid from a functional perspective and therefore should be doable. The problem that remains, though, is that prompting a report on a non-conforming dimension leaves OBIEE at a loss as to how to aggregate a metric along a dimension it is not linked to.  Read the rest of this post »