Perficient Enterprise Information Solutions Blog


Implementing Cognos ICM at Perficient


Defining the Problem

For any growing organization with a good-sized sales team compensated through incentives for deals and revenue, calculating payments becomes a bigger and bigger challenge. Like many organizations, Perficient handled this problem with Excel spreadsheets, long hours, and Excedrin. Our sales team is close to a hundred strong and growing 10% each year. To help reward activities aligned to our business goals and spur sales that move the company in its strategic direction, the Perficient sales plans are becoming more granular and targeted. Our propensity to acquire new companies jolts the sales team's size and introduces new plans, products, customers, and territories. With Excel, it is almost impossible, without a Herculean effort, to identify whether prior plan changes had the desired effect or what proposed plan changes might cost. With literally hundreds of spreadsheets being produced each month, the opportunity to introduce errors is significant. Consequently, executives, general managers, sales directors, business developers, and accountants spend hundreds if not thousands of hours each month validating, checking, and correcting problems. The risks involved in using Excel are significant, with an increased likelihood of rising costs for no benefit and limited ability to model alternative compensation scenarios.

Choosing Cognos Incentive Compensation Management (ICM)

While there are many tools on the market, the choice to use Cognos ICM was relatively simple. Once we had outlined the benefits and capabilities of the tool, our executive team was on board.

Cognos ICM is a proven tool, having been around for a number of years; it was formerly known as Varicent, before Varicent's acquisition by IBM. The features of the tool that really make sense for Perficient are numerous. The calculation engine is fast and flexible, allowing any type of complexity and exception to be handled with ease, and allowing reports and commission statements to open virtually instantaneously. The data handling and integration capabilities are excellent, allowing the use of virtually any type of data from any system; in our case, we are consuming data from our ERP, CRM, and HR systems, along with many other files and spreadsheets. Cognos ICM's hierarchy management capabilities allow us to manage sales-team, reporting, and approval hierarchies with ease. User and payee management with permissions and security comes bundled with the tool and will allow integration with external authentication tools. From a process point of view, workflow and scheduling are built in and can be leveraged to simplify the administration of the incentive compensation calculation and payment processes. Finally, the audit module tracks everything that goes on in the system, from user activity, to process and calculation timing, to errors that occur.

Perficient is one of a few elite IBM Premier Business Partners. As the Business Analytics business unit within Perficient, we have a history of implementing IBM's Business Analytics tools not only for our clients but also for ourselves. We have implemented Cognos TM1 as a time and expense management system from which we could generate invoices, feed payroll, and pay expenses directly. We use Cognos Business Intelligence (BI) to generate utilization and bonus tracking reports for our consultants. We feel it essential that we not only implement solutions for our clients but also eat our own dog food, if you will.

Implementation and Timeline

Once we made the decision to implement and the budget had been approved, we decided on a waterfall-based lifecycle to drive the project. The reason for this selection has to do with our implementation team's availability: as a consulting organization, the need to pull consultants into client engagements is absolute. We are also geographically dispersed, so co-location with the business users was not an option. Having discrete phases that could be handed off from resource to resource was a must. As is typical with most waterfall projects, we implemented Cognos ICM in four major phases: requirements, design, development, and testing.

During the requirements phase, we broke down what we do today and layered on what we wanted to do tomorrow. The output of the requirements phase was the Requirements Document, with narrative and matrix-style requirements.

Our design approach was to use the Cognos ICM Attributes Approach best practices developed by IBM. Rather than blindly following IBM’s prescribed methodology, we adopted the components that fit and discarded those that did not. The output of our design phase was a detailed design document that was ready for direct implementation in the tool.

The development phase had three distinct flavors. The first was data integration, where we sourced, prepared, and loaded the incoming data; our goal was to load as much data as possible without forcing manual intervention. The second was calculation development, where we built the calculations for hierarchies, quota, crediting, attainment, and payout; this is where the ICM logic resides and feeds into the compensation statements and reports. The last was reporting, which included the development of the commission statements, analytical reports, and the file sent to payroll.
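
To make the calculation flavor concrete, here is a minimal sketch, in plain Python rather than ICM's own calculation components, of how quota attainment could feed a tiered payout. The tier split, the 1.5x accelerator, and the field names are illustrative assumptions, not our actual plan logic.

    # Hypothetical sketch of the "calculation" flavor: attainment feeding a tiered payout.
    # Tier rates and the example numbers are assumptions for illustration only.

    def attainment(credited_revenue: float, quota: float) -> float:
        """Attainment as a fraction of quota (0.0 when no quota is assigned)."""
        return credited_revenue / quota if quota else 0.0

    def tiered_payout(credited_revenue: float, quota: float, target_incentive: float) -> float:
        """Pay the base rate up to 100% of quota, then an accelerated rate above it."""
        att = attainment(credited_revenue, quota)
        base = min(att, 1.0) * target_incentive                      # up to 100% of quota
        accelerator = max(att - 1.0, 0.0) * target_incentive * 1.5   # portion above quota
        return round(base + accelerator, 2)

    if __name__ == "__main__":
        # A business developer with a $2M quota, $2.3M credited, $50K target incentive
        print(tiered_payout(2_300_000, 2_000_000, 50_000))  # 61250.0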

The testing phase had two components, one of system testing and one of user acceptance and parallel testing. Today we are in the midst of the parallel testing, ensuring that we mirror the current statements or know exactly why we have differences.

Already, we are defining enhancements and future uses of the system. We need new reports to support detailed review of compensation statements and to analyze the success of programs. We have new plans for different types of business developers and others in the organization with incentive compensation. We have new data sources to be integrated to allow prospective and booked projects to factor into the report set.

Our goal at the outset was to get to parallel testing in three months, assuming our resources were available full-time. Starting at the end of January and being in parallel test today got us close; we lost ground because client engagements took two of our resources, one part-time and one full-time. Targeting 90 days for an initial implementation is quite feasible.

Team

The most important people on our team were the accountants and sales plan designers. They are the ones who know the ins and outs of the current plan and all the exceptions that apply. Going forward, they are the people who will continue to administer the plans and system. We also identified a secondary group of VIPs: business developers, managers, and executives, to be involved because they are on the sharp end of the ICM system.

Our implementation team consisted of three to four resources: a solution architect who drove the design and calculation development; a developer responsible for data integration and report development; a business analyst for requirements gathering and system testing; and a project manager who also moonlighted as a business analyst.

Benefits

We expect to receive many benefits from implementing Cognos ICM. We expect the accuracy and consistency of our compensation statements to improve. Accenture, Deloitte, and Gartner estimate that variable compensation overpayments range from 2% to 8%; a company with $30M in incentive compensation will therefore overpay between $600,000 and $2,400,000 every year. During the development process we identified issues with the current commission statements that needed correction.

Using Cognos ICM will improve incentive compensation visibility and transparency. Our business developers can review their commission statements throughout the month to ensure they are credited for the correct transactions. They can quickly identify where they stand in terms of accounts receivable, for which they are penalized. The sales managers can see how their teams are doing and who needs assistance. Our management team can perform what-if analyses to understand plan changes.

Amongst the biggest benefits across the board will be time. Our business developers and general managers can reduce their shadow-accounting time. Our accounting team can reduce the time they spend on data integration and cleanup and on manually generating compensation statements, along with the time they spend resolving errors and issues.

Challenges

Going into this, we knew one of the problems we would face was resource availability. For a consulting company like Perficient this is a great problem to have: our Cognos ICM resources are engaged on client implementation projects. As the saying goes, the cobbler's children have no shoes.

The second challenge of implementing Cognos ICM is exceptions. For the most part, implementing an incentive compensation solution is simple, and the project sponsors will express a desire for it to be simpler still. Then all the exceptions that need to be handled come to light. We found a number of exceptions after beginning the project, but because of the power of Cognos ICM we were able to handle them and reduce the manual changes the accounting team needed to make.

The other challenge we faced was the data. The data coming out of our systems supports its original purpose but is often lacking for other uses. We needed to integrate and cleanse the data (processes the accounting team had previously performed manually) in order to have it flow through the ICM system. As we used the cloud version of Cognos ICM, we leveraged staging and intra-system imports to smooth the integration process.
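
As an illustration of the kind of cleansing involved, the sketch below (using pandas) normalizes a hypothetical ERP invoice extract before staging it for import; the column names and rules are assumptions, not our actual feed layout.

    # A minimal sketch of cleansing a raw transaction extract before staging it
    # for ICM. Column names ("invoice_date", "amount", "employee_id", "invoice_id")
    # are hypothetical.
    import pandas as pd

    def cleanse_transactions(raw: pd.DataFrame) -> pd.DataFrame:
        df = raw.copy()
        df.columns = [c.strip().lower() for c in df.columns]            # normalize headers
        df["invoice_date"] = pd.to_datetime(df["invoice_date"], errors="coerce")
        df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
        df = df.dropna(subset=["invoice_date", "amount"])               # drop unusable rows
        df["employee_id"] = df["employee_id"].astype(str).str.zfill(6)  # pad IDs consistently
        return df.drop_duplicates(subset=["invoice_id"])                # remove repeated rows

    # staged = cleanse_transactions(pd.read_csv("erp_invoices.csv"))
    # staged.to_csv("icm_staging_invoices.csv", index=False)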

Finding Out More

Perficient will have a booth at the IBM Vision 2015 conference, which will feature Cognos ICM heavily. I will be there and look forward to meeting with you if you plan on attending. If you’re at the event, stop by and chat for a while. You can also leave me a comment in the box below. I look forward to hearing from you.

Analytical Talent Gap

As new companies embark on digital transformation leveraging Big Data, key concerns and challenges get amplified, especially in the near term before the technology and talent pool supply adjusts to the demand. Looking at the earlier post, Big Data Challenges, the top three concerns were:

  1. Identifying the Business value/Monetizing the Big Data
  2. Setting up the Governance to manage Big Data
  3. Availability of skills

    [Chart: Big Data talent gap. Source: McKinsey]

Big Data Skills can be broadly classified into 4 categories:

  • Business / Industry Knowledge
  • Analytical Expertise
  • Big Data Architecture
  • Big Data Tools (Infrastructure management, Development)

Value creation, or monetizing Big Data (see Architecture needed to monetize API's), depends on business and analytical talent; as the chart above shows, the talent gap is most acute in the analytical area. Educating to close the talent shortage, and augmenting it through partner companies, is critical for a niche, must-have technology. As tools evolve, keeping up with the architecture becomes very important, since past tool and platform shortcomings are addressed with new complexities.

While businesses continue to search for Big Data gold, system integrators and product vendors are perfecting methods to shrink time to market through best practices and modern architecture. How much of the gap can be closed depends on multiple factors at companies and their partners.

See also our webinar on: Creating a Next-Generation Big Data Architecture

Think Better Business Intelligence


Everyone is guilty of falling into a rut and building reports the same way over and over again. This year, don't just churn out the same old reports; resolve to deliver better business intelligence. Think about what business intelligence means. Resolve, at least in your world, to make business intelligence about helping organizations improve business outcomes by making informed decisions. When the next report request lands on your desk, leave the tool of choice alone (Cognos, in my case) and think for a while. This even applies to those of you building your own reports in a self-service BI world.

Think about the business value. How will the user make better business decisions? Is the user trying to understand how to allocate capital? Is the user trying to improve patient care? Is the user trying to stem the loss of customers to a competitor? Is the user trying to find the right price point for their product? No matter what the ultimate objective, this gets you thinking like the business person and makes you realize the goal is not a report.

Think about the obstacles to getting the information. Is the existing report or system too slow? Is the data dirty or incorrect? Is the data too slow to arrive or too old to use? Is the existing system too arcane to use? You know the type: when the moon is full, stand on your left leg, squint, hit O-H-Ctrl-R-Alt-P, and the report comes out perfectly, if it doesn't time out. Think about it: if there were no obstacles, there would be no report request in your hands.

Think about the usage. Who is going to use the analysis? Where will they be using it? How will they get access to the reports? Can everyone see all the data or is some of it restricted? Are users allowed to share the data with others? How will the users interact with the data and information? When do the users need the information in their hands? How current does the data need to be? How often does the data need to be refreshed? How does the data have to interact with other systems? Thinking through the usage gives you a perspective beyond the parochial limits of your BI tool.

Think like Edward Tufte. What should the structure of the report look like? How would it look in black and white? What form should the presentation take? How should the objects be laid out? What visualizations should be used? And those are never pie charts. What components can be taken away without reducing the amount of information presented? What components can be added, in the same real estate, without littering, to improve the information provided? How can you minimize the clutter and maximize the information? Think about the flaws of write once, deliver anywhere, and the garish palettes many BI tools provide.

Think about performance. Is the user thinking instantaneous response? Is the user thinking get-a-cup-of-tea-and-come-back response time? Is the user okay kicking off a job and getting the results the next morning? If you find one of these, cherish them! They are hard to find these days. Will the user immediately select the next action, or do they require some think time? Is the data set a couple of structured transactional records, or is it a chunk of a big-data lake? Does the data set live in one homogenous source or across many heterogeneous sources? Thinking about performance early means you won't fall into a trap of missed expectations or an impossible implementation.

Think about data quality. It is a fact of life. How do you deal with and present missing data? How do you deal with incorrect values? How do you deal with out-of-bounds data? What is the cost of a decision made on bad data? What are the consequences of a decision made on incorrect data? What is the cost of perfect data? What is the value of better data? Thinking about quality before you start coding lets you find a balance between cost and value.
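
If it helps to make those questions concrete, here is a small, hypothetical Python sketch that profiles missing and out-of-bounds values in a report dataset before any report is built; the column names and bounds are assumptions, not a prescription.

    # Profile missing and out-of-bounds values so their cost can be discussed
    # before coding the report. Column names and bounds are illustrative.
    import pandas as pd

    def quality_profile(df: pd.DataFrame, bounds: dict) -> pd.DataFrame:
        """Return one row per column with missing and out-of-bounds counts."""
        rows = []
        for col, (lo, hi) in bounds.items():
            series = pd.to_numeric(df[col], errors="coerce")
            rows.append({
                "column": col,
                "missing": int(series.isna().sum()),
                "out_of_bounds": int(((series < lo) | (series > hi)).sum()),
            })
        return pd.DataFrame(rows)

    # Example: revenue should be non-negative; discount_pct between 0 and 100.
    # print(quality_profile(sales_df, {"revenue": (0, 1e9), "discount_pct": (0, 100)}))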

Think about maintenance. Who is going to be responsible for modifications and changes? You know they are going to be needed. As good as you are, you won't get everything right. Is it better to quickly replicate a report multiple times and change the filters, or is it better to spend some extra time and use parameters and conditional code to have a single report serve many purposes? Is it better to use platform-specific outputs, or is it better to use a "hybrid" solution and support every output format from a single build? Are the reports expected to be viable in 10 years, or will they be redone in 10 weeks? Thinking through the maintenance needs will let you invest your time in the right areas.

Think you are ready to build? Think again. Think through your tool set's capabilities and match them to your needs. Think through your users' skills and match them to the tools. Think about your support team and let them know what you need. Think through your design and make sure it is viable.

Here’s to thinking better Business Intelligence throughout the year.

 

DevOps Considerations for Big Data

Big Data is on everyone's mind these days. Creating an analytical environment involving Big Data technologies is exciting and complex: new technology, new ways of looking at data that otherwise remained dark or unavailable. The exciting part of implementing a Big Data solution is making it production ready.

Once the enterprise comes to rely on the solution, dealing with typical production issues is a must. Expanding the data lakes and creating multiple applications that access and change data and deploy new statistical learning solutions can hit overall platform performance. In the end, user experience and trust will become an issue if the environment is not managed properly. Models that used to run in minutes may take hours or days, depending on the data changes and algorithm changes deployed. Having the right DevOps process framework is important to the success of Big Data solutions.

In many organizations the Data Scientist reports to the business and not to IT. Knowing the business and technological requirements and setting up the DevOps process is key to making the solutions production ready.

Key DevOps Measures for a Big Data environment (a minimal tracking sketch follows the list):

  • Data acquisition performance (ingestion to creating a useful data set)
  • Model execution performance (Analytics creation)
  • Modeling platform / Tool performance
  • Software change impacts (upgrades and patches)
  • Development to Production –  Deployment Performance (Application changes)
  • Service SLA Performance (incidents, outages)
  • Security robustness / compliance
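
As a minimal illustration of tracking the first measure above, the hypothetical Python sketch below times a data-acquisition step and appends the result to a simple metrics log; the step names and log format are assumptions, not any product's API.

    # Time a pipeline step and append the elapsed seconds to a CSV metrics log,
    # so data acquisition performance can be trended over time. Names are illustrative.
    import csv, time
    from contextlib import contextmanager
    from datetime import datetime, timezone

    @contextmanager
    def timed_step(step_name: str, log_path: str = "devops_metrics.csv"):
        start = time.perf_counter()
        try:
            yield
        finally:
            elapsed = time.perf_counter() - start
            with open(log_path, "a", newline="") as f:
                csv.writer(f).writerow(
                    [datetime.now(timezone.utc).isoformat(), step_name, f"{elapsed:.2f}"]
                )

    # with timed_step("ingest_clickstream_to_useful_dataset"):
    #     run_ingestion()   # placeholder for the actual acquisition job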

 

One of the top issues is Big Data security. How secure is the data, and who has access to and oversight of it? Putting together a governance framework to manage the data is vital for the overall health and compliance of Big Data solutions. Big Data is just getting traction, and many of the best practices for Big Data DevOps scenarios are yet to mature.

Cloud BI use cases

Cloud BI comes in different forms and shapes, ranging from visualization alone to a full-blown EDW combined with visualization and predictive analytics. The truth of the matter is that every niche product vendor offers some unique feature that other product suites do not. In most cases you almost always need more than one BI suite to meet all the needs of the enterprise.

De-centralization definitely helps the business achieve agility and respond to market challenges quickly. By the same token, that is how companies may end up with silos of information across the enterprise.

Let us look at some scenarios where a cloud BI solution is very attractive to Departmental use.

Time to Market

Getting the business case built and approved for big CapEx projects is a time-consuming proposition. Wait times for hardware and software and for IT involvement mean much longer delays in scheduling the project, not to mention the push back to use the existing reports or to wait for the next release, which is allegedly forever just around the corner.

 

Deployment Delays

Business users have an immediate need for analysis and decision-making. Typical turnaround for IT to bring in new sources of data is anywhere between 90 and 180 days. This is an absolute killer for a business that wants the data now for analysis; spreadsheets are still the top BI tool for just this reason. With Cloud BI (not just the tool), business users get not only the visualization and other product features but also data that is not otherwise available. Customer analytics with social media analysis are available as a third-party BI solution, and in the case of such value-added analytics there is a business reason to go with these solutions.

 

Tool Capabilities

Power users need ways to slice and dice the data and need to integrate other non-traditional sources (Excel, departmental cloud applications) to produce a combined analysis. Many BI tools come with lightweight integration (mostly push integration) to make this a reality without too much of an IT bottleneck.

So if we can add new capability without much delay and within a departmental budget, where is the rub?

The issue is not looking at enterprise information in a holistic way. Though speed is critical, it is equally important to engage governance and IT to secure the information and share it appropriately so it can be integrated into the enterprise data asset.

As we move into the future of cloud-based solutions, we will be able to solve many of these bottlenecks, but we will also have to deal with the security, compliance, and risk mitigation management of leaving the data in the cloud. Forging a strategy to meet the various BI demands of the enterprise with proper governance will yield the optimum use of resources and the right solution mix.

An Architectural Approach to Cognos TM1 Design

Over time, I've written about keeping your TM1 model design "architecturally pure". What this means is that you should strive to keep a model's "areas of functionality" distinct within your design.

Common Components

I believe that all TM1 applications, for example, are made up of only four distinct "areas of functionality". They are absorption (of key information from external data sources), configuration (of assumptions about the absorbed data), calculation (where the specific "magic" happens; i.e. business logic is applied to the source data using the set assumptions), and consumption (of the information processed by the application, which is ready to be reported on).
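
To illustrate the separation (not in TM1 rule syntax, just a hypothetical Python sketch), each area below is its own small component, so any one of them can be changed or reused without touching the others:

    # A language-neutral sketch of keeping the four areas of functionality distinct.
    # Names and sample values are illustrative only.

    def absorb(source_system: str) -> list[dict]:
        """Absorption: pull key information from an external data source."""
        return [{"account": "4000", "period": "2015-03", "amount": 125_000.0}]

    def configure() -> dict:
        """Configuration: assumptions applied to the absorbed data."""
        return {"fx_rate": 1.1, "growth_pct": 0.05}

    def calculate(rows: list[dict], assumptions: dict) -> list[dict]:
        """Calculation: the business logic, kept separate from load and reporting."""
        return [
            {**r, "amount_reporting_ccy": r["amount"] * assumptions["fx_rate"]}
            for r in rows
        ]

    def consume(rows: list[dict]) -> None:
        """Consumption: hand processed information to the reporting layer."""
        for r in rows:
            print(r)

    consume(calculate(absorb("ERP"), configure()))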

Some Advantages

Keeping functional areas distinct has many advantages:

  • Reduces complexity and increases sustainability within components
  • Reduces the possibility of one component negatively affecting another
  • Increases the probability of reuse of particular (distinct) components
  • Promotes a technology independent design; meaning components can be built using the technology that best fits their particular objective
  • Allows components to be designed, developed and supported by independent groups
  • Diminishes duplication of code, logic, data, etc.
  • Etc.

Resist the Urge

There is always a tendency to "jump in" and "do it all" using a single tool or technology or, in the case of Cognos TM1, a few enormous cubes; and today, with every release of software, there are new "package connectors" that allow you to directly connect (even external) system components. In addition, you may "understand the mechanics" of how a certain technology works, which will allow you to "build" something, but without comprehensive knowledge of architectural concepts you may end up with something that does not scale, has unacceptable performance, or is costly to sustain.

Final Thoughts

Some final thoughts:

  • Try white boarding the functional areas before writing any code
  • Once you have your “like areas” defined, search for already existing components that may meet your requirements
  • If you do decide to “build new”, try to find other potential users for the new functionality. Could you partner and co-produce (and thus share the costs) a component that you both can use?
  • Before building a new component, "try out" different technologies. Which best serves these components' objectives? (A rule of thumb: if you can find more than three other technologies or tools that fit your requirements better than the technology you planned to use, you're in trouble!)

And finally:

Always remember, just because you “can” doesn’t mean you “should”.

A Practice Vision

Vision

Most organizations today have had successes implementing technology, and they are happy to tell you about it. From a tactical perspective, they understand how to install, configure, and use whatever software you are interested in. They are "practitioners". But how many can bring a "strategic vision" to a project, or to your organization in general?

An "enterprise" or "strategic" vision is based upon an "evolutionary roadmap" that starts with the initial "evaluation and implementation" (of a technology or tool), continues with "building and using", and finally (hopefully) leads to the organization, optimization, and management of all of the earned knowledge (with the tool or technology). You should expect that whoever you partner with can explain what their practice vision or methodology is, or at least talk to the "phases" of the evolution process:

Evaluation and Implementation

The discovery and evaluation that takes place with any new tool or technology is the first phase of a practice's evolution. A practice should be able to explain how testing is accomplished and what it covers. How did they determine whether the tool or technology to be used will meet or exceed your organization's needs? Once a decision is made, are they practiced at the installation, configuration, and everything else that may be involved in deploying the new tool or technology for use?

Build, Use, Repeat

Once deployed, and "building and using" components with that tool or technology begins, the efficiency with which these components are developed, as well as their level of quality, will depend upon the level of experience (with the technology) that a practice possesses. Typically, "building and using" is repeated with each successful "build", so how many times has the practice successfully used this technology? By human nature, once a solution is "built" and seems correct and valuable, it will be saved and used again. Hopefully, this solution will have been shared as a "knowledge object" across the practice. Although most practices may actually reach this phase, it is not uncommon to find:

  • Objects with similar or duplicate functionality (they reinvented the wheel over and over).
  • Poor naming and filing of objects (no one but the creator knows it exists or perhaps what it does)
  • Objects not shared (objects visible only to specific groups or individuals, not the entire practice)
  • Objects that are obsolete or do not work properly or optimally are being used.
  • Etc.

Management & Optimization

At some point, usually while (or after a certain number of) solutions have been developed, a practice will mature its development or delivery process to the point that it will begin investing time, and perhaps dedicating resources, to organize, manage, and optimize its developed components (i.e. "organizational knowledge management", sometimes known as IP or intellectual property).

You should expect a practice to have a recognized practice leader and a "governing committee" to help identify and manage knowledge developed by the practice and to:

  • inventory and evaluate all known (and future) knowledge objects
  • establish appropriate naming standards and styles
  • establish appropriate development and delivery standards
  • create, implement and enforce a formal testing strategy
  • continually develop “the vision” for the practice (and perhaps the industry)

 

More

As I've mentioned, a practice needs to take a strategic or enterprise approach to how it develops and delivers, and to do this it must develop its "vision". A vision will ensure that the practice is leveraging its resources (and methodologies) to achieve the highest rate of success today and over time. This is not simply "administering the environment" or "managing the projects"; it involves structured thought, best practices, and a continued commitment to evolving improvement. What is your vision?

IBM OpenPages GRC Platform – modular methodology

The OpenPages GRC platform includes 5 main “operational modules”. These modules are each designed to address specific organizational needs around Governance, Risk, and Compliance.

Operational Risk Management module “ORM”

The Operational Risk Management module is a document and process management tool that includes a monitoring and decision support system, enabling an organization to analyze, manage, and mitigate risk simply and efficiently. The module automates the process of identifying, measuring, and monitoring operational risk by combining all risk data (such as risk and control self-assessments, loss events, scenario analysis, external losses, and key risk indicators (KRIs)) in a single place.

Financial Controls Management module “FCM”

The Financial Controls Management module reduces the time and resource costs associated with compliance for financial reporting regulations. This module combines document and process management with awesome interactive reporting capabilities in a flexible, adaptable, easy-to-use environment, enabling users to easily perform all the activities necessary for complying with financial reporting regulations.

Policy and Compliance Management module “PCM”

The Policy and Compliance Management module is an enterprise-level compliance management solution that reduces the cost and complexity of compliance with multiple regulatory mandates and corporate policies. This module enables companies to manage and monitor compliance activities through a full set of integrated functionality:

  • Regulatory Libraries & Change Management
  • Risk & Control Assessments
  • Policy Management, including Policy Creation, Review & Approval and Policy Awareness
  • Control Testing & Issue Remediation
  • Regulator Interaction Management
  • Incident Tracking
  • Key Performance Indicators
  • Reporting, monitoring, and analytics

IBM OpenPages IT Governance module “ITG”

This module aligns IT services, risks, and policies with corporate business initiatives, strategies, and operational standards, allowing internal IT controls and risks to be managed according to the business processes they support. In addition, this module unites "silos" of IT risk and compliance, delivering visibility, better decision support, and ultimately enhanced performance.

IBM OpenPages Internal Audit Management module “IAM”

This module provides internal auditors with a view into an organization's governance, risk, and compliance, affording the chance to supplement and coexist with broader risk and compliance management activities throughout the organization.

One Solution

The IBM OpenPages GRC Platform modules ("ORM", "FCM", "PCM", "ITG", and "IAM") and their shared object model together deliver a superior solution for Governance, Risk, and Compliance. More to come!

The Installation Process – IBM OpenPages GRC Platform

When preparing to deploy the OpenPages platform, you’ll need to follow these steps:

  1. Determine which server environment you will deploy to – Windows or AIX.
  2. Determine your topology – how many servers will you include as part of the environment? Multiple application servers? 1 or more reporting servers?
  3. Perform the installation of the OpenPages prerequisite software for the chosen environment -and for each server’s designed purpose (database, application or reporting).
  4. Perform the OpenPages installation, being conscious of the software that is installed as part of that process.

Topology

Depending upon your needs, you may find that you’ll want to use separate servers for your application, database and reporting servers. In addition, you may want to add additional application or reporting servers to your topology.

 

 

[Topology diagram showing database, application, and reporting servers]

After the topology is determined, you can use the following information to prepare your environment. I recommend clean installs, meaning starting with fresh or new machines; VMs are just fine ("The VMware performance on a virtualized system is comparable to native hardware. You can use the OpenPages hardware requirements for sizing VM environments" – IBM).

(Note: this is if you've chosen to go with Oracle rather than DB2.)

MS Windows Servers

All servers that will be part of the OpenPages environment must have the following installed before proceeding:

  • Microsoft Windows Server 2008 R2 and later Service Packs (64-bit operating system)
  • Microsoft Internet Explorer 7.0 (or 8.0 in Compatibility View mode)
  • A file compression utility, such as WinZip
  • A PDF reader (such as Adobe Acrobat)

The Database Server

In addition to the above “all servers” software, your database server will require the following software:

  • Oracle 11gR2 (11.2.0.1) and any higher Patch Set – the minimum requirement is Oracle 11.2.0.1 October 2010 Critical Patch Update.

Application Server(s)

Again, in addition to the above “all servers” software, the server that hosts the OpenPages application modules should have the following software installed:

  • JDK 1.6 or greater, 64-bit (Note: this is a prerequisite only if your OpenPages product does not include WebLogic Server.)
  • Application Server Software (one of the following two options):
      • IBM WebSphere Application Server ND 7.0.0.13 and any higher Fix Pack (Note: minimum requirement is WebSphere 7.0.0.13.)
      • Oracle WebLogic Server 10.3.2 and any higher Patch Set (Note: minimum requirement is Oracle WebLogic Server 10.3.2. This is a prerequisite only if your OpenPages product does not include Oracle WebLogic Server.)
  • Oracle Database Client 11gR2 (11.2.0.1) and any higher Patch Set

Reporting Server(s)

The server that you intend to host the OpenPages CommandCenter must have the following software installed (in addition to the above “all servers” software):

  • Microsoft Internet Information Services (IIS) 7.0 or Apache HTTP Server 2.2.14 or greater
  • Oracle Database Client 11g R2 (11.2.0.1) and any higher Patch Set

During the OpenPages Installation Process

As part of the OpenPages installation, the following is installed automatically:

 

For Oracle WebLogic Server & IBM WebSphere Application Server environments:

  • The OpenPages application
  • Fujitsu Interstage Business Process Manager (BPM) 10.1
  • IBM Cognos 10.2
  • OpenPages CommandCenter
  • JRE 1.6 or greater

If your OpenPages product includes the Oracle WebLogic Server:

  • Oracle WebLogic Server 10.3.2

If your OpenPages product includes the Oracle Database:

  • Oracle Database Server Oracle 11G Release 2 (11.2.0.1) Standard Edition with October 2010 CPU Patch (on a database server system)
  • Oracle Database Client 11g Release 2 (11.2.0.1) with October 2010 CPU Patch applied 64-bit (on an application server system)
  • Oracle Database Client 11g Release 2 (11.2.0.1) with October 2010 CPU Patch applied 32-bit (on a reporting server system)

 Thanks!

IBM OpenPages Start-up

In the beginning…

OpenPages was a company "born" in Massachusetts, providing Governance, Risk, and Compliance software and services to customers. Founded in 1996, OpenPages had more than 200 customers worldwide, including Barclays, Duke Energy, and TIAA-CREF. On October 21, 2010, OpenPages was officially acquired by IBM:

http://www-03.ibm.com/press/us/en/pressrelease/32808.wss

What is it?

OpenPages provides a technology driven way of understanding the full scope of risk an organization faces. In most cases, there is extreme fragmentation of a company’s risk information – like data collected and maintained in numerous disparate spreadsheets – making aggregation of the risks faced by a company extremely difficult and unmanageable.

Key Features

IBM’s OpenPages GRC Platform can help by providing many capabilities to simplify and centralize compliance and risk management activities. The key features include:

  • Provides a shared content repository that can (logically) present the processes, risks and controls in many-to-many and shared relationships.
  • Supports the import of corporate data and maintains an audit trail ensuring consistent regulatory enforcement and monitoring across multiple regulations.
  • Supports dynamic decision making with its CommandCenter interface, which provides interactive, real-time executive dashboards and reports with drill-down.
  • Is simple to configure and localize, with detailed user-specific tasks and actions accessible from a personal browser-based home page.
  • Provides workflow automation for management assessments, process design reviews, control testing, issue remediation, and sign-offs and certifications.
  • Utilizes web services for integration: the OpenPages OpenAccess API interoperates with leading third-party applications to enhance policies and procedures with actual business data.

Understanding the Topology

The OpenPages GRC Platform consists of the following 3 components:

  • 1 database server
  • 1 or more application servers
  • 1 or more reporting servers

Database Server

The database is the centralized repository for metadata, (versions of) application data, and access control. OpenPages requires a set of database users and a tablespace (referred to as the "OpenPages database schema"). These database components are installed automatically during the OpenPages application installation, which configures all of the required elements. You can use either Oracle or DB2 for your OpenPages GRC Platform repository.

 Application Server(s)

The application server is required to host the OpenPages applications. It runs the application modules and supports the definition and administration of business metadata, UI views, user profiles, and user authorization.

 Reporting Server

The OpenPages CommandCenter is installed on the same computer as IBM Cognos BI and acts as the reporting server.

Next Steps

An excellent next step would be to visit the IBM site and review the available slides and whitepapers. After that, stay tuned to this blog!