Perficient Enterprise Information Solutions Blog


Implementing Cognos ICM at Perficient

Image: "The Bait" by nist6dh on Flickr, Creative Commons Attribution-Share Alike 2.0 Generic License

Defining the Problem

For any growing organization with a good-sized sales team compensated through incentives for deals and revenue, calculating payments becomes a bigger and bigger challenge. Like many organizations, Perficient handled this problem with Excel spreadsheets, long hours, and Excedrin. Our sales team is close to a hundred strong and growing 10% each year. To reward activities aligned to our business goals and spur sales that move the company in its strategic direction, Perficient's sales plans are becoming more granular and targeted. Our propensity to acquire new companies jolts the sales team's size and introduces new plans, products, customers, and territories. With Excel, it is almost impossible, without a Herculean effort, to identify whether prior plan changes had the desired effect or what proposed changes might cost. With literally hundreds of spreadsheets being produced each month, the opportunity to introduce errors is significant. Consequently, executives, general managers, sales directors, business developers, and accountants spend hundreds if not thousands of hours each month validating, checking, and correcting problems. The risks involved in using Excel are significant, with an increased likelihood of rising costs for no benefit and a limited ability to model alternative compensation scenarios.

Choosing Cognos Incentive Compensation Management (ICM)

While there are many tools on the market, the choice to use Cognos ICM was relatively simple. Once we had outlined the benefits and capabilities of the tool, our executive team was onboard.

Cognos ICM is a proven tool, having been around for a number of years; it was formerly known as Varicent, before Varicent's acquisition by IBM. The features that really make sense for Perficient are numerous. The calculation engine is fast and flexible, allowing any type of complexity and exception to be handled with ease, and letting reports and commission statements open virtually instantaneously. The data handling and integration capabilities are excellent, allowing the use of virtually any type of data from any system; in our case, we consume data from our ERP, CRM, and HR systems, along with many other files and spreadsheets. Cognos ICM's hierarchy management capabilities allow us to manage sales team, reporting, and approval hierarchies with ease. User and payee management, permissions, and security come bundled with the tool, which also allows integration with external authentication tools. From a process point of view, workflow and scheduling are built in and can be leveraged to simplify the administration of the incentive compensation calculation and payment processes. Finally, the audit module tracks everything going on in the system, from user activity, to process and calculation timing, to errors that occur.

Perficient is one of a few elite IBM Premier Business Partners. As the Business Analytics business unit within Perficient, we have a history of implementing IBM's Business Analytics tools not only for our clients but also for ourselves. We have implemented Cognos TM1 as a time and expense management system from which we can generate invoices, feed payroll, and pay expenses directly. We use Cognos Business Intelligence (BI) to generate utilization and bonus tracking reports for our consultants. We feel it essential that we not only implement solutions for our clients but also eat our own dog food, if you will.

Implementation and Timeline

Once we made the decision to implement and the budget had been approved, we decided on a waterfall-based lifecycle to drive the project. The reason for this selection has to do with our implementation team's availability. As a consulting organization, the need to pull consultants into client engagements is absolute. We are also geographically dispersed, so co-location with the business users was not an option. Having discrete phases that could be handed from resource to resource was a must. As is typical with most waterfall projects, we implemented Cognos ICM in four major phases: requirements, design, development, and testing.

During the requirements phase, we broke down what we did today and layered that with what we wanted to do tomorrow. The output of the requirements phase was the Requirements Document with narrative and matrix style requirements.

Our design approach was to use the Cognos ICM Attributes Approach best practices developed by IBM. Rather than blindly following IBM’s prescribed methodology, we adopted the components that fit and discarded those that did not. The output of our design phase was a detailed design document that was ready for direct implementation in the tool.

The development phase had three distinct flavors. The first was data integration, where we sourced, prepared, and loaded the incoming data; our goal was to load as much data as possible without forcing manual intervention. The second was calculation development, where we built the calculations for hierarchies, quotas, crediting, attainment, and payout; this is where the ICM logic resides and feeds the compensation statements and reports. The last was reporting, which included the development of the commission statements, analytical reports, and the file sent to payroll.
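To illustrate the kind of logic the calculation development segment encodes, here is a toy payout sketch: a base rate on credited revenue up to quota, with an accelerator above quota. The rates and figures are invented for illustration and are not Perficient's actual plan.

```python
# Illustrative only: a toy tiered payout, not Perficient's actual plan.
def payout(credited_revenue, quota, base_rate=0.05, accelerator=0.08):
    """Pay base_rate on revenue up to quota, accelerator on revenue above it."""
    attainment = credited_revenue / quota
    below = min(credited_revenue, quota) * base_rate
    above = max(credited_revenue - quota, 0) * accelerator
    return attainment, below + above

attainment, commission = payout(credited_revenue=1_200_000, quota=1_000_000)
# attainment 1.2, commission approximately $66,000 ($50,000 base + $16,000 accelerated)
```

In a real plan this logic sits downstream of crediting (which transactions count for whom) and upstream of the commission statements and payroll file.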

The testing phase had two components, one of system testing and one of user acceptance and parallel testing. Today we are in the midst of the parallel testing, ensuring that we mirror the current statements or know exactly why we have differences.

Already, we are defining enhancements and future uses of the system. We need new reports to support detailed review of compensation statements and to analyze the success of programs. We have new plans for different types of business developers and others in the organization with incentive compensation. We have new data sources to be integrated to allow prospective and booked projects to feature into the report set.

Our goal at the outset was to reach parallel testing in three months, assuming our resources were available full-time. Starting at the end of January and being in parallel test today got us close; we fell short because client engagements pulled away two of our resources, one part-time and one full-time. Targeting 90 days for an initial implementation is quite feasible.


The most important people on our team were the accountants and sales plan designers. They are the ones who know the ins and outs of the current plan and all the exceptions that apply. Going forward, they are the people who will continue to administer the plans and the system. We also identified a secondary group of VIPs: business developers, managers, and executives, who are on the sharp end of the ICM system and need to be involved.

Our implementation team consisted of three to four resources. A solution architect who drove the design and calculation development. A developer who was responsible for data integration and report development. A business analyst for requirements gathering and system testing. A project manager who also moonlit as a business analyst.


We expect to receive many benefits from implementing Cognos ICM. We expect the accuracy and consistency of our compensation statements to improve. Accenture, Deloitte, and Gartner estimate that variable compensation overpayments range from 2% to 8%. A company with $30M in incentive compensation will therefore overpay between $600,000 and $2,400,000 every year. During the development process we identified issues with the current commission statements that needed correction.
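The overpayment range above is straightforward arithmetic; a quick sketch, using only the pool size and percentages from the estimate cited:

```python
# Apply the estimated 2%-8% overpayment range to a $30M incentive pool.
incentive_pool = 30_000_000
low, high = 0.02, 0.08
print(f"${incentive_pool * low:,.0f} to ${incentive_pool * high:,.0f}")
# prints "$600,000 to $2,400,000"
```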

Using Cognos ICM will improve incentive compensation visibility and transparency. Our business developers can review their commission statements throughout the month to ensure they are credited for the correct transactions. They can quickly identify where they stand on accounts receivable, for which they are penalized. Sales managers can see how their teams are doing and who needs assistance. Our management team can perform what-if analyses to understand plan changes.

Amongst the biggest benefits across the board will be time. Our Business Developers and General Managers can reduce their shadow accounting time. Our accounting team can reduce the amount of time they spend on data integration and cleanup, manually generating compensation statements, along with the amount of time they spend resolving errors and issues.


Going into this, we knew one of the problems we would face was resource availability. For a consulting company like Perficient this is a great problem to have: our Cognos ICM resources are engaged on client implementation projects. As the saying goes, the cobbler's children have no shoes.

The second challenge of implementing Cognos ICM is exceptions. For the most part, implementing an incentive compensation solution is simple and the project sponsors will express a desire for it to be simpler. Then all the exceptions will come to light that need to be handled. We found a number of exceptions after beginning the project, but because of the power of Cognos ICM we were able to handle them and reduce the manual changes the accounting team needed to make.

The other challenge we faced was the data. The data coming out of our systems supports its original purpose but is often lacking for other uses. We needed to integrate and cleanse the data, all processes the accounting team had done manually, in order to have it flow through the ICM system. As we used the Cloud version of Cognos ICM, we leveraged staging and intra-system imports to smooth the integration process.

Finding Out More

Perficient will have a booth at the IBM Vision 2015 conference, which will feature Cognos ICM heavily. I will be there and look forward to meeting with you if you plan on attending. If you’re at the event, stop by and chat for a while. You can also leave me a comment in the box below. I look forward to hearing from you.

Data & Security Top of Mind for CIOs


Of the top 10 concerns of CIOs and CTOs, as reported in Janco Associates' Annual Review, Consolidation of Legacy Data and Big Data both show up in the top 5 and have moved up substantially from prior years' surveys. Furthermore, in Forbes' Top 10 Strategic CIO Issues For 2015, "Drive Customer-Centric Innovation Throughout Your Organization" comes in at #1.

This shows that CIOs and CTOs are becoming increasingly aware that they are in the hot seat for fixing their data mess. It is also a growing justification for introducing the Chief Data Officer role, often reporting directly to the CEO. The steady increase in concern also points to the urgency of becoming a data-driven organization in order to effectively support business innovation and corporate objectives tied directly to the bottom line.

If you look at the recent Security breaches at Sony, and elsewhere, it is clear that data and security are intertwined issues, and big impediments to digital business. Consequently, we also see the injection of predictive analytics in this discussion. For any real transformation to take place, especially around customer-centricity, organizations first need to become data-driven and must focus on addressing data, holistically, from a Process, People and Platform standpoint.

Posted in News

Oracle BI Cloud Service (BICS): Data Sync for Automated Data Load

In a few days Oracle will release Data Sync, a utility that facilitates loading data from various on-premise sources into BI Cloud Service (BICS). Data Sync joins several other already-available means of loading data into BICS: the data load wizard for uploading data from files, the SQL Developer cart deployment feature, and RESTful APIs. What is special about Data Sync is that it makes data loading more manageable from a capability and scalability perspective (full versus incremental loads of many tables), and adds scheduling, notification, and automation as well. This brings data loading much closer to conventional ETL methods. If you have ever worked with DAC to run execution plans of Informatica mappings, you will find that Data Sync follows a similar methodology of defining connections, tasks, and jobs. However, instead of referencing Informatica mappings, as is the case with DAC, Data Sync itself handles the mapping of source columns to target columns. And since it supports SQL queries, your opportunities for data transformations are endless. In this blog I present the key features of Data Sync.

The whole Data Sync software sits in one place; there are no server/client components to it. You simply extract the zipped file you download from Oracle onto a machine on your network that can connect to the data sources you wish to pull from. Once you unzip the file, you configure the location of your Java home, and that's it! You are ready to launch Data Sync and start moving data.

There are 3 main steps to follow when designing data loads in Data Sync. Below the menu there are 3 corresponding tabs to configure: Connections, Projects, and Jobs.

Oracle BI Cloud Service: How to migrate from Test to Production?

With an Oracle Business Intelligence Cloud Service (BICS) subscription, you get access to two instances: Test and Production. The Test instance can be the initial playground where upfront development work is carried out before the developed components are pushed to the Production instance. The development work may entail creating tables in the Oracle Cloud Schema Database, loading data, and creating data models, reports, and dashboards. You may very well find yourself with a fully functional system on the Test instance, wondering how to migrate everything to Production. This blog elaborates on how to achieve that.


Essentials for Transforming into an Information-Driven Enterprise

I recently got published in the Special Big Data Edition of CIO Story (see page 20), where I talked about the six essentials for transforming into an information-driven organization.

Information is a hot commodity. Research suggests that in the next two to three years, businesses will begin to apply monetary value to their information assets by trading or selling them. This notion has come to be referred to as "Infonomics."

The principles of Infonomics are based on the premise that information has both potential and realized economic value, which can be quantified, and that it should be managed as an asset. The benefits of doing so include improving the collection and use of company information, determining how much to spend on business or IT initiatives, and improving relationships with customers, employees, and partners by sharing better information with them. More and more organizations realize that the trick to experiencing these benefits is to apply the organization's existing experience in managing other assets to managing information assets. But to get to that point, executive leadership (business and IT) needs to recognize the barriers to becoming an information-driven enterprise while focusing on certain fundamental strategy essentials.

If organizations are serious about improving the value and speed of information, they must consider the following six imperatives. Doing so will drive their organization’s ability to become an info-centric enterprise:


It’s All About “IoM”

All of this focus on the Internet of Things (IoT) is really about the “Internet of Me” (IoM). From social media sites to smartphone apps and GPS systems, loads of data are being generated today about individuals – their interests, their travels, their behavioral patterns, their purchases, and so on. No one in this digital economy can afford to ignore the demands of the “me” generation. It is no longer good enough to tailor marketing based on customer demographics alone. All interactions now need to be customized to your customer’s specific situation and emotions.

With all of this digital interconnectedness, one thing that is very clear is that customer loyalty is at risk. Comparison shopping is as easy as a few mouse clicks, and previously loyal customers can quickly discover new products, new services and new vendors, and learn what other buyers like and dislike — all without leaving their laptops and other mobile devices.

Research shows that more than 50% of consumer interactions are now occurring in this multi-event, multi-channel environment. But, 65% of consumers get frustrated by companies that do not provide a consistent experience through these various media. Those firms that put a priority on the consumer experience and can provide consistency regardless of source have been shown to generate 60% in additional profits versus their less enlightened competitors.

The bottom-line is that a brand is no longer simply what we tell the consumer it is. “It’s increasingly what consumers tell each other it is.” So, how do you ensure brand competitiveness in such a volatile environment?

Well, this is where competitive organizations must recognize and strategically embrace the emerging nexus of social, mobile, cloud and information, where Big Data and advanced analytics serve as revolutionary ways of advancing the digital ecosystem. We must therefore look for big data opportunities across the nexus shifts, and in turn craft a vision that takes into account more of an all-encompassing “personalization” perspective.

It is imperative that we leverage these newer data sources and types to personalize the digital experience for each customer: by considering the environmental factors and circumstances that surround an individual use case (contextual personalization), by drawing on a customer's previous interactions to provide an evolving experience that spans those interactions (behavioral personalization), and by aligning a customer's preferences with those of a pre-defined target persona throughout the journey (persona- and journey-based personalization).

Therefore, it is pertinent that customer-centric organizations be able to create a continuous, seamless virtuous cycle of targeting the right customer with the right offer at the right time by looking at avenues to enhance the existing 360-degree view of the customer. Initiatives focused on such a view have gone a long way toward providing those benefits by synthesizing customer profiles, sales, and other structured data from multiple sources across the enterprise. But today, there is more opportunity for growth when you enhance that view with information from more sources, both within and beyond the enterprise. Information in email messages, unstructured documents, web logs, machine data, and social media sentiment – previously beyond reach – is now extending this view.

Organizations that make full use of these data sources can deliver better insights and a sharper competitive edge by making the customer's experience more personalized, thereby encouraging loyalty and accelerating sales. An enhanced 360-degree view of the customer is thus a holistic approach that takes into account all available and meaningful information about the customer to drive better engagement, more revenue, and long-term loyalty. It combines data integration, data exploration, data governance, data access, and analytics in a cohesive solution that harnesses the volume, velocity, and variety of Big Data. To establish such a view of the customer, you must be able to:
• Eliminate duplicates and rationalize conflicting information through matching, linking and semantic reconciliation of master data to create and maintain a golden record
• Integrate high-quality data across multiple enterprise systems
• Manage new data types and navigate quickly through massive amounts of both structured and unstructured information from within and beyond the enterprise to find the most pertinent information
• Create a single, up-to-date view of customers and other key entities that can be used throughout the organization, leveraging Hadoop systems so that information of all types, in any volume and at any velocity, can be incorporated into the single view
• Assess streaming data sources to analyze perishable data quickly and to select valuable data and insights to be stored for further processing
• Federate search, discovery and navigation securely across a wide range of applications, data sources and formats
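As a rough illustration of the matching-and-linking step in the first bullet, here is a minimal Python sketch that clusters duplicate customer records on a normalized key and merges them into a golden record. Real MDM platforms use far more sophisticated probabilistic matching; the records, fields, and matching rule here are invented for illustration.

```python
# Toy golden-record merge: cluster on a normalized (name, email) key,
# then fill each field from the most recently updated record first.
from collections import defaultdict

records = [
    {"name": "ACME Corp.", "email": "ap@acme.com", "phone": None,       "updated": 2},
    {"name": "Acme Corp",  "email": "AP@acme.com", "phone": "555-0100", "updated": 1},
]

def match_key(rec):
    # Stand-in for semantic reconciliation: lowercase, keep alphanumerics only.
    name = "".join(ch for ch in rec["name"].lower() if ch.isalnum())
    return (name, rec["email"].lower())

clusters = defaultdict(list)
for rec in records:
    clusters[match_key(rec)].append(rec)

golden = []
for recs in clusters.values():
    recs.sort(key=lambda r: r["updated"], reverse=True)  # newest first
    merged = {}
    for rec in recs:  # fill gaps in the newest record from older duplicates
        for field, value in rec.items():
            if merged.get(field) is None and value is not None:
                merged[field] = value
    golden.append(merged)
# The two duplicates collapse into one record; the missing phone number
# is filled in from the older duplicate.
```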

New analytic opportunities are driven from this centralized, data lake architecture, where Hadoop is increasingly being leveraged as an enabler. What is critical here is to make the analytical process as specific as it can be to each customer’s digital journey by leveraging capabilities such as advanced customer segmentation, predictive and prescriptive analytics to enable cross-sell and up-sell, along with next best offer generation, thereby helping you create and evaluate the consistency of that experience across relevant products and channels.

In essence, digital transformation needs big data analytics technology, at-rest and in-motion, in order to enable a deeper level of analysis across various touch points: Mobile, Social, Web, Multi-channel. And, in order to make that happen, an effective analytical process tied to distinct digital data architectural capabilities needs to be created and implemented.

Posted in Big Data

How to avoid Big Data pitfalls…

If history is any indication, companies will encounter false starts in Big Data initiatives, just as we did in the early data warehouse days. I see similar confusion around tools and types of solutions: the variety and volume of new tools, and of technology companies offering solutions, are enormous. For starters, Big Data is tech-heavy and geeky; when you see the green screens and Unix/Linux prompts, you wonder whatever happened to the GUI. That is part of the reason why getting business value out of it is difficult.

But not to worry: technological advances and lessons from history will keep us straight with options. Before we talk about those, let's look at what it takes to implement a decent Big Data solution. Assume the dream team, with all the right skills at the right price, is in place. Simply proving whether the data can produce the business value you are looking for will cost at least $715K, and that does not include hardware and software. You get the picture of where this is going: an expensive experiment, no matter how bare-bones you keep it.

So how do you minimize risk and establish the business value for real, rather than relying on hypothetical assumptions, especially when the effort is going to cost close to a million dollars? The good news is we have options. Depending on your organization's maturity in handling Big Data analytics, the following are worth considering:

  • Leverage the cloud
  • Leverage Partners
  • Conduct a POC to identify the value of the Data in question rather than going full steam ahead

Leveraging the cloud relieves IT infrastructure delays and provides a way to compare different solutions. Partners bring not only technology solutions but also industry experts who have done this at other clients. Finally, a POC will validate the assumptions, or at least level-set the expectations.


Reasons for chronic Data Quality issues…

Many companies have invested millions in building successful BI/EDW platforms and are investing in advanced analytics for the future. But the mystery of data quality remains. Though glaring DQ issues might be contained through constant backend data corrections or through exception handling, many organizations still face the challenge of poor data quality.


Source: Information Week

Data quality does not get addressed in many organizations for several reasons. Typically you find:

  • The IT organization manually corrects the data issues over and over
  • Business takes the report and adds/modifies the data for further use
  • Reports are just to verify basic information, real data resides in someone’s spreadsheet





So the problem gets buried in various facets of the organization. Everybody knows about it, but no one will step up to own it or sponsor a permanent fix. And the more efficient IT is at patching data, the harder it is to build a business case for DQ tools or DQ projects.

Having a Data Governance organization becomes critical in bringing business and IT together. This is the forum where the two can work to solve DQ issues and define ownership and accountability. A day of cleanup each month, for every person who uses the data, adds up quickly in hours, not to mention the data discrepancies those manual changes introduce.
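That "day a month" cost compounds faster than it sounds; a quick sketch with hypothetical numbers (100 data users, one 8-hour cleanup day per month):

```python
# Hypothetical figures: annual hours lost to manual data cleanup.
users = 100          # people who touch the data
hours_per_day = 8    # one working day of cleanup
months = 12
annual_hours = users * hours_per_day * months
print(annual_hours)  # prints 9600
```

At 9,600 hours a year, that hidden cleanup effort is the equivalent of several full-time employees, which is often enough to fund a DQ tool or project outright.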

Mature organizations understand DQ issues and implement DQ as part of overall development and operations. Unchecked DQ is an expensive affair: a one-time cleanup will slowly decay until you are right back where you started. Investing in DQ metrics, data ownership, and other quality-related policies, enabled by appropriate tools, is the right way to solve data quality issues. DQ does not mean perfect data, but data good enough to support analysis for sound decision-making.





How to Customize OBIEE 11g Error Messages


Whenever OBIEE encounters a run-time error, instead of getting a report you typically see an error message like this on screen:


Expand “Error Details” and you are provided with several lines of query information.


For an administrator, the more information the better so one can figure out what went wrong without having to search through sessions log files to identify the problem. However, this much information may be a lot more than what a typical report user would like to see when a failure happens. In addition, the default error messages reported on the screen are more tailored for IT, someone who reads “ODBC” and deduces that the problem is related to the data source. Is it possible to make these error messages more user-friendly for business users? The answer is Yes.

I have split the default error message into 3 sections as denoted in the image above. Here is how to customize each section.

  •  Section C: I am starting with Section C since it is the easiest to control. Keep in mind that this section gets displayed only when the Error Details are expanded. In order to prevent users from seeing the logical queries in the error message, you can deny specific users/roles the "See SQL issued in errors" privilege. This setting is accessible from the "Admin: General" section by navigating to Administration in OBIEE Analytics and then "Manage Privileges".


Users denied this privilege see a much shorter message if they expand “Error Details”. Users who are granted this privilege, such as report developers or administrators, can still see the full message with the query. Here is what it looks like when the privilege is denied.



  •  Section A: Customizing OBIEE error messages follows the same approach as customizing any other OBIEE messages. All default messages are located under the following directory:


For example: E:\OBIEE\Oracle_BI1\bifoundation\web\msgdb

Generally when customizing OBIEE messages, you will need to find the file under the above out-of-the-box directory that includes the messages you are customizing. Once you have located the file, copy it into a custom folder and customize the message in the copied file. The reason why the custom file needs to go under a custom folder as opposed to the out-of-the-box msgdb folder is to avoid having any customization overridden when OBIEE patches/updates are applied.

Similarly for the error messages I referenced earlier, create a custom directory called customMessages under analyticsRes to store the custom message files. The folder structure under customMessages should resemble the folder structure of the corresponding files under the out of the box messages folder.

Let’s go back to my earlier error screenshot and in particular Section A which contains the message: “View Display Error”. This default message is defined in:


Copy this file over to the custom directory:


Modify the copied file by searching for the following line and replacing the text between the HTML tags with your custom message. You may also leave it empty if you choose not to display this message at all.

<WebMessage name="kmsgEVCViewDisplayErrorTitle"><HTML>View Display Error</HTML></WebMessage>


  •  Section B: Section B can be customized in a similar way to how Section A is customized. The Default message “ODBC driver returned an error” is located in the following file:


Copy this file over to the custom directory:


Modify the copied file by searching for the following line and replacing the text between the TEXT tags with your custom message.

<WebMessage name="kmsgOdbcAccessOdbcException"><TEXT>Odbc driver returned an error (<sawm:param insert="1"/>).</TEXT></WebMessage>

Once done with all the above changes, restart BI Presentation Services for the custom files to be picked up by the server. Your custom error messages should now show up instead of the default messages.
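The copy-then-edit pattern above can be scripted. The sketch below is illustrative only: the message file name and directory layout are placeholders, so substitute the actual file you located under the out-of-the-box msgdb directory; the demo runs against a temporary stand-in rather than a live OBIEE install.

```python
# Copy a stock OBIEE message file into the custom folder, then swap the
# message text. File/directory names are placeholders for the real msgdb file.
import shutil
import tempfile
from pathlib import Path

def customize_message(stock_file: Path, custom_file: Path, old: str, new: str) -> None:
    custom_file.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy(stock_file, custom_file)   # never edit the stock file in place
    text = custom_file.read_text(encoding="utf-8")
    custom_file.write_text(text.replace(old, new), encoding="utf-8")

# Demo with a temporary stand-in for the real msgdb file:
tmp = Path(tempfile.mkdtemp())
stock = tmp / "msgdb" / "viewmessages.xml"   # hypothetical file name
stock.parent.mkdir(parents=True)
stock.write_text('<WebMessage name="kmsgEVCViewDisplayErrorTitle">'
                 '<HTML>View Display Error</HTML></WebMessage>', encoding="utf-8")

custom = tmp / "customMessages" / "viewmessages.xml"
customize_message(stock, custom,
                  old="<HTML>View Display Error</HTML>",
                  new="<HTML>Something went wrong. Please contact the BI team.</HTML>")
print("Something went wrong" in custom.read_text(encoding="utf-8"))  # prints True
```

Keeping the edit out of the stock msgdb folder mirrors the advice above: patches and updates can overwrite the out-of-the-box files, but the customMessages copy survives.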

Analytical Talent Gap

As more companies embark on digital transformation leveraging Big Data, key concerns and challenges get amplified, especially in the near term, before the supply of technology and talent adjusts to demand. Looking at the earlier post, Big Data Challenges, the top 3 concerns were:

  1. Identifying the Business value/Monetizing the Big Data
  2. Setting up the Governance to manage Big Data
  3. Availability of skills

    Source: McKinsey

Big Data Skills can be broadly classified into 4 categories:

  • Business / Industry Knowledge
  • Analytical Expertise
  • Big Data Architecture
  • Big Data Tools (Infrastructure management, Development)

Value creation, or monetizing Big Data (see Architecture needed to monetize APIs), depends on business and analytical talent; note the talent gap, specifically in the analytical area. Educating your own people and augmenting the shortage through partner companies is critical for this niche, must-have technology. As tools evolve, keeping up with the architecture also becomes important, since past tool and platform shortcomings are being addressed with new complexities.

While the business continues its search for Big Data gold, system integrators and product vendors are perfecting methods to shrink time to market through best practices and modern architectures. How much of the gap we can close depends on multiple factors across companies and their partners.

See also our webinar on: Creating a Next-Generation Big Data Architecture