Perficient Enterprise Information Solutions Blog

Posts Tagged ‘business intelligence’

Implementing Cognos ICM at Perficient

Image: “The Bait” by nist6dh on Flickr, Creative Commons Attribution-Share Alike 2.0 Generic License

Defining the Problem

For any growing organization with a good-sized sales team compensated through incentives for deals and revenue, calculating payments becomes a bigger and bigger challenge. Like many organizations, Perficient handled this problem with Excel spreadsheets, long hours, and Excedrin. Our sales team is close to a hundred strong and growing 10% each year. To help reward activities aligned with our business goals and spur sales that move the company in its strategic direction, the Perficient sales plans are becoming more granular and targeted. Our propensity to acquire new companies jolts the sales team's size and introduces new plans, products, customers, and territories. With Excel, it is almost impossible, without a Herculean effort, to identify whether prior plan changes had the desired effect or what proposed changes might cost. With literally hundreds of spreadsheets being produced each month, the opportunity to introduce errors is significant. Consequently, executives, general managers, sales directors, business developers, and accountants spend hundreds if not thousands of hours each month validating, checking, and correcting problems. The risks involved in using Excel are significant: an increased likelihood of rising costs for no benefit, and a limited ability to model alternative compensation scenarios.

Choosing Cognos Incentive Compensation Management (ICM)

While there are many tools on the market, the choice to use Cognos ICM was relatively simple. Once we had outlined the benefits and capabilities of the tool, our executive team was on board.

Cognos ICM is a proven tool, having been around for a number of years; it was formerly known as Varicent, before Varicent's acquisition by IBM. The features of the tool that really make sense for Perficient are numerous. The calculation engine is fast and flexible, allowing any type of complexity and exception to be handled with ease, and allowing reports and commission statements to open virtually instantaneously. The data handling and integration capabilities are excellent, allowing the use of virtually any type of data from any system; in our case, we are consuming data from our ERP, CRM, and HR systems, along with many other files and spreadsheets. Cognos ICM's hierarchy management capabilities allow us to manage sales team, reporting, and approval hierarchies with ease. User and payee management with permissions and security comes bundled with the tool and allows integration with external authentication tools. From a process point of view, workflow and scheduling are built in and can be leveraged to simplify the administration of the incentive compensation calculation and payment processes. Finally, the audit module tracks everything that goes on in the system, from user activity, to process and calculation timing, to errors that occur.

Perficient is one of a few elite IBM Premier Business Partners. As the Business Analytics business unit within Perficient, we have a history of implementing IBM's Business Analytics tools not only for our clients but also for ourselves. We have implemented Cognos TM1 as a time and expense management system from which we can generate invoices, feed payroll, and pay expenses directly. We use Cognos Business Intelligence (BI) to generate utilization and bonus tracking reports for our consultants. We feel it is essential that we not only implement solutions for our clients but also eat our own dog food, if you will.

Implementation and Timeline

Once we made the decision to implement and the budget had been approved, we decided on a waterfall-based lifecycle to drive the project. The reason for this selection has to do with our implementation team's availability. As a consulting organization, the need to pull consultants into client engagements is absolute. We are also geographically dispersed, so co-location with the business users was not an option. Having discrete phases that could be handed from resource to resource was a must. As is typical with most waterfall projects, we implemented Cognos ICM in four major phases: requirements, design, development, and testing.

During the requirements phase, we broke down what we did today and layered that with what we wanted to do tomorrow. The output of the requirements phase was the Requirements Document with narrative and matrix style requirements.

Our design approach was to use the Cognos ICM Attributes Approach best practices developed by IBM. Rather than blindly following IBM’s prescribed methodology, we adopted the components that fit and discarded those that did not. The output of our design phase was a detailed design document that was ready for direct implementation in the tool.

The development phase had three distinct streams. The first was data integration, where we sourced, prepared, and loaded the incoming data; our goal was to load as much data as possible without forcing manual intervention. The second was calculation development, where we built the calculations for hierarchies, quota, crediting, attainment, and payout; this is where the ICM logic resides and feeds into the compensation statements and reports. The last was reporting, which included the development of the commission statements, analytical reports, and the file sent to payroll.
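
To make the calculation stream concrete, here is a minimal sketch of the kind of attainment and tiered payout logic that lives in that layer. The credited revenue, quota, and tier rates below are hypothetical illustrations rather than our actual plan parameters, and in practice this logic is configured inside Cognos ICM rather than hand-coded:

```python
# Hypothetical attainment and tiered payout calculation (illustrative only).

def attainment(credited_revenue: float, quota: float) -> float:
    """Quota attainment as a fraction (1.0 = 100%)."""
    return credited_revenue / quota if quota else 0.0


def payout(credited_revenue: float, quota: float,
           base_rate: float = 0.05, accelerator: float = 0.08) -> float:
    """Pay base_rate on revenue up to quota, accelerator on revenue above quota."""
    in_quota = min(credited_revenue, quota)
    over_quota = max(credited_revenue - quota, 0.0)
    return in_quota * base_rate + over_quota * accelerator


credits = 1_250_000   # credited revenue for the period (assumed)
quota = 1_000_000     # assigned quota (assumed)
print(f"Attainment: {attainment(credits, quota):.0%}")   # 125%
print(f"Payout:     ${payout(credits, quota):,.0f}")     # $70,000
```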

The testing phase had two components, one of system testing and one of user acceptance and parallel testing. Today we are in the midst of the parallel testing, ensuring that we mirror the current statements or know exactly why we have differences.

Already, we are defining enhancements and future uses of the system. We need new reports to support detailed review of compensation statements and to analyze the success of programs. We have new plans for different types of business developers and for others in the organization with incentive compensation. We have new data sources to be integrated so that prospective and booked projects can be reflected in the report set.

Our goal, at the outset, was to get to parallel testing in three months, assuming our resources were available full-time. Starting at the end of January and being in parallel test today got us close. We fell short because client engagements took two of our resources, one part-time and one full-time. Targeting 90 days for an initial implementation is quite feasible.


The most important people on our team were the accountants and sales plan designers. They are the ones who know the ins and outs of the current plan and all the exceptions that apply. Going forward, they are the people who will continue to administer the plans and the system. We also identified a secondary group of VIPs to be involved: business developers, managers, and executives, as they are on the sharp end of the ICM system.

Our implementation team consisted of three to four resources: a solution architect who drove the design and calculation development, a developer responsible for data integration and report development, a business analyst for requirements gathering and system testing, and a project manager who also moonlighted as a business analyst.


We expect to receive many benefits from implementing Cognos ICM. We expect the accuracy and consistency of our compensation statements to improve. Accenture, Deloitte, and Gartner estimate that variable compensation overpayments range from 2% to 8%; a company with $30M in incentive compensation would therefore overpay between $600,000 and $2,400,000 every year. During the development process we identified issues with the current commission statements that needed correction.
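
The arithmetic behind that range is simple; the sketch below just restates the figures cited above (a $30M pool and a 2% to 8% overpayment rate):

```python
# Overpayment range: 2% to 8% of a $30M incentive compensation pool.
incentive_pool = 30_000_000
low_rate, high_rate = 0.02, 0.08

low_overpay = incentive_pool * low_rate     # $600,000
high_overpay = incentive_pool * high_rate   # $2,400,000
print(f"Estimated annual overpayment: ${low_overpay:,.0f} to ${high_overpay:,.0f}")
```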

Using Cognos ICM will improve incentive compensation visibility and transparency. Our business developers can review their commission statements throughout the month to ensure they are credited for the correct transactions. They can quickly identify where they stand in terms of accounts receivable, for which they are penalized. The sales managers can see how their teams are doing and who needs assistance. Our management team can perform what-if analyses to understand plan changes.

Among the biggest benefits across the board will be time. Our business developers and general managers can reduce their shadow accounting time. Our accounting team can reduce the time they spend on data integration and cleanup, on manually generating compensation statements, and on resolving errors and issues.


Going into this, we knew one of the problems we would face was resource availability. For a consulting company like Perficient this is a great problem to have: our Cognos ICM resources are engaged on client implementation projects. As the saying goes, the cobbler's children have no shoes.

The second challenge of implementing Cognos ICM is exceptions. For the most part, implementing an incentive compensation solution is simple, and the project sponsors will express a desire for it to be simpler still. Then all the exceptions that need to be handled come to light. We found a number of exceptions after beginning the project, but because of the power of Cognos ICM we were able to handle them and reduce the manual changes the accounting team needed to make.

The other challenge we faced was the data. The data coming out of our systems supports its original purpose but is often lacking for other uses. We needed to integrate and cleanse the data, all processes the accounting team had done manually, in order to have it flow through the ICM system. As we used the Cloud version of Cognos ICM, we leveraged staging and intra-system imports to smooth the integration process.

Finding Out More

Perficient will have a booth at the IBM Vision 2015 conference, which will feature Cognos ICM heavily. I will be there and look forward to meeting with you if you plan on attending. If you’re at the event, stop by and chat for a while. You can also leave me a comment in the box below. I look forward to hearing from you.

Reasons for chronic Data Quality issues…

Many companies have invested millions in building a successful BI/EDW and are investing in advanced analytics for the future. But the mystery of data quality remains. Though glaring DQ issues might be contained through constant backend data corrections or exception handling, many organizations still face the challenge of poor data quality.


Source: Information Week

Data Quality does not get addressed in many organizations for several reasons. Typically you find:

  • The IT organization manually corrects the data issues over and over
  • The business takes the report and adds or modifies the data for further use
  • Reports are used just to verify basic information; the real data resides in someone’s spreadsheet





So the problem gets buried in various facets of the organization. Everybody knows the problem, but no one will step up to own it or sponsor a permanent fix. The more efficient IT is, the harder it is to build a business case for DQ tools or for initiating DQ projects.

Having a Data Governance organization becomes very critical in bringing the business and IT together. This is the forum where business and IT can work together to solve the DQ issues and define ownership and accountability. A day a month of cleanup by each person who uses the data adds up quickly in hours, not to mention the data discrepancies introduced by those manual changes.
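
As a rough illustration of how that effort accumulates, consider the back-of-the-envelope calculation below; the headcount is an assumption for the sake of the example, not a figure from any survey:

```python
# Illustrative only: assumed number of data users, each spending "a day a month" on cleanup.
data_users = 50          # people who touch the data each month (assumed)
hours_per_month = 8      # roughly one working day of cleanup per person
annual_hours = data_users * hours_per_month * 12
print(f"{annual_hours:,} hours per year spent on manual data cleanup")   # 4,800 hours
```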

Mature organizations understand the DQ issues and implement DQ as part of overall development and operations. It is an expensive affair if DQ goes unchecked; a one-time cleanup of data will slowly decay right back to where we started. Investing in DQ metrics, data ownership, and other quality-related policies, enabled by appropriate tools, is the right way to solve Data Quality issues. DQ does not mean perfect data, but data good enough to support the analysis needed for sound decision-making.





Bootstrapping Data Governance – Part II

As I mentioned in my earlier post, building the vision for Data Governance happens through multiple meetings, interactions, and discussions at various levels. Depending on the company culture and the type of industry, progress may end up being faster or slower. Leadership in building awareness, and linking DG to the business issues it can solve, are key strategies for gaining support to set up the DG organization.



Here are some of the key activities that need to happen in building the vision, even if you are getting external help:

  • Identify key initiatives and link them to the benefits of DG effectiveness
  • Use strategic planning meetings and road map opportunities to include a DG track
  • Name someone capable as Data Steward, even if it is a part-time role
  • Gather and highlight Data Quality issues and their business impacts


Once you have established support from the executives, start building the business case, using the major initiatives to pay for it. Though establishing DG seems trivial, it has several levels of complexity. Part of the preparation is knowing the data issues, remedies, and approaches clearly before launching the DG program. Key SMEs and Stewards have to work on collecting this information even if external help is brought in; ultimately, insiders know what works within the organization. Key preparation activities include:

  • Identify and secure the support of key SMEs and stakeholders
  • Document the current business processes and pitfalls (use business SMEs and leaders)
  • Estimate how much SME time and involvement is needed for DG and for socializing the idea
  • Know your key supporters and potential sources of resistance
  • Keep as much information ready as possible before engaging external help
  • Create a plan to keep DG operating independently, with the business in the driver’s seat
  • Have IT act as a servant leader, taking on the grunt work and helping DG make the right decisions
  • Build and gather the material for the business case, using the key initiatives and imperatives that can fund the DG effort
  • Start with specific goals and a focused agenda, and expand once success and participation rates meet expectations
  • Remember that, ultimately, execution is the key; we will discuss that in the following posts.

Support & Sponsorship

Once sponsorship and support are secured, swift execution and follow-through are a must. Bringing in external help at this point will be very beneficial. Doing the groundwork early on and preparing the needed artifacts and any relevant information on DG and quality in general will greatly cut down the effort for current-state and future-state development and the road map, and will help in applying best practices.

In summary, early groundwork is crucial for developing the road map, with key initiatives targeted at short-term wins and quick ROI. Seasoned managers know the value of groundwork and do not waste time while planning is in progress. The technology (tools, platform) business case should also be built at this point, along with establishing the DG organization. Folding the tool expenses into a key initiative is always a winning strategy.

Bootstrapping Data Governance – Part I

A lot has been said and written about Data Governance (DG) and the importance of having it. However, creating an effective DG organization is still a mystery for many companies. Based on our experience, the majority of companies in the early stages of DG fall into one of these categories:


  1. Had too many false starts
  2. Made little impact, and the DG effort lost much of its support
  3. Have no clue and have not even attempted it

Why is it so difficult to set up a reasonably functioning Data Governance?

The typical scenario is that IT leads the Data Governance initiative, either as part of an IT overhaul or as part of a new initiative like rebuilding the Data Warehouse or launching a Master Data Management program. Too often, companies tend to establish DG with limited vision, narrow scope, and minimal business involvement. The problem areas and possible pitfalls companies run into during the DG establishment phase can be broadly classified under three major areas: Vision, Preparation, and Sponsorship & Support.


Vision

Getting the executive buy-in and setting the Data Governance vision is a process of evolution. Typically this takes 3 to 12 months of pre-work through casual meetings and by including the DG topic in strategy meeting agendas for discussion. Awareness through common education, such as attending industry seminars and conferences, is another dimension of setting the vision. If the DG concept has been discussed and socialized for some time, then leveraging that common understanding to launch the program is the next step.


Preparation

Being prepared is the best way to avoid false starts. The opportunity to launch DG often arrives when you are least prepared. It is not easy to devote time to incubating DG when you have burning issues around you, but those burning issues, especially catastrophic events, may escalate the urgency for DG and gain unprecedented executive support, or even a mandate from the top. You are definitely stuck if that happens and you are not prepared.

Sponsorship & Support

Once you get the go-ahead, approaching DG without a holistic vision and a complete picture will water down the momentum, and slowly the support will start to disappear. Keeping the executive team committed to DG means producing meaningful results and engaging the business from the planning of DG through its execution.


Establishing DG must be followed by the organization's ability to successfully execute the DG mandates. Again, putting together a solid approach spanning people, process, and technology will go a long way toward guaranteeing the success of DG.

In the next segment, let's look at nimble and effective strategies to keep DG a successful organization from establishment through execution.

It’s all about the data, the data…

When Apple jumped into payment processing with Apple Pay, I thought this would be a great leg up for Apple. But who will be the winner and who will be the loser? Granted, the payment switches from the credit card to Apple Pay, which indirectly pays for the purchase, but who cares as long as we can charge the card we want, right? And what will Apple Pay's market share be? Before we answer all those questions, let's take a look at how we pay for services and goods today.

Cash may still be king, and it may very well be the last to die, but what everyone is after is the middle-class market, which is fast adopting credit cards and now smartphone-based services; dwindling check usage tells you so. With so many ways of paying using credit cards, store cards, pre-paid cards, PayPal, and Internet services (bill pay, bitcoin?), the convenience I see is carrying fewer cards, or none at all. I seldom carry my store cards, especially when the store can look them up.

Apple Pay will be convenient, and it may help get rid of cards altogether if it is accepted by a majority of merchants. Discover had to go through hurdles before it was accepted, so I don't see myself getting rid of my cards in the near future, although cards may disappear before cash does.


When I read the news that many major merchants had signed up with Apple, I thought: what happens to the data? Who will own the granular consumer spend information? Before I could finish this blog post, I heard the news that two major retailers had pulled out of Apple Pay. Ha, they realized it: the data is more valuable than the technology or the convenience to customers. Imagine the data movement and explosion even if Apple shares the detailed information with each of the parties involved.

Apple Pay is expected to have around 34 million customers, and with an average of 200 transactions per customer the data volume is going to explode. You can do the math if this information has to be shared with two to five parties. No wonder some retailers are wary of signing up. I won't be surprised if each of the financial institutions and retailers comes up with its own payment app.
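
Doing that math with the figures above, and using the two-to-five sharing parties mentioned here as the assumption, gives a sense of the volume:

```python
# Rough sizing of the Apple Pay transaction data, using the figures mentioned above.
customers = 34_000_000
transactions_per_customer = 200
sharing_parties = (2, 5)                  # parties the detail might be shared with (assumed)

transactions = customers * transactions_per_customer       # 6.8 billion transactions
low, high = (transactions * p for p in sharing_parties)    # 13.6 to 34 billion records
print(f"{transactions:,} transactions; {low:,} to {high:,} records once shared")
```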

In the end, having the customer spend data is what matters most for business operations, customer excellence, and more. Having the right Information Governance to manage this information asset is not only strategic but also a matter of survival for the enterprise.

Internet of Things and Enterprise Data Management…


It is amazing to see the terms we come up with to explain a new technology or trend. Consulting thought leadership coins words to group a set of technologies or trends and make it easier for people to have context. However, the success and adoption of the technology or trend defines the term's reputation. For example, the Data Warehouse was the in-thing, only to be shunned when it did not deliver on its promises. The industry quickly realized the mistake, called it Business Intelligence, and hid the Data Warehouse behind BI until things settled. Now no one questions the value of the DW or EDW, or perceives it as a risky project.

Some terms are really great and are here to stay for a long time. Some wither away; some change and take on a different meaning. One term that got my attention is IoT, the Internet of Things. What is this? It sounds like ‘those things,’ but what is this trend or technology, really?

Wikipedia gives you this definition:

“The Internet of Things (IoT) is the interconnection of uniquely identifiable embedded computing devices within the existing Internet infrastructure. Typically, IoT is expected to offer advanced connectivity of devices, systems, and services that goes beyond machine-to-machine communications (M2M) and covers a variety of protocols, domains, and applications.[1] The interconnection of these embedded devices (including smart objects), is expected to usher in automation in nearly all fields, while also enabling advanced applications like a Smart Grid.[2]”


That is a lot of stuff; it looks like pretty much everything we do with the Internet. I am sure this term will change and take shape. But let's look at how it relates to Enterprise Data Management. From an enterprise data perspective, let us consider a subset of IoT: machine-generated internet data and the consolidation of data from systems operating in the cloud. What we end up with is a whole lot of data that is new and that sits outside the traditional enterprise data framework. The impact and exposure are real, and much of the IoT data may live outside the firewall.

In essence, Enterprise Data Management needs to deal with the added dimensions of architecture, technology, and governance for IoT. Considering IoT data as out of scope for Enterprise Data Management will lead to more issues than it solves, especially if you generate or depend on IoT data.

SSRS – Have you used it yet?

While there are several BI technologies, with more entering the fray every day, SSRS has remained a key player in this area for quite some time. One of the biggest advantages of SSRS reporting is that it involves the end user and is very intuitive to use.

Let’s go back a few years, when Excel was the go-to tool for dashboarding. Every time a director or VP wanted a report, they would go to their developers to extract information from the database and help build dashboards for their meetings. The end user had to rely on the developers to extract the information and then spend several minutes, if not hours, building a dashboard. This all works fine when the meeting is scheduled for a specific day of the week or month. We all know that is a myth; most meetings happen impromptu. In such cases, there is not enough time to extract the data and turn it into graphs.

This is where SSRS came in as a key player. With Microsoft's strong foundation behind it, SSRS brought some of the best and most needed features:

  • Easy connection to databases
  • User-friendly interface allowing users to design reports and make changes on the fly
  • Report generation at the click of a button
  • Subscription-based delivery to send reports on a specific day and time of the month


While these features may not look groundbreaking at first glance, they bring a lot of value. They save a lot of time, and in business that time translates directly into revenue. Developers can design dashboards once and deploy them to a server. The VP or director can press a button to get the reports on their machine. Furthermore, reports can be exported in several formats. What I really like about the reports, though, is the look and feel. Microsoft retained the aesthetics of Excel reports; by that I mean a pie chart in Excel and in SSRS look exactly the same. This matters to the audience, since most people do not like to see the look of their reports change over time. Another great feature is that SSRS has strong security options, and one can implement role-based reporting.
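
As one example of that flexibility, a deployed report can be rendered in another format straight from the report server's URL access interface. The sketch below is a minimal, hypothetical example: the server name, report path, parameter, and credentials are placeholders, and it assumes the common Windows-authentication setup (using the requests and requests_ntlm packages):

```python
# Hypothetical example: render a deployed SSRS report as a PDF via URL access.
import requests
from requests_ntlm import HttpNtlmAuth   # SSRS servers typically use Windows auth

# Placeholder server, report path, and report parameter.
url = (
    "http://reportserver/ReportServer?/Sales/MonthlyCommissionStatement"
    "&rs:Command=Render&rs:Format=PDF&SalesRegion=Central"
)

resp = requests.get(url, auth=HttpNtlmAuth("DOMAIN\\report_user", "password"))
resp.raise_for_status()

with open("commission_statement.pdf", "wb") as f:
    f.write(resp.content)   # rs:Format=EXCELOPENXML or CSV would export other formats
```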

In summary, SSRS is a power-packed tool, and you should reap the benefits of the great features that come with it.

For information on Microsoft's future BI roadmap and self-service BI options, check out this post over on our Microsoft blog.


Realizing Agile Data Management …

Years of work went into building the elusive single version of the truth. Despite all the attempts from IT and the business, Excel reporting and Access databases have been impossible to eliminate. Excel is the number one BI tool in the industry for good reasons: accessibility, speed, and familiarity. Almost all BI tools export data to Excel for those same reasons. The business will produce the insight it needs as soon as the data is available, manually or otherwise. It is time to come to terms with the fact that change is inevitable and that there is no such thing as perfect data, only data that is good enough for the business. As the saying goes:

‘Perfect is the enemy of Good!’

So waiting for all the business rules and perfect data before producing a report or analytics is too late for the business. Speed is of the essence: when the data is available, the business wants it, and stale data is as good as no data.


In the changing paradigm of Data Management, agile ideas and tools are in play. Waiting months, weeks, or even a day to analyze data from the Data Warehouse is a problem. Data discovery through agile BI tools, which double as lightweight ETL, offers a significant reduction in the time it takes to make data available. Data Virtualization provides real-time access to data, along with its metadata, for quicker insights. In-memory data appliances produce analytics in a fraction of the time of a traditional Data Warehouse/BI stack.

We are moving from gourmet sit-down dining to a fast-food concept for data access and analytical insights. Each has its place, usage benefits, and shortcomings, and they complement each other in terms of use and the value they bring to the business. In the following series, let's look at this new set of tools and how they help agile Data Management throughout the life cycle:

  1. Tools in play:
    1. Data Virtualization
    2. In-Memory Database (appliances)
    3. Data Life Cycle Management
    4. Data Visualization
    5. Cloud BI
    6. Big Data (Data Lake & Data Discovery)
    7. Cloud Integration (on-prem and off-prem)
    8. Information Governance (Data Quality, Metadata, Master Data)
  2. Architectural changes: traditional vs. agile
  3. Data Management Impacts
    1. Data Governance
    2. Data Security & Compliance
    3. Cloud Application Management

Virtualization – THE WHY?


The speed at which we receive information from multiple devices, and the ever-changing customer interactions that create new kinds of customer experience, generate DATA! Any company that knows how to harness that data and produce actionable information is going to make a big difference to its bottom line. So why Virtualization? The simple answer is business agility.

As we build the new information infrastructure and the tools for modern Enterprise Information Management, we have to adapt and change. Over the last 15 years, the Enterprise Data Warehouse has matured to a point where proper ETL frameworks and dimensional models are well established.

With the new ‘Internet of Things’ (IoT), a lot more data is created and consumed from external sources. Cloud applications create data that may not be readily available for analysis, and not having that data for analysis will greatly change the critical insights that come out of it.

Major Benefits of Virtualization


Additional considerations

  • Address the performance impact of Virtualization on the underlying applications, and manage the overall refresh delays appropriately
  • It is not a replacement for Data Integration (ETL), but it is a quicker way to get access to data in a controlled way
  • It may not include all the business rules, which means Data Quality issues may still remain

In conclusion, having a Virtualization tool in the Enterprise Data Management portfolio of products will add more agility to Data Management. However, use Virtualization appropriately, to solve the right kind of problem, and not as a replacement for traditional ETL.

Cloud BI use cases

Cloud BI comes in different forms and shapes, ranging from visualization alone to a full-blown EDW combined with visualization and Predictive Analytics. The truth of the matter is that every niche product vendor offers some unique feature that other product suites do not. In most cases you will need more than one BI suite to meet all the needs of the enterprise.

Decentralization definitely helps the business achieve agility and respond to market challenges quickly. By the same token, that is how companies end up with silos of information across the enterprise.

Let us look at some scenarios where a cloud BI solution is very attractive for departmental use.

Time to Market

Getting the business case built and approved for big CapEx projects is a time-consuming proposition. Wait times for hardware/software and IT involvement mean much longer delays in scheduling the project, not to mention the pushback to use the existing reports or to wait for the next release, which is allegedly around the corner forever.


Deployment Delays

Business users have an immediate need for analysis and decision-making. The typical turnaround for IT to bring in new sources of data is anywhere between 90 and 180 days. This is an absolute killer for a business that wants the data now for analysis; spreadsheets are still the top BI tool for just this reason. With Cloud BI (not just the tool), business users get not only the visualization and other product features but also data that is not otherwise available. Customer analytics with social media analysis, for example, is available as a third-party BI solution. For value-added analytics like these, there is a clear business reason to go with such solutions.


Tool Capabilities

Power users need ways to slice and dice the data, and to integrate other non-traditional sources (Excel, departmental cloud applications) to produce a combined analysis. Many cloud BI tools come with lightweight integration (mostly push integration) to make this a reality without much of an IT bottleneck.

So if we can add new capability without much delay and within the departmental budget, where is the rub?

The issue is not looking at enterprise information in a holistic way. Though speed is critical, it is equally important to engage Governance and IT to secure the information and share it appropriately so it can be integrated into the enterprise data asset.

As we move into the future of cloud-based solutions, we will be able to remove many of these bottlenecks, but we will also have to deal with the security, compliance, and risk mitigation aspects of leaving data in the cloud. Forging a strategy to meet the enterprise's various BI demands with proper Governance will yield the optimum use of resources and solution mix.