Perficient Business Intelligence Solutions Blog


Duane Schafer


“Accelerate your Insights” – Indeed!

I have to say, I was very excited today as I listened to Satya Nadella describe the capabilities of the new SQL Server 2014 data platform during the Accelerate your Insights event. My excitement wasn’t piqued by the mechanical wizardry of working with a new DB platform, nor was it driven by a need to be the first to add another version label to my resume. As the manager of a national Business Intelligence practice, my excitement was fueled by seeing Microsoft’s dedication to providing a truly ubiquitous analytics platform that addresses the rapidly changing needs of the clients I interact with on a daily basis.

If you’ve followed the BI/DW space for any length of time you’re surely familiar with the explosion of data, the need for self-service analytics, and perhaps even the power of in-memory computing models. You probably also know that the Microsoft BI platform has several new tools (e.g. PowerPivot and Power View) that run inside Excel while leveraging the latest in in-memory technology.

But… to be able to expand your analysis into the Internet of Things (IoT) with the new Azure Intelligent Systems Service and apply new advanced algorithms, all while empowering your ‘data culture’ through new hybrid architectures? That was news to me!

OK, to be fair, part of that last paragraph wasn’t announced during the keynote; it came from meetings I attended earlier this week that I’m not at liberty to discuss. But suffice it to say, I see the vision!

What is the vision? The vision is that every company should consider what its Data Dividend is.


Diagram: Microsoft Data Dividend Formula

Why am I so happy to see this vision stated the way it is? Because for years I’ve evangelized to my clients that they should think of their data as a ‘strategic asset’. And like any asset, given the proper care and feeding, you should expect a return on it! Holy cow and hallelujah, someone is singing my song!! :-)

What does this vision mean for our clients? From a technical standpoint it means the traditional DW, although still useful, is an antiquated model. It means hybrid architectures are our future. It means the modern DW may not be recognizable to those slow to adopt.

From a business standpoint it means that we are one step closer to being constrained only by our imaginations on what we can analyze and how we’ll do it. It means we are one step closer to incorporating ambient intelligence into our analytical platforms.

So, in future posts and an upcoming webinar on the modern DW, let’s imagine…

Technology Confusion

While returning from a client presentation and reflecting on the meeting conversations, I was struck by a pattern of confusion that seems to be creeping into the minds of our clients.

While discussing our approach to a strategy assessment for this new client, we were reviewing an example architecture diagram when a question was raised. One of the business sponsors commented that the ‘Operational Data Store’ referenced on the diagram seemed like an ‘archaic’ term from the past that might not be appropriate for their new platform. I explained that they might need a hybrid environment and that each technology had its place.

However, on the plane ride home I realized that I had heard a similar question just a few weeks earlier. I had been at a different client (manufacturing as opposed to software), in a different part of the country, speaking about a different type of proposal, although both would have resulted in architectural enhancements. There, a stakeholder asked whether ‘new data warehouse’ technology such as Hadoop would replace the ‘older’ data warehouse paradigm we were discussing.

On both occasions I knew that the client wasn’t challenging my ideas so much as wanting to understand my recommendations better. What I knew in my head, but had failed to initially describe to both clients, was the concept of ‘replacement’ technologies versus ‘complementary’ technologies. Honestly, it had never occurred to me that I needed to make such a distinction, since I wasn’t recommending both technologies. In each case the client introduced the newer technology into the discussion, at which point I fell victim to the assumption that both clients had a base understanding of what the different technologies were used for.

To be clear, we’re talking about the Big Data technology Hadoop and the well-known process of building a data warehouse with the Kimball or Inmon approach. The former is relatively new and getting a lot of airplay as the latest thing. The latter is well known but has had its share of underwhelming successes.

So is Hadoop the new replacement for ‘traditional’ data warehousing? For that matter, is self-service BI a replacement for traditional dashboarding and reporting? How about Twitter: is it the replacement for traditional email or text messaging?

The answer is no. All of the technologies described are complementary technologies, not replacement technologies. They offer additional capabilities with which to build more complete systems, but in some cases, certainly that of data warehousing, our clients are mistaking them for carte blanche replacement options.

Considering that clear and concise messaging is fundamental to successful client engagements, I encourage all of our consultants to consider which category their respective technologies fall into and to make sure their clients understand that positioning within their organization.

Why are BI projects so difficult to implement? part 3

You can find the previous posts on this topic here: Part 1, Part 2, Part 2.5

So how do we reduce the amount of anxiety related to BI projects (thus making them less painful)? To start, the project team needs to be keenly aware of the following:

They must consistently over-communicate that some process change will happen, and what that change is.

Why must they over-communicate? It takes time for people to assimilate new ideas, and that happens only after they actually start listening. A standard metric in television marketing states that a consumer must make three ‘mental connections’ to a TV ad before deciding if the product is relevant to them. Is your process change more interesting than that TV ad?

Ensure that everyone feels they are in this learning process together. In other words, don’t let some users put themselves on an island.

We’ve all been in a class at some point where someone started to fall behind and was too embarrassed to raise their hand yet again. Don’t let your users fall behind.

Realize that the mechanical skills needed to move through a new report or dashboard are not inherent in all users and may demand ‘more obvious’ training.

I just finished reading an interesting article about technology anxiety in an older workforce, and the study cited interface design as a leading cause of anxiety; specifically, the practice of designing an interface with a ‘layered menu’ system in which the user must remember both that ‘invisible options’ exist and the sequence of actions needed to find them. Dashboards typically employ this functionality through the right-click context menu.

I immediately thought of the introduction of the Microsoft ‘ribbon’ and wondered whether the ‘invisible option’ problem wasn’t a leading factor in that design change.

Finally, be prepared for the data anomaly effect.

I’ve written about this before, but it remains relevant. Users of a new analytical platform need to be prepared for the fact that they will encounter data anomalies. Some of these anomalies will turn out to be actual bugs, but some will not. Those that are not obvious bugs will require research. Research means delayed responses back to the business, and delayed responses mean a frustrated business user if the project team has not embraced the recommendation to over-communicate.

Over time, the number of bugs will decrease, but the number of research requests is likely to increase (as shown in the example below) especially as new users are rolled on.

Additionally, there are a series of events that can cause these numbers to fluctuate. Version releases are an obvious source for bugs, but what about a key team member leaving?

In the example below, during October of 2012, we see a spike in research requests but no spike in bugs and no additional users added. The only major event that occurred was a new member joining the team. We can infer either that the previous support person had direct lines of communication open to the users (very likely hallway discussions answered some users’ questions) or that the new team member is recording ‘discussions’ differently. Regardless, this event may be a source of frustration to the business that has no direct tie to the functionality or stability of the system.

Chart: Production Support Analysis (example)
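For readers who want to build this kind of tally themselves, here is a minimal sketch in Python (pandas). The ticket log, column names, and categories below are hypothetical, invented purely for illustration; substitute your own support data.

```python
# Minimal sketch: count production support tickets per month and category.
# A month where "research" spikes while "bug" stays flat is worth
# correlating with team or release events, as in the example above.
import pandas as pd

tickets = pd.DataFrame({
    "opened": pd.to_datetime([
        "2012-08-03", "2012-08-21", "2012-09-10",
        "2012-10-02", "2012-10-09", "2012-10-15", "2012-10-24",
    ]),
    "category": ["bug", "research", "bug",
                 "research", "research", "research", "bug"],
})

monthly = (tickets
           .groupby([tickets["opened"].dt.to_period("M"), "category"])
           .size()
           .unstack(fill_value=0))
print(monthly)
```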

In conclusion, there are a number of reasons why a BI project can be difficult to implement, but not all of them are related to ‘BI technology’ per se. Challenged fundamental truths, brain pain, and good old-fashioned learning anxiety play a big role in the perceived success of an implementation.

Why are BI projects so difficult to implement? part 2.5

…so I hadn’t planned on exploring the next piece of the puzzle so soon, but this article jumped out at me. I just finished reading “Achieving Greater Agility with Business Intelligence” from TDWI and found several interesting comments.

The article centered on ‘faster decision cycles and competitive pressures’, but it had a few points that were relevant to our topic. For instance:

Page 7, paragraph 2 states: “Amid this instability and increased—sometimes unexpected—competition, executives and managers doubt whether their forecasts will hold true. Operations managers have difficulty allocating resources and personnel because they lack confidence in their organizations’ planning and budget assumptions.” – Really? This sounds suspiciously like ‘challenging fundamental truths’ from my part-2 article.

Page 10, paragraph 2: “For better agility, data has to be put in a format that makes it relevant to the decision process.” – Agreed, and introducing a change to a business process (paragraph 2 from my part-2 article) can be a big hit to user adoption if not introduced properly.

Additionally (from page 10, paragraph 3): “We asked research respondents how satisfied different types of users in their organizations are with their ability to access and analyze information to achieve objectives for which they are held accountable.” – And the highest level of satisfaction was from… Finance.

Really? Finance? Well, those people are just a bunch of number crun… oh wait, I get it: number crunchers are most comfortable with BI applications because they don’t get brain pain as easily as others. That sounds familiar as well. :-)

OK, so we have shown that we’re on the right track with our ‘pain and difficulties’ theory, but how do we mitigate it? We’ll talk about options and responsibilities next time.

Why are BI projects so difficult to implement? part 2

In a previous post (here) I posed the question of why BI projects are so difficult to implement, and promised to address mitigating the pain that our project teams and clients typically experience. But before we can do that, we need to explore where the pain and difficulty come from.

Difficulties:
- Unfamiliar technologies, black-box processes, and governance concepts are difficult to explain, and when the business tries to visualize how these concepts interact with one another, the picture is cloudy at best.
- New business processes (e.g. drilling down in a dashboard vs. looking up a spreadsheet cell) have to be learned, and the average age of our users doesn’t always bode well for this exercise.
- Fundamental ‘truths’ are often challenged by BI projects.

What do I mean by ‘fundamental truths’? The business is used to looking at their data through a very familiar lens. It could be Excel, it could be a report, but it’s familiar to them. They trust it. Or at least they know which pieces to trust. When a new way of viewing the data is introduced, it not only shines the light from a different direction (i.e. slice and dice); in some cases it exposes that their old trusted views were actually wrong (and that is a hard conversation to have).
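To make that concrete, here is a minimal sketch in Python; the order data and the hidden business rule are hypothetical, invented purely for illustration.

```python
# Hypothetical illustration: the familiar spreadsheet-style total quietly
# includes cancelled orders, while the new platform applies the business
# rule and reports a different number for the "same" metric.
orders = [
    {"id": 1, "amount": 100, "status": "shipped"},
    {"id": 2, "amount": 250, "status": "cancelled"},
    {"id": 3, "amount": 175, "status": "shipped"},
]

old_view_total = sum(o["amount"] for o in orders)          # 525
new_view_total = sum(o["amount"] for o in orders
                     if o["status"] != "cancelled")        # 275

# Same data, two answers; explaining which lens was wrong is the hard part.
print(old_view_total, new_view_total)
```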

Now, we’re not forgetting that the actual development of the BI platform is difficult, but the people doing that work actually enjoy it, so it no longer factors into our discussion.

Why are BI projects so difficult to implement?

I’m not really asking the question; I’m setting the stage for a topic that has been on my mind recently. For background, I grew up through the data architecture ranks, so the problems I see our project teams experiencing seem natural to me (if not basic). In my mind it’s always been this difficult, but I just got used to it, so I don’t think about it any longer. However, that doesn’t help the teams that have never experienced the ‘data anomaly’ issue and had to spend a week chasing ghosts in the data.

For starters, let’s define two different projects:
Project 1 is a web design project that has a couple of forms that allow the user to enter data.
Project 2 takes that data and combines it with some other data and produces a couple of reports.

The testing for Project 1 is fairly straightforward, e.g.: Are all of the pages there? When I click Save, does the data get stored, and can I retrieve it later?
**Disclaimer: I realize web projects can be more complex. This is just an example.**

The testing for Project 2 is much different, however. For example:
- What business rules are governing the combination of the two datasets?
- What business rules were already applied to the dataset we’re combining with?
- Are the two datasets at a level of granularity (detail) at which they logically should be combined? (See the sketch after this list.)
- Finally, can the client visualize in their mind how these factors interact with each other and how this new dataset will be used?
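Here is a minimal sketch in Python (pandas) of why that granularity question matters; the tables and column names are hypothetical. Joining daily order rows to a monthly budget table silently repeats the budget value for every matching order row:

```python
# Hypothetical illustration of a granularity mismatch: daily orders joined
# to a monthly budget table, then naively aggregated.
import pandas as pd

orders = pd.DataFrame({
    "month":  ["2013-01", "2013-01", "2013-02"],
    "amount": [100, 250, 175],
})
budget = pd.DataFrame({
    "month":  ["2013-01", "2013-02"],
    "budget": [300, 200],
})

combined = orders.merge(budget, on="month")

# Summing the joined table double-counts January's budget (600, not 300),
# because the two datasets sit at different levels of granularity.
print(combined.groupby("month")[["amount", "budget"]].sum())
```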

Obviously there are a lot of other factors that can affect the outcome of Project 2, but the idea here is to point out that data analytics projects are built around ‘black box’ functionality that is difficult for people to understand. Understanding the individual steps is one thing, but visualizing a working data analytics machine is something completely different.

In reality, if Project 2 were a true BI project there would probably be a number of additional black boxes (e.g. ODS, DM, DW, cube, semantic layer, universe), and every one of them adds another layer of complexity to the final solution, making it even more difficult for our clients to fully grasp what is being built.

So what do we do to make this process as painless as possible? Let’s explore this in the next post…

BI trends for 2013

While attending the SharePoint conference a couple of weeks ago, I noted the three major technology trends being presented: Social, Cloud, and Mobile. While I wasn’t surprised that these three were at the top of the list, I did wonder whether the sentiment was the same across the industry and what that would mean for BI delivery, so I decided to look.

Gartner had recently released its Top 10 Technology Trends for 2013, which I have listed below.

- Mobile device battles (Windows 8 was mentioned by David Cearley of Gartner)
- Mobile applications and HTML 5
- Personal Cloud (Shift from personal computing to how services are delivered to the consumer)
- Internet of Things
- Hybrid IT & Cloud Computing (IT as a Service Broker for this capability)
- Strategic Big Data
- Actionable Analytics
- Mainstream In-memory Computing
- Integrated Ecosystems (Pendulum moving back towards Tightly Integrated vs. Best of Breed)
- Enterprise App Stores

* Social was on the 2011 top 10 list

It was certainly easy to see the theme emerging, but how, I wondered, would this affect BI delivery?

After attending several sessions, it became clear that not only is the SharePoint 2013 stack clearly in line with these technology changes, but the self-service BI stack is following closely. Check back with this blog as we explore how these major trends are baked into the products you are most familiar with.

Microsoft is raising the bar on self-service BI!

I’m attending the SharePoint conference in Las Vegas, and man, are there some sights to see. Sleek and inviting: everything you would want in a BI platform, of course!

I work extensively with all of the technologies in the Microsoft BI stack, but I have already realized this week that the changes coming in the SharePoint and Office 2013 wave will affect how we address our customers’ analytic needs.

More on this soon…

SQL Server 2012 ad-hoc reporting in action

As the date of the SQL Server 2012 virtual launch event draws near, I’m getting more and more requests for client demos. And this should be no surprise, as the new platform shows very well! So in an effort to spread the mindshare, here’s a link to a nice overview of PowerPivot and Power View in SQL Server 2012.

PowerPivot and Power View in action.

Remember, these demos can be tailored for specific scenarios so feel free to request a private showing!

EIM awareness with SQL Server 2012

I just finished reading the results of an interview I gave earlier this week to the technology webzine www.CRN.com regarding the pending release of SQL Server 2012. I was interested to see which parts of my 30-minute discussion with the technical writer would make it into the article. I was happy to see that the writer picked up on a key point that I made: “that the platform is maturing across the EIM landscape”. Read the entire article here: CRN Article

So what does this mean? It means the SQL Server platform is now driving EIM awareness into organizations of all sizes. Is your organization ready?