Duane Schafer, Author at Perficient Blogs https://blogs.perficient.com/author/dschafer/

Power BI Primer – 4 part series https://blogs.perficient.com/2014/10/21/power-bi-primer-4-part-series/ (Tue, 21 Oct 2014)

In previous posts we’ve discussed how to introduce advanced analytics into your BI platform, and along the way we introduced several new technologies, ranging from self-service query tools to cloud-based visualizations. Even though the previous scenario was based on the healthcare industry, the concepts and technologies can be applied across all industries.
But how do you get started exploring these new technologies? Use this four-part series as your guide:
Video 1 – Introduction to Power BI
Video 2 – Administration and Permissions in Power BI
Video 3 – Data Exploration and Visualization in Power BI
Video 4 – Data Management Gateway for Power BI
Also, don’t forget to register for our upcoming webinar on implementing hybrid architectures in your organization!


Visualization options with Microsoft https://blogs.perficient.com/2014/09/25/visualization-options-with-microsoft/ (Thu, 25 Sep 2014)

I’ve been speaking to a lot of clients lately about the visualization capabilities of the Microsoft BI platform and want to clarify a point of confusion. When building an enterprise analytics platform you will face several decisions around architecture as well as delivery. The architectural options will be vetted by your IT department, but in large part they will be driven by how you want to deliver and consume information in your organization. Typically there will be a balance between ‘traditional’ BI delivery and ‘self-service’ BI delivery.
What’s the difference? Traditional BI delivery comes in the form of reports and dashboards that are built by your IT department with tools such as SSRS or PerformancePoint. Both are solid tools with a lot of functionality. In contrast, most organizations are looking for ways to reduce their dependency on IT-built reports and therefore need a technology that enables their business users to be self-sufficient. This comes in the form of Excel with PowerPivot and Power View.
A complete explanation of these new tools can be found here.
Feel free to contact us on how these tools can be used in your enterprise to deliver pervasive insights!

Advanced analytics in healthcare with Epic, SQL Server and Azure https://blogs.perficient.com/2014/08/15/advanced-analytics-in-healthcare-with-epic-sql-server-and-azure/ (Fri, 15 Aug 2014)

Over the past months we have released a lot of information on building analytic platforms in healthcare. Several members of my team have played key architectural roles in not only implementing the Cogito platform and performing readmission analysis with it, but also expanding the platform to include customer satisfaction data from Press Ganey.
These functions were deemed critical to the initial phases of these projects, but they are largely ‘back-end’ architectural projects. They do not address the ad-hoc analysis needs of the business, the delivery technologies available, much less the predictive capabilities that can be added to the platforms.
Fortunately there are a lot of new technologies in the Microsoft stack to address these needs.
As part of our advisory services to help our clients understand what new capabilities they have with their new platforms, we regularly build concept visualizations. The following videos are examples of out-of-the-box capabilities we built for one of our clients utilizing:
Self-service analytics with Power Pivot and Power View
3D visualizations with Power Map
And finally natural language query processing in the cloud with Q&A in Power BI
These technologies are well known and are being leveraged within several of our large clients, but a couple of recent announcements from Microsoft introduce even more exciting capabilities.
Power View now supports forecasting. This is a great new addition, currently available in the HTML5 version of Power View in Power BI. It gives the user the ability to quickly forecast a trend line, account for seasonality and even adjust the confidence intervals of the calculation. Below is a screenshot of some readmission forecasting being performed on the dataset from the earlier videos.
Screenshot: Forecasting
Important to note is that you not only see the forecasted line (the light blue line running through the gray box in the top chart), but the second chart also shows the hindcasting feature, which lets a user start a forecast in the past in order to see how accurate it would have been against real data (the light blue line to the left of the gray box in the second chart).
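Power View’s forecasting engine isn’t something you script against directly, but the underlying idea is easy to prototype. Below is a minimal sketch in Python, assuming a hypothetical monthly readmission-count series: Holt-Winters exponential smoothing stands in for Power View’s (unpublished) algorithm, the last six months are held out to mimic the hindcasting feature, and a rough confidence band is derived from the in-sample residuals.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly readmission counts with yearly seasonality.
idx = pd.date_range("2011-01-01", periods=36, freq="MS")
rng = np.random.default_rng(42)
season = 10 * np.sin(np.arange(36) * 2 * np.pi / 12)
readmissions = pd.Series(100 + season + rng.normal(0, 3, 36), index=idx)

# Hindcast: hold out the last 6 months, fit on the rest, forecast forward.
train, holdout = readmissions[:-6], readmissions[-6:]
fit = ExponentialSmoothing(train, trend="add", seasonal="add",
                           seasonal_periods=12).fit()
forecast = fit.forecast(6)

# Crude 95% band from in-sample residuals; Power View exposes the
# confidence interval as an adjustable setting in its UI.
resid_sd = (train - fit.fittedvalues).std()
report = pd.DataFrame({"actual": holdout.round(1),
                       "forecast": forecast.round(1),
                       "lower": (forecast - 1.96 * resid_sd).round(1),
                       "upper": (forecast + 1.96 * resid_sd).round(1)})
print(report)
```

Comparing the forecast column against the actuals we withheld is exactly the sanity check the hindcasting view gives you visually.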
While valuable and easy to use, this technology doesn’t give us the ability to predict who is at risk of readmitting. For that, we need a more powerful tool.
Azure Machine Learning Services is a recently announced cloud service for the budding data scientist. Through a drag-and-drop interface you can now build experiments around predictive models, train and score the models, and even evaluate the accuracy of different algorithms within your model.
The screenshot below shows an experiment that was built against the same readmission data used in the forecasting example (the Epic Cogito dataset). The dataset was modified to flatten multiple patient admissions onto one record and included the following attributes, among others:
Screenshot: Attributes
The experiment was then created to compare two different classification algorithms: a boosted decision tree versus logistic regression. (Note: this post is not intended to debate the accuracy or appropriate use of these particular algorithms; they are simply the two I used.)
Screenshot: Model
Once the experiment is complete and evaluated, a simple visual inspection shows the accuracy gains one algorithm has over the other.
Screenshot: Results
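The Azure ML designer handles this comparison through its drag-and-drop modules, but the same head-to-head check can be sketched in plain Python with scikit-learn. Everything below is illustrative: the file name and the label column are hypothetical stand-ins for the flattened Cogito extract, which isn’t reproduced here.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("flattened_admissions.csv")    # hypothetical extract
y = df["readmitted_30d"]                        # hypothetical label column
X = df.drop(columns=["readmitted_30d"])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Score a boosted tree against logistic regression on the same split.
for name, clf in [("boosted decision tree", GradientBoostingClassifier()),
                  ("logistic regression", LogisticRegression(max_iter=1000))]:
    clf.fit(X_train, y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

As in the designer, the point is not which algorithm ‘wins’ here but that both are scored on the same held-out data, so the comparison is apples to apples.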
After some tweaking (and this model still needs it) there is a simple process to create a web service, with an associated API key, which you can use to integrate the model into a readmission prediction application that accepts single-record or batch inputs.
Screenshot: API
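Consuming the published service is then a plain REST call with the API key in an Authorization header. The sketch below shows the general shape of a batch request; the endpoint URL, key, and column names are placeholders, and the exact input schema comes from the API help page Azure ML generates when you publish the service.

```python
import requests

ENDPOINT = "https://services.azureml.example/score"  # placeholder URL
API_KEY = "YOUR_API_KEY"                             # placeholder key

payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["age", "length_of_stay", "prior_admits"],  # hypothetical
            "Values": [["67", "5", "2"],   # batch input:
                       ["54", "2", "0"]],  # one row per patient record
        }
    },
    "GlobalParameters": {},
}

resp = requests.post(ENDPOINT, json=payload,
                     headers={"Authorization": f"Bearer {API_KEY}"})
resp.raise_for_status()
print(resp.json())  # scored labels/probabilities come back row by row
```

The same call with a one-element Values list covers the single-record case, which is what an interactive readmission-prediction screen would use.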
As you can see, there are a number of options for introducing advanced analytics into your healthcare environment. Feel free to contact me with questions on how these tools can be put to work in your new healthcare analytics platform.

“Accelerate your Insights” – Indeed! https://blogs.perficient.com/2014/04/15/accelerate-your-insights-indeed/ (Wed, 16 Apr 2014)

I have to say, I was very excited today as I listened to Satya Nadella describe the capabilities of the new SQL 2014 data platform during the Accelerate your Insights event. My excitement wasn’t piqued by the mechanical wizardry of working with a new DB platform, nor was it driven by a need to be the first to add another version label to my resume. Considering that I manage a national Business Intelligence practice, my excitement was fueled by seeing Microsoft’s dedication to providing a truly ubiquitous analytic platform that addresses the rapidly changing needs of the clients I interact with on a daily basis.

If you’ve followed the BI/DW space for any length of time you’re surely familiar with the explosion of data, the need for self-service analytics and perhaps even the power of in-memory computing models. You probably also know that the Microsoft BI platform has several new tools (e.g. PowerPivot, Power View, etc.) that run inside of Excel while leveraging the latest in in-memory technology.

But… to be able to expand your analysis into the Internet of Things (IoT) with a new Azure Intelligent Systems Service, and apply new advanced algorithms, all while empowering your ‘data culture’ through new hybrid architectures… that was news to me!

OK, to be fair, part of that last paragraph wasn’t announced during the keynote; it came from meetings I attended earlier this week that I’m not at liberty to discuss. But suffice it to say, I see the vision!

What is the vision? The vision is that every company should consider what their Data Dividend is.


Diagram: Microsoft Data Dividend Formula

Why am I so happy to see this vision stated the way it is? Because for years I’ve evangelized to my clients to think of their data as a ‘strategic asset’. And like any asset, if given the proper care and feeding, you should expect a return on it! Holy cow and hallelujah, someone is singing my song!! 🙂

What does this vision mean for our clients? From a technical standpoint it means the traditional DW, although still useful, is an antiquated model. It means hybrid architectures are our future. It means the modern DW may not be recognizable to those slow to adopt.

From a business standpoint it means that we are one step closer to being constrained only by our imaginations on what we can analyze and how we’ll do it. It means we are one step closer to incorporating ambient intelligence into our analytical platforms.

So, in future posts and an upcoming webinar on the modern DW, let’s imagine…

The state of social sentiment analysis – Part 1 https://blogs.perficient.com/2013/10/18/social-sentiment-analysis-pt-1/ (Fri, 18 Oct 2013)

Sentiment analysis is arguably one of the fastest growing concepts in the analytics space and is poised to become a major source of information for enterprise BI programs in the very near future. The idea of receiving instant feedback from a consumer base, enabling near real-time follow-up or correction, holds incalculable ROI. Unfortunately, the social analytics space is complicated by ambiguous terms, overlapping capabilities and merging suppliers.
To better understand the landscape and the potential impact to an analytics platform, some general terms need to be defined. Social Sentiment Analysis, Social Analytics and Social Listening are assumed here to be synonymous, defined as the ability to tap into the stream of social feedback for a given topic, identify global, regional or local sentiments, and understand the context in which they were given. Additionally, collecting metadata about the statement for further analysis is assumed to be a default capability.
So how does one ‘tap into’ the social stream? Who are the companies providing this service and what are the impacts to a BI program? To answer these questions the following categories need to be defined:
– SaaS products with integrated social analytics capabilities
This is a group of companies that have a SaaS-based product that has been enhanced with social analytic capabilities. Salesforce.com is a good example: in 2011 it acquired Radian6 to enhance its CRM product with ‘social listening’ capabilities.
– Social Analytic SaaS providers
This group of companies has built SaaS-based products specifically for social analytics. They ‘aggregate’ social data to some degree and provide a web-based user interface with some form of dashboarding capability. Their service offerings typically do not include any form of data extraction beyond basic exporting, and the user is locked in to using their proprietary dashboard platform. Examples of this type of provider are Tquila, Sysomos, Spredfast, and SimplyMeasured.

A few companies fall into a subset of this group and can be thought of as ‘Hadoop integration companies’. These platforms are built to run natively on a particular distribution of Hadoop but generally provide the same type of service. Examples of this subgroup are Datameer and Karmasphere.
– Social Data Aggregators
Lastly, this group focuses solely on building a standardized ‘pipeline’ of social data, which is the primary service they sell. This group builds connections to the APIs of Twitter, Facebook, and others; standardizes, aggregates and enriches the data; and finally provides a platform to filter it. No additional analysis is performed on the data, and proprietary dashboard capabilities are typically not available.
Interestingly, these data aggregators also sell their data streams to a large number of the Social Analytic SaaS providers, as they typically hold exclusive reseller rights from the larger social platforms like Twitter and Facebook. Examples of this type of provider are GNIP and Datasift.
Placing companies in one of these categories does not diminish their value relative to another; they simply provide different services for different needs. As an example, significant insight can be gained by running a one-time analysis from almost any Social Analytics SaaS provider.
Below we see the results of an analysis that was run on Bose for the last week of June 2013:
Diagram: Brand and Competitor Sentiment
At a quick glance, we can see that sentiment is positive for all monitored brands. The fact that Bose shows a bigger sphere indicates there are more conversations about this brand than the others.
Diagram: Social Analytics
This dashboard shows a set of metrics, the most relevant being the share of voice between brands, which illustrates which brands are more present in social media conversations.
However, understanding that most clients wish to integrate this type of social data with their core ERP, manufacturing or other operational data in order to glean further insights, we must eliminate the providers with no export capabilities and bypass those that require the use of proprietary analytic platforms. We must build our own social sentiment analysis platform and tap directly into the social pipeline.
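As a first taste of what that might involve, here is a deliberately tiny sketch of the core of such a platform: take a standardized stream of posts (the shape an aggregator like GNIP or Datasift might deliver), score each with a lexicon, and roll the results up by brand. The record layout and the lexicon are illustrative only, not any vendor’s schema.

```python
from collections import defaultdict

# Toy lexicon; a real platform would use a trained model or a
# commercial-grade dictionary with negation handling.
LEXICON = {"love": 1, "great": 1, "awesome": 1,
           "hate": -1, "broken": -1, "terrible": -1}

def score(text: str) -> int:
    """Naive lexicon score: +1 per positive word, -1 per negative word."""
    return sum(LEXICON.get(w, 0) for w in text.lower().split())

def roll_up(posts):
    """Aggregate post-level scores into per-brand totals (share of voice
    falls out of the post counts)."""
    totals = defaultdict(lambda: {"posts": 0, "sentiment": 0})
    for p in posts:
        brand = totals[p["brand"]]
        brand["posts"] += 1
        brand["sentiment"] += score(p["text"])
    return dict(totals)

stream = [  # stand-in for the aggregator's standardized feed
    {"brand": "Bose", "text": "Love my new headphones, great sound"},
    {"brand": "Bose", "text": "Left speaker is broken, terrible support"},
    {"brand": "CompetitorX", "text": "Awesome build quality"},
]
print(roll_up(stream))
```

Because the output is just rows of brand-level metrics, it can land in the same warehouse as the ERP and manufacturing data, which is precisely the integration the proprietary dashboards block.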
We’ll explore the implications of this in the next post…

Technology Confusion https://blogs.perficient.com/2013/07/18/technology-confusion/ (Thu, 18 Jul 2013)

While returning from a client presentation and reflecting on the meeting conversations, I was struck by a line of questioning that seems to be creeping into the minds of our clients.

While discussing our approach to performing a strategy assessment for this new client, we were reviewing an example architectural diagram when a question was raised. One of the business sponsors commented that the ‘Operational Data Store’ referenced on the diagram seemed like an ‘archaic’ term from the past that might not be appropriate for their new platform. I explained that they may need a hybrid environment and that each technology had its place.

However, on the plane ride home I realized that I had heard a similar question just a few weeks earlier. I was at a different client (manufacturing as opposed to software), in a different part of the country, speaking about a different type of proposal (although both would have resulted in architectural enhancements), and a stakeholder asked about ‘new data warehouse’ technology such as Hadoop replacing the ‘older’ data warehouse paradigm we were discussing.

On both occasions I knew that the client wasn’t challenging my ideas as much as wanting to understand my recommendations better. What I knew in my head, but had failed to initially describe to both clients, was the concept of ‘replacement’ technologies versus ‘complementary’ technologies. Honestly, it had never occurred to me that I needed to make such a designation since I wasn’t recommending both technologies. The client introduced the newer technology into the discussion, at which point I fell victim to the assumption that both clients had a base understanding of what the different technologies were used for.

To be clear, we’re talking about the Big Data technology Hadoop and the well-known process of building a data warehouse with the Kimball or Inmon approach. The former is relatively new and getting a lot of airplay as the latest thing. The latter is well known but has had its share of underwhelming successes.

So is Hadoop the new replacement for ‘traditional’ data warehousing? For that matter, is self-service BI a replacement for traditional dashboarding and reporting? How about Twitter, is it the replacement for traditional email or text messaging?

The answer is No. All of the technologies described are complementary technologies, not replacement technologies. These technologies offer additional capabilities with which to build more complete systems, but in some cases, certainly that of data warehousing, our clients are confusing them as carte blanche replacement options.

Considering that clear and concise messaging is fundamental to successful client engagements, I encourage all of our consultants to consider which category their respective technologies fall into and to make sure their clients understand that positioning within their organization.

Why are BI Projects so difficult to implement? part 3 https://blogs.perficient.com/2013/07/10/why-are-bi-projects-so-difficult-to-implement-part-3/ (Wed, 10 Jul 2013)

You can find the previous posts to this topic here: Part 1, Part 2, Part 2.5

So how do we reduce the amount of anxiety related to a BI project (thus making it less painful)? To start, the project team needs to be keenly aware of the following:

They must consistently over-communicate that some process change will happen and what it is.

Why must they over-communicate? It takes time for people to assimilate new ideas, and this happens only after they actually start listening. A standard metric in television marketing states that a consumer must make 3 ‘mental connections’ to a TV ad before deciding if the product is relevant to them. Is your process change more interesting than that TV ad?

Ensure that everyone feels they are in this learning process together. In other words, don’t let some users put themselves on an island.

We’ve all been in a class at some point where someone started to fall behind and was too embarrassed to raise their hand, again. Don’t let your users fall behind.

Realize that the mechanical capabilities of moving through a new report or dashboard are not inherent in all users and may demand ‘more obvious’ training.

I just finished reading an interesting article about technology anxiety in an older workforce, and the study cited interface design as a leading cause of anxiety; specifically, the practice of designing an interface with a ‘layered menu’ system in which the user must remember that there are ‘invisible options’ and the sequence of actions needed to find them. Dashboards typically employ this functionality through the ‘right-click’ context menu.

I immediately thought of the introduction of the Microsoft ‘ribbon’ and wondered whether the ‘invisible option’ problem was a leading factor in that design change.

Finally, be prepared for the data anomaly effect.

I’ve written about this before but it remains relevant. Users of a new analytical platform need to be prepared for the fact that they will be faced with data anomalies. Some of these anomalies will turn out to be actual bugs, but some will not. Those that are not obvious bugs will require research. Research means delayed responses back to the business, and delayed responses mean a frustrated business user if the project team has not embraced the recommendation of over-communicating.

Over time, the number of bugs will decrease, but the number of research requests is likely to increase (as shown in the example below), especially as new users are rolled on.

Additionally, a series of events can cause these numbers to fluctuate. Version releases are an obvious source of bugs, but what about a key team member leaving?

In the example below, during October of 2012, we see a spike in research requests but no spike in bugs and no additional users added. The only major event that occurred was a new member joining the team. We can infer either that the previous support person had direct lines of communication open to the users (it is very likely that hallway discussions answered some users’ questions) or that the new team member is recording ‘discussions’ differently. Regardless, this event may be a source of frustration to the business that has no direct tie to the functionality or stability of the system.

Production Support Analysis (Example)

In conclusion, there are a number of reasons why a BI project can be difficult to implement, but not all of them are related to ‘BI technology’ per se. Challenging fundamental truths, brain pain and good old-fashioned learning anxiety all play a big role in the perceived success of an implementation.

Why are BI projects so difficult to implement? part 2.5 https://blogs.perficient.com/2013/02/05/why-are-bi-projects-so-difficult-to-implement-part-2-5/ (Wed, 06 Feb 2013)

…so I hadn’t planned on exploring the next piece of the puzzle so soon, but this article jumped out at me. I just finished reading “Achieving Greater Agility with Business Intelligence” from TDWI and found several interesting comments.

The article was centered on ‘faster decision cycles and competitive pressures’, but it had a few points that were relevant to our topic. For instance:

Page 7, paragraph 2 states: “Amid this instability and increased—sometimes unexpected—competition, executives and managers doubt whether their forecasts will hold true. Operations managers have difficulty allocating resources and personnel because they lack confidence in their organizations’ planning and budget assumptions.” Really? This sounds suspiciously like ‘challenging fundamental truths’ from my part 2 article.

Page 10, paragraph 2: “For better agility, data has to be put in a format that makes it relevant to the decision process.” Agreed, and introducing a change to a business process (paragraph 2 from my part 2 article) can be a big hit to user adoption if not introduced properly.

Additionally (from page 10, paragraph 3): “We asked research respondents how satisfied different types of users in their organizations are with their ability to access and analyze information to achieve objectives for which they are held accountable.” And the highest level of satisfaction was from… Finance.

Really? Finance? Well those people are just a bunch of number crun… oh wait, I get it: number crunchers are most comfortable with BI applications because they don’t get brain pain as easily as others. That sounds familiar as well. 🙂

OK, so we have shown that we’re on the right track with our ‘pain and difficulties’ theory, but how do we mitigate it? We’ll talk about options and responsibilities next time.

Why are BI projects so difficult to implement? part 2 https://blogs.perficient.com/2013/02/01/why-are-bi-projects-so-difficult-to-implement-part-2/ (Fri, 01 Feb 2013)

In a previous post (here) I posed the question regarding the difficulty we experience when implementing a BI project, and promised to address mitigating the pain that our project teams and clients typically experience. But before we can do that, we need to explore where the pain and difficulty come from.

Difficulties:
– Unfamiliar technologies, black box processes and governance concepts are difficult to explain, and when the business tries to visualize how these concepts interact with one another, the picture is cloudy at best.
– New business processes (e.g. drilling down in a dashboard vs. looking up a spreadsheet cell) have to be learned, and the average age of our users doesn’t always bode well for this exercise.
– Fundamental ‘truths’ are often challenged with BI projects.

What do I mean by ‘fundamental truths’? The business is used to looking at their data through a very familiar lens. It could be Excel, it could be a report, but it’s familiar to them. They trust it. Or at least they know which pieces to trust. When a new way of viewing the data is introduced, it not only shines the light from a different direction (i.e. slice and dice), in some cases it exposes that their old trusted views were actually wrong. (And this is a hard conversation to have.)

Now, we’re not forgetting that the actual development of the BI platform is difficult, but the people doing that work actually enjoy it, so it no longer factors into our discussion.

So do the difficulties described above actually cause pain, or do we generate that through poor management and communication of the project?

A recent study (here) published by the University of Chicago states that, in fact, yes, some people experience physical pain when they think they are faced with a math question.

A math question? Well, we’re not asking them to do math, are we?

Of course we are! Maybe we’re not asking them to recite algebra laws, but the only reason we’re doing business with them is that it’s too difficult or time consuming for them to produce their monthly reports the same old way.

As a simple example of this issue, please verify that the dashboard below is correct by counting the number of red cells, and we’ll compare your answer to the actual number later in the post.

Screenshot: Heat map dashboard

Now, I didn’t expect anyone to actually count the number of red squares, but if you started to, I’m assuming the first thing you did was furrow your brow, squint your eyes and maybe put your hand to your head. If you actually counted them, I’m assuming you now have questions like “Which shade of red were you referring to?” or “Why are there two shades of red? Is that a bug?”

Oh, by the way, did anyone question why the Apr-08 total GM% is so green despite the fact that it has so much red?

….. so does it feel like a math problem yet?

As you can see, this line of questioning could go on and on, and we haven’t even determined whether any cells should be red, because we don’t know the algorithm that decides the color; for now we’ll have to trust the author.
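To make that concrete, here is a purely hypothetical version of the kind of rule that might be deciding those colors. Until a rule like this is documented, no user can verify which cells ‘should’ be red, or why two shades of red exist:

```python
# Hypothetical conditional-format rule -- illustrative only. Two red
# bands in one rule is all it takes to trigger the "which red did you
# mean?" question from the post.
def gm_color(gm_pct: float) -> str:
    if gm_pct < 0.10:
        return "dark red"    # badly below target
    if gm_pct < 0.20:
        return "light red"   # below target
    if gm_pct < 0.30:
        return "yellow"      # near target
    return "green"           # at or above target

for value in (0.05, 0.15, 0.25, 0.40):
    print(f"GM% {value:.0%} -> {gm_color(value)}")
```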

In the next post we’ll start to explore how to reduce the trauma inflicted by this process, but in the meantime, if someone has the answer to the original question can you send it to me? I have a headache…

Why are BI projects so difficult to implement? https://blogs.perficient.com/2013/01/16/why-are-bi-projects-so-difficult-to-implement/ (Wed, 16 Jan 2013)

I’m not really asking the question; I’m setting the stage for a topic that has been on my mind recently. For background, I grew up through the data architecture ranks, so the problems I see our project teams experiencing seem natural to me (if not basic). In my mind it’s always been this difficult, but I just got used to it, so I don’t think about it any longer. However, that doesn’t help the teams that have never experienced the ‘data anomaly’ issue and had to spend a week chasing ghosts in the data.

For starters, let’s define two different projects:
Project 1 is a web design project that has a couple of forms that allow the user to enter data.
Project 2 takes that data and combines it with some other data and produces a couple of reports.

The testing for Project 1 is fairly straightforward, e.g. Are all of the pages there? When I click Save, does the data get stored, and can I retrieve it later?
**Disclaimer: I realize web projects can be more complex. This is just an example.**

The testing for Project 2 is much different, however. For example:
– What business rules are governing the combination of the two datasets?
– What business rules were already applied to the dataset we’re combining with?
– Are the two datasets at a level of granularity (detail) that they logically should be combined?
– Finally, can the client visualize in their mind how these factors interact with each other and how this new dataset will be used?

Obviously there are a lot of other factors that can affect the outcome of Project 2, but the idea here is to point out that data analytics projects are built around ‘black box’ functionality that is difficult for people to understand. Understanding the individual steps is one thing, but visualizing a working data analytics machine is something completely different.
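A tiny, hypothetical illustration of why the granularity question above matters: joining order-line detail to a monthly budget table without aggregating first silently repeats the budget on every line, and any downstream total is quietly inflated.

```python
import pandas as pd

orders = pd.DataFrame({"month": ["Jan", "Jan", "Feb"],
                       "order_id": [1, 2, 3],
                       "amount": [100, 200, 150]})
budget = pd.DataFrame({"month": ["Jan", "Feb"],
                       "budget": [250, 175]})

# Naive combination: the budget repeats on every order line.
naive = orders.merge(budget, on="month")
print(naive["budget"].sum())         # 675 -- Jan's budget counted twice

# The defensible rule: aggregate to the budget's grain before combining.
by_month = orders.groupby("month", as_index=False)["amount"].sum()
print(by_month.merge(budget, on="month"))   # one row per month, sums agree
```

Neither step is hard on its own; the black box is that a business user reviewing the final report has no way to see which of these two rules was applied.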

In reality, if Project 2 were a true BI project there would probably be a number of additional black boxes, e.g. ODS, DM, DW, Cube, Semantic Layer, Universe, etc., and every one of them adds another layer to the final solution, making it even more difficult for our clients to fully grasp the complexity of what is being built.

So what do we do to make this process as painless as possible? Let’s explore this in the next post…

BI trends for 2013 https://blogs.perficient.com/2012/12/04/bi-trends-for-2013/ (Tue, 04 Dec 2012)

While attending the SharePoint conference a couple of weeks ago I noted the three major technology trends being presented, namely: Social, Cloud and Mobile. While not surprised that these three were at the top of the list, I did wonder if the sentiment was the same across the industry and what that would mean for BI delivery, so I decided to look.

Gartner had recently released their Top 10 Technology Trends for 2013 which I have re-listed below.

– Mobile device battles (Windows 8 was mentioned by David Cearley of Gartner)
– Mobile applications and HTML5
– Personal Cloud (Shift from personal computing to how services are delivered to the consumer)
– Internet of Things
– Hybrid IT & Cloud Computing (IT as a Service Broker for this capability)
– Strategic Big Data
– Actionable Analytics
– Mainstream In-memory Computing
– Integrated Ecosystems (Pendulum moving back towards Tightly Integrated vs. Best of Breed)
– Enterprise App Stores

* Social was on the 2011 top 10 list

It was certainly easy to see the theme emerging, but how would this affect BI delivery I wondered?

After attending several sessions it became clear not only that the SharePoint 2013 stack is positioned to track with these technology changes, but also that the self-service BI stack is following closely. Check back with this blog as we explore how these major trends are baked into the products you are most familiar with.

Microsoft is raising the bar on Self-service BI! https://blogs.perficient.com/2012/11/13/microsoft-is-raising-the-bar-on-self-service-bi/ (Tue, 13 Nov 2012)

I’m attending the SharePoint conference in Las Vegas and man are there some sights to see. Sleek and inviting, everything you would want in a BI platform of course!

I work extensively with all of the technologies in the Microsoft BI stack, but have already realized this week that the changes coming in the SharePoint and Office 2013 wave will affect how we address our customers’ analytic needs.

More on this soon…
