Archive for the ‘SQL Server’ Category

Columnstore Indexes: When Should You Use Them?

When I speak to clients about In-Memory features in SQL Server, I find that Columnstore indexes just haven't gained much traction as a marquee feature. The functionality itself is quite useful in the BI/DW realm, potentially boosting query performance by 1.5 to 10 times. But I think it gets overlooked because the use case just isn't obvious from the typical description of it. The explanations I have read/heard/seen of Columnstore and how it works get tedious very quickly.

So I don't want to cover the details of how Columnstore works in this post. I just want to clarify when it might be useful. Then, if it sounds like it fits your situation, you can dive into some links and have all the tedium you want.

So here are, to me, the most pertinent Columnstore facts to be aware of (with a quick T-SQL sketch after the list):

  • It stores data in a columnar data format, heavily compressed, and in-memory — so it’s FAST.
  • It is very focused on large result sets that need to be aggregated or grouped. If you are doing full table scans in your queries, you might be interested.
  • It doesn't strictly require partitioning, but it is designed to work hand in hand with it. If you have a large Fact table that is a candidate for partitioning, this again is potentially right up your alley.
  • Columnstore is not ideal for frequently updated tables. You will end up having to drop and re-create the index before/after data update operations. So a rapid incremental refresh environment is not an ideal fit.  UPDATE: I am reminded by a very helpful colleague that SQL Server 2014 removes this limitation and allows table updates/deletes/etc.  (Thanks Andrew!)
  • Because it is an In-Memory feature, your capability and performance is dependent upon hardware and SQL Server memory configuration.
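
To make the list concrete, here is a minimal T-SQL sketch; the fact table and column names are hypothetical. The SQL Server 2012 form is the read-only nonclustered columnstore index; SQL Server 2014 adds the updatable clustered variety mentioned in the update above.

```sql
-- Minimal sketch (hypothetical table/columns): a nonclustered columnstore
-- index as in SQL Server 2012. Include the columns your aggregation
-- queries actually scan and group on.
CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_FactSales
ON dbo.FactSales (DateKey, ProductKey, StoreKey, SalesAmount, SalesQuantity);

-- SQL Server 2014 alternative: a clustered columnstore index, which stores
-- the whole table in columnar format and remains updatable.
-- CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales ON dbo.FactSales;
```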

If you have large fact tables and query performance issues, and if SSAS is either not an option or itself has performance issues, Columnstore is an option to investigate. Columnstore indexes have been shown to be faster than an Analysis Services cube in some instances! From my perspective, a couple of use case scenarios immediately come to mind:

  • Creation of specific fact structures for highly responsive reports/dashboards — especially in situations where Analysis Services is not an option, or is also not performing adequately
  • Improving cube processing performance (although the drop/rebuild time for the columnstore index will then likely take place during ETL, so the net performance gain would have to be tested; see the sketch after this list)
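
On that rebuild point, the standard SQL Server 2012 load pattern is to disable the columnstore index, run the load, then rebuild. A rough sketch, reusing the hypothetical index from above:

```sql
-- SQL Server 2012 pattern: a table with a nonclustered columnstore index
-- is read-only, so toggle the index around the load and measure the
-- rebuild cost inside your ETL window.
ALTER INDEX ncci_FactSales ON dbo.FactSales DISABLE;

-- ... incremental ETL load into dbo.FactSales goes here ...

ALTER INDEX ncci_FactSales ON dbo.FactSales REBUILD;
```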

For further info, this article seems to be the granddaddy of all columnstore articles. It contains a massive and detailed FAQ, and includes the formula for determining memory capacity.   More focused instructions and examples of creating and using a columnstore can be found here, on Microsoft’s TechNet site. Cheers!

Upcoming Webinar: The Modern Data Warehouse – A Hybrid Story

Organizations of all sizes are looking more seriously at the cloud, and beginning to understand the scalability and cost savings benefits of a hybrid solution. Given the benefits of a properly implemented hybrid solution, it’s logical to consider other areas of the business that can gain from the cloud – one of those being your data analytics platforms.

Data warehousing in the cloud. Sounds almost idyllic, right? Well, it just might be, especially if you are looking for a fast, efficient way to stand up a data warehouse. So, how can you leverage SQL Server 2014 and Azure to build that data warehouse? Find out next week, on Thursday, June 19 at 1 p.m., during a webinar on The Modern Data Warehouse – A Hybrid Story.

Duane Schafer, Microsoft Business Intelligence Practice Director at Perficient, will discuss the hows and whys around implementing your analytics platform in a hybrid environment. While on-premises vs. cloud is the major decision, you'll also want to consider things like "traditional" disk-based storage and computing vs. in-memory.

If you’d like to learn a bit more and can’t wait another week, Duane shares his excitement for the new capabilities of SQL Server 2014 and Azure in a recent blog post.

To register for the webinar, click here.
The Modern Data Warehouse – A Hybrid Story
Thursday, June 19, 2014
1:00 p.m. CT

Leverage Power BI with SQL Server 2014 & Azure: Webinar Recap

Earlier in the year, we held a popular webinar on utilizing Power BI, a new Microsoft offering (then in preview). Of course, much has changed since then, with Power BI now generally available and SQL Server 2014 released.

In light of those changes, my colleague, Andy Tegethoff, a Microsoft BI architect and senior level consultant here at Perficient, again shared his knowledge around Power BI. Click here for the replay.

During Wednesday's session, Andy showed attendees how they can host their entire BI solution in the cloud, using Power BI along with SQL Server 2014 and Azure. He kicked things off with a quick review of Power BI, and then talked about how to access your data with Power BI. Next up was the topic of SQL Server 2014: specifically, what's new with the latest version and what it means for you, the user. For starters, dramatically improved query performance with in-memory DB options, and improved integration with the cloud for backup, hot failover, DR, or even primary data storage. As part of this, Andy also discussed what this all means when integrating with Microsoft Azure.

This segued into Azure Virtual Machines and Big Data with HDInsight, as Andy reviewed what you need to host BI in the cloud using SQL Server 2014 and Azure vs. Power BI in the cloud with Office 365. He wrapped up with advice and lessons learned around cloud BI with SQL Server.

The attendees asked great questions, and Andy had time to answer them. You can view the webinar replay here, including the Q&A. If you are looking to learn more about BI in the cloud, Andy recently authored a white paper, "The Possibilities of Cloud BI with Microsoft Azure," available here.

Webinar: Make the Most of Power BI with SQL Server 2014 & Azure

Within the last few months, you've probably heard about Power BI for Office 365, the new cloud-centric BI front-end tool set from Microsoft. You may have even attended one of our webinars on the topic while Power BI was still in preview; it has since moved to general availability. (If you missed the session, you can view the on-demand webinar, Power BI for Office 365: Using SharePoint to Deliver Self-Service BI, at your leisure.)

Power BI is a collection of features and services that provide you with the ability to visualize data, share discoveries, and collaborate more intuitively. It works seamlessly with Excel: simply use Excel to create data models and visualizations, and then use Power BI to share and extend those insights. More and more, users want self-service access to data of all types, and Power BI delivers on that.

When you take Power BI’s self-service business intelligence and combine it with SQL Server 2014 and Microsoft Azure, you can host your entire BI solution in the cloud, while ensuring greater access to business analytics. And by letting Microsoft manage the infrastructure for you, you can focus on your data and discoveries.

On Wednesday, May 21, 2014, at 1 p.m. CDT, join Andrew Tegethoff, Microsoft BI Architect at Perficient, for a complimentary webinar, Making the Most of Power BI with SQL Server 2014 and Azure. During the session, Andy will show you how to host your analytics and reporting solution in the cloud using Power BI and SQL Server 2014.

In the meantime, if you’d like to learn more about BI in the cloud, Andy wrote a comprehensive guide, The Possibilities of Cloud BI with Microsoft Azure.

To register for the webinar, click here.
Making the Most of Power BI with SQL Server 2014 and Azure
Wednesday, May 21, 2014
1:00 p.m. CDT


Power View Differences between Excel 2013 and SharePoint 2013

I knew that there were differences between Power View in desktop Excel and Power View as run from SharePoint. But I recently felt the impact of this for the first time.

The first confusing point I found was that the Export to PowerPoint function only works for reports created in Power View in SharePoint Server. If you create a Power View report in Excel, you will not be able to export it live into a PowerPoint deck.

So, having encountered this hiccup, I wanted to try to catalog any other practical differences….

Here are a few that I have run into/heard about:

  • Power View in SharePoint Server can create multi-page reports.  In Excel, different Power View reports end up on different tabs. This is a subtle difference, but it’s important to consider if you are trying to tell a story or do drilldown across a number of views.
  • In Excel, there is no connectivity from Power View directly against an Analysis Services DB, either Tabular or Multidimensional. You must use the Data Model (most likely a Power Pivot DB) as a kind of "go-between." In Power View in SharePoint Server, you can create reports directly against Analysis Services via a BISM Connector.
  • With Excel on the desktop, you can manually refresh data from a variety of sources, whether or not your workbook contains Power View sheets. In SharePoint Server, you can set workbooks to automatically refresh such data sources. BUT, you CAN'T set automatic refresh on workbooks with Power View sheets in them.
  • This table, from a page in the Office Help files, clarifies where you can work with Power View depending on where the workbook is created and stored:

| Workbook is stored | Host configuration | Workbook is opened in | Power View sheets are |
| --- | --- | --- | --- |
| On client computer | | Excel 2013 | Editable and interactive |
| On premises | SharePoint view mode (SharePoint Server configured to render workbooks by using Excel Services) | Excel Services | Interactive |
| On premises | Office Web Apps Server view mode (SharePoint Server configured to render workbooks by using Office Web Apps Server) | Excel Online | Not visible |
| In Office 365 | SharePoint Online | Excel Online | Interactive |
| On OneDrive | | Excel Online | Not visible |

So, there’s a good list of differences. And I’m leaving out the whole Power BI for Office 365 angle for the time being.

What it comes down to is that there is not complete compatibility of features across Power View in all situations. So consider what features are important for your situation, and use Power View in the best venue to meet your needs.

Big Data and the Microsoft Data Platform and You

The age of Big Data is upon us, and the age of Pervasive (or “Ambient”) Data is rapidly approaching.

This means not only the typical loosely structured Big Data sources that we've become accustomed to considering, like Twitter feeds, Facebook posts, and web logs, but also the streams of data generated by the growing Internet of Things: a universe of sensors deployed in everything from refrigerators to factory robots, serving purposes from managing self-driving cars to automatically adjusting the air conditioning in our homes.

More and more, the data streams generated by the various elements of our wired world are being pulled together to improve performance, to save money, to enhance profit, and to improve our lives in general. And more and more, as the effects of leveraging all forms of Big Data across a wide spectrum of industries are being felt, there is a stampede of organizations that want to leverage it for all those purposes.

But amidst that stampede are thousands of existing solutions, systems, and products already implemented and in place for analytics and data warehousing. And most of those systems, regardless of platform or vendor, are not equipped to deal with the Volume, Velocity, and Variety of Big Data we are talking about.

Generally, the kind of processing demanded by Big Data takes a Hadoop installation, which may not be practical to implement or maintain, whether as an add-on to these systems or as a from-scratch build.

But an ideal solution to this problem is Azure HDInsight, a 100% Apache Hadoop solution based entirely in the Cloud.   With the scalability, security, and adaptability of the Azure environment, HDInsight can allow you to bring Big Data into an existing solution in a matter of days. The Azure environment is both easy to manage and cleanly integrated with the Microsoft server platform via PowerShell scripting. Further, HDInsight integrates natively with the rest of the Microsoft BI stack (including Office 365 Power BI plus SQL Server 2012 and 2014) using Hive and ODBC.
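
One way to picture that Hive/ODBC integration is a sketch under assumptions, not a recipe: with the Microsoft Hive ODBC driver installed and a system DSN configured, you can expose a Hive table to T-SQL through a linked server. The DSN, linked server, and table names below are all hypothetical.

```sql
-- Sketch: surface an HDInsight Hive table in SQL Server via a linked
-- server over the Hive ODBC driver (assumes a system DSN named 'HiveDSN').
EXEC master.dbo.sp_addlinkedserver
    @server     = N'HDINSIGHT',
    @srvproduct = N'Hive',
    @provider   = N'MSDASQL',
    @provstr    = N'DSN=HiveDSN';

-- The quoted HiveQL is shipped to the cluster; results come back relational.
SELECT *
FROM OPENQUERY(HDINSIGHT,
    'SELECT deviceid, COUNT(*) AS event_count FROM sensor_logs GROUP BY deviceid');
```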

On the other end of the spectrum, for massive quantities of both traditional relational data AND Big Data capability in a single solution, the Microsoft Analytics Platform System is a turnkey answer. The APS consists of a SQL Server Parallel Data Warehouse (PDW) appliance, which optionally includes a private instance of HDInsight, as well as the breakthrough Polybase data querying technology, which allows SQL queries to combine relational and Hadoop data (see the illustrative query below). The APS can:

  • Scale out to 6PB (yes, that’s petabytes) of data
  • Provide up to 100x performance gains over traditional data warehouse architectures
  • Store and integrate traditional relational AND Big Data in the same appliance.

As a bonus, it’s also the cheapest option (per terabyte) of any data warehouse appliance available. :-)
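
To show what Polybase buys you, here is an illustrative query only; the external table definition syntax varies by APS version and is omitted, and every name here is hypothetical. Once SensorReadings is declared as an external table over files in Hadoop, it joins like any ordinary table:

```sql
-- Illustrative Polybase query (hypothetical names): one T-SQL statement
-- spanning an external Hadoop-backed table and a relational dimension.
SELECT d.DeviceType,
       AVG(s.Temperature) AS AvgTemperature
FROM   SensorReadings AS s          -- external table over files in Hadoop
JOIN   dbo.DimDevice  AS d          -- ordinary relational table in PDW
       ON d.DeviceKey = s.DeviceKey
GROUP BY d.DeviceType;
```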

So as the drive to Big Data intensifies, Microsoft’s Cloud OS and other Data Platform offerings are positioned to help organizations of all sizes take advantage.

Pervasive Data in Microsoft’s Cloud OS

As early as the beginning of this year, Microsoft began positioning their “Cloud OS” concept, based on the pillars of Windows Server and Microsoft (formerly Windows) Azure.  This perspective on the spectrum of Microsoft’s offerings casts Cloud OS as a giant next generation operating system, with all the scope and scalability that cloud computing offers.

Complementary to Windows Server and Microsoft Azure, additional supporting technologies provide services like identity management (Active Directory and Azure AD), built-in virtualization capability (Hyper-V), and consolidated management capability (System Center).  Basically, it's a bundle of products that can be mostly seamlessly joined to provide a completely cloud-based computing environment.  Microsoft technologies are increasingly being tailored to help flesh out the Cloud OS story, and they demonstrate Microsoft's pivot towards the "platforms and services" line.

But another critical part of the Cloud OS story is data, and that's where SQL Server comes in.  SQL Server 2014, running on-premises in your datacenter, in the cloud on Azure VMs, or on both, is your modern organization's data backbone.  As part of the Cloud OS story, SQL Server 2014 is a bridge between on-premises and (Azure) cloud-based data assets.  Advanced integration with Microsoft Azure allows SQL Server to support next-generation Hybrid architectures for backups, DR, replication, and data storage/delivery.

Schedule automatic offsite backups.  Pin a frequently used Data Mart in-memory for maximum query performance.  Leverage your Big Data assets against SQL Server data with an Azure HDInsight cluster.  Refresh Power BI content from a Data Warehouse hosted on an Azure VM.  All from within the cost-effective environment of the Cloud.
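
As a taste of the first scenario above, here is a minimal sketch of SQL Server 2014's backup-to-URL against Azure Blob storage; the storage account, access key, container, and database names are placeholders:

```sql
-- Sketch: offsite backup straight to Azure Blob storage (SQL Server 2014
-- Backup to URL). All names below are placeholders for your environment.
CREATE CREDENTIAL AzureBackupCred
WITH IDENTITY = 'mystorageaccount',      -- Azure storage account name
     SECRET   = '<storage access key>';

BACKUP DATABASE SalesDW
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDW.bak'
WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION;
```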

So what does the Cloud OS story mean to users?  It means that what we’re seeing now in terms of shared experiences across our tablets, smartphones, and TVs is just the beginning.  As the era of Big Data continues to dawn, we will all be floating on a sea of data.  And the cloud is where that sea will reside.

The Cloud OS as a whole ultimately empowers consumers and users with computing capability over that sea of data, wherever they are, wherever they want it.  In terms of data, this moves toward the larger goal of giving businesses the ability to identify, gather, and use data from an endless variety of internal and external sources to unlock business insights, and turn that information into action.

That's the idea of pervasive data.  And in Microsoft's Cloud OS story, it's empowered by self-service BI from Office 365 and SharePoint Online, using SQL Server and Azure technologies under the covers but all accessed through an interface as familiar and comfortable as Excel.  And it's coming soon to a device near you…

Declarative data caching in .NET business layer

One of the most effective ways to improve application performance is to implement data caching. Most applications repeatedly retrieve the same data from external sources like a database or web service, and if that source data never or seldom changes, the application is just wasting CPU time and I/O by querying the source again and again. There are a great many ways to cache data: different software frameworks, standalone applications, and even caching appliances. But probably the most widespread and easiest way to cache data is the built-in .NET MemoryCache class, which has provided simple in-process caching since .NET 4.0.

Read the rest of this post »

Webinar Recap: ProHealth Improves in Population Health Management

On Wednesday, ProHealth Care and Perficient teamed up to present a session describing how the healthcare system implemented Epic’s Cogito data warehouse to deliver clinically integrated insights.

During the webinar, Christine Bessler, CIO and VP of Information Technology at ProHealth Care, described the organization’s innovative approach to population health management. Why innovative? Well, for starters, ProHealth Care is the first to use Epic’s Cogito data warehouse in conjunction with the Microsoft BI stack to manage population health. In addition, the healthcare system is the first to use Cogito in a production environment. This has resulted in physicians being able to easily identify gaps in care, and also having access to analytics tools that enhance preventative care and improve chronic disease management.

Christine went on to explain the ProHealth Care journey, including key objectives, the various challenges encountered (from business to clinical to technology) and ProHealth Care’s strategy. She also discussed key requirements including ACO requirements, analytical needs, and how and why they made the platform choice that they did. Of course, she covered the actual solution and the business value derived.

Next, Juliet Silver, Healthcare Strategic Advisory Services Director at Perficient, showed attendees how the BI program, data governance, and the Business Intelligence Competency Center (BICC) were operationalized.

To review the webinar in its entirety, you can listen to the full replay here. You can also read the case study describing the solution.

Case Study: ProHealth Uses Cogito DW to Manage Population Health

Perficient recently worked with ProHealth Care to help the healthcare system with their data architecture efforts, specifically working on the integration of Epic Systems' Cogito data warehouse and several other key data sources surfacing data via SQL Server. ProHealth Care, a community-based healthcare system with 15 primary care clinics, three hospitals, and numerous other services, and the largest healthcare provider in Waukesha County, Wisconsin, sought to create an IT infrastructure that would consolidate reporting and allow physicians to evaluate and modify practice patterns. This included thorough data collection, measurement and analysis, and delivery of data. As an accountable care organization (ACO), the system also wanted to implement a series of new metrics and reports to facilitate physician "pay for performance."

Here is an excerpt from the case study describing the solution:

First, Perficient worked with ProHealth Care to perform a business intelligence assessment, which resulted in a BI strategic roadmap, providing guidance and direction to an implementation plan. Following this assessment, ProHealth decided to use Cogito as the foundation for their data warehouse model in order to meet the growing data and reporting needs and accelerate the development process.

Several factors drove that decision, including:

  • An existing relationship with Epic Systems
  • The need for a central “hub” for changing data requirements
  • Cogito’s capabilities

Epic Systems' Cogito is a new data warehouse that provides the ability to integrate data out of Epic's Clarity database, as well as extend the data model to load data from other external systems and third-party sources. This provided the flexibility to meet the speed-to-market requirements the health system was counting on.

The end result was a data warehouse that integrated data in a way that produces current clinical and business intelligence based on role and workflow, and it provides the health system and its providers with a central data repository for clinical, financial and business data. Read the rest of this post »