Archive for the ‘SQL Server’ Category

Pervasive Data in Microsoft’s Cloud OS

As early as the beginning of this year, Microsoft began positioning their “Cloud OS” concept, based on the pillars of Windows Server and Microsoft (formerly Windows) Azure.  This perspective on the spectrum of Microsoft’s offerings casts Cloud OS as a giant next generation operating system, with all the scope and scalability that cloud computing offers.

Complementary to Windows Server and Microsoft Azure, additional supporting technologies provide services like identity management (Active Directory and Azure AD), built-in virtualization (Hyper-V), and consolidated management (System Center).  Essentially, it's a bundle of products that can be joined, largely seamlessly, to provide a completely cloud-based computing environment.  Microsoft technologies are increasingly being tailored to flesh out the Cloud OS story, and they demonstrate Microsoft's pivot toward the "platforms and services" line.

But another critical part of the Cloud OS story is data, and that's where SQL Server comes in.  SQL Server 2014, running on-premises in your datacenter, in the cloud on Azure VMs, or on both, is your modern organization's data backbone.  As part of the Cloud OS story, SQL Server 2014 is a bridge between on-premises and Azure cloud-based data assets.  Advanced integration with Microsoft Azure allows SQL Server to support next-generation hybrid architectures for backup, disaster recovery, replication, and data storage and delivery.

Schedule automatic offsite backups.  Pin a frequently used Data Mart in-memory for maximum query performance.  Leverage your Big Data assets against SQL Server data with an Azure HDInsight cluster.  Refresh Power BI content from a Data Warehouse hosted on an Azure VM.  All from within the cost-effective environment of the Cloud.
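
For instance, under SQL Server 2014 the offsite-backup scenario comes down to a couple of T-SQL statements once a storage credential is in place. The following is only a sketch; the storage account, container, credential, and database names are hypothetical and would be replaced with your own.

-- One-time setup: store the Azure storage account access key as a SQL Server credential
CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = 'mystorageaccount',
     SECRET = '<storage account access key>';

-- Back up straight to Azure blob storage using the backup-to-URL syntax
BACKUP DATABASE SalesDW
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDW_full.bak'
WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION, STATS = 10;

Wrapped in a SQL Agent job (or handled by SQL Server 2014's managed backup to Azure), this gives you the scheduled offsite backups described above.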

So what does the Cloud OS story mean to users?  It means that what we’re seeing now in terms of shared experiences across our tablets, smartphones, and TVs is just the beginning.  As the era of Big Data continues to dawn, we will all be floating on a sea of data.  And the cloud is where that sea will reside.

The Cloud OS as a whole ultimately empowers consumers and users with computing capability over that sea of data — wherever they are, wherever they want it.   In terms of data, this moves toward the larger goal of giving businesses the ability to identify, gather, and use data from an endless variety of internal and external sources to unlock business insights, and turn that information into action.

That's the idea of pervasive data.  And in Microsoft's Cloud OS story, it's empowered by self-service BI from Office 365 and SharePoint Online, using SQL Server and Azure technologies under the covers but accessed through an interface as familiar and comfortable as Excel.  And it's coming soon to a device near you…

Declarative data caching in .NET business layer

One of the most effective ways to improve application performance is to implement data caching. Most applications repeatedly retrieve the same data from external sources like a database or web service, and if that source data never or seldom changes, the application is simply wasting CPU time and I/O querying the source again and again. There are a great many ways to cache data in an application: different software frameworks, standalone applications, and even dedicated caching appliances. But probably the most widespread and easiest option is the built-in .NET MemoryCache class, which has provided simple in-process caching since .NET 4.0.

Read the rest of this post »

Webinar Recap: ProHealth Improves in Population Health Management

On Wednesday, ProHealth Care and Perficient teamed up to present a session describing how the healthcare system implemented Epic’s Cogito data warehouse to deliver clinically integrated insights.

During the webinar, Christine Bessler, CIO and VP of Information Technology at ProHealth Care, described the organization’s innovative approach to population health management. Why innovative? Well, for starters, ProHealth Care is the first to use Epic’s Cogito data warehouse in conjunction with the Microsoft BI stack to manage population health. In addition, the healthcare system is the first to use Cogito in a production environment. This has resulted in physicians being able to easily identify gaps in care, and also having access to analytics tools that enhance preventative care and improve chronic disease management.

Christine went on to explain the ProHealth Care journey, including key objectives, the various challenges encountered (from business to clinical to technology) and ProHealth Care’s strategy. She also discussed key requirements including ACO requirements, analytical needs, and how and why they made the platform choice that they did. Of course, she covered the actual solution and the business value derived.

Next, Juliet Silver, Healthcare Strategic Advisory Services Director at Perficient, showed attendees how the BI program, data governance, and the Business Intelligence Competency Center (BICC) were operationalized.

To review the webinar in its entirety, you can listen to the full replay here. You can also read the case study describing the solution.

Case Study: ProHealth Uses Cogito DW to Manage Population Health

Perficient recently worked with ProHealth Care to help the healthcare system with their data architecture efforts, specifically working on the integration of Epic Systems' Cogito data warehouse and several other key data sources surfacing data via SQL Server. ProHealth Care, a community-based healthcare system with 15 primary care clinics, three hospitals, and numerous other services, and the largest healthcare provider in Waukesha County, Wisconsin, sought to create an IT infrastructure that would consolidate reporting and allow physicians to evaluate and modify practice patterns. This included thorough data collection, measurement and analysis, and delivery of data. As an accountable care organization (ACO), the system also wanted to implement a series of new metrics and reports to facilitate physician "pay for performance."

Here is an excerpt from the case study describing the solution:

First, Perficient worked with ProHealth Care to perform a business intelligence assessment, which resulted in a BI strategic roadmap, providing guidance and direction to an implementation plan. Following this assessment, ProHealth decided to use Cogito as the foundation for their data warehouse model in order to meet the growing data and reporting needs and accelerate the development process.

Several factors drove that decision, including:

  • An existing relationship with Epic Systems
  • The need for a central “hub” for changing data requirements
  • Cogito’s capabilities

Epic Systems' Cogito is a new data warehouse that provides the ability to integrate data out of Epic's Clarity database, as well as extend the data model to load data from other external systems and/or third-party sources. This provided the most flexibility given the speed-to-market requirements the health system was counting on.

The end result was a data warehouse that integrates data in a way that produces current clinical and business intelligence based on role and workflow, and provides the health system and its providers with a central data repository for clinical, financial and business data. Read the rest of this post »

Changing the SQL mirror endpoint owner breaks mirror in Lync 2013

When you install Lync 2013 and configure the back end for mirroring, a lot of things happen "under the hood" to configure the mirror (and witness, if applicable). Microsoft really has done a fantastic job of hiding the overall complexity of SQL mirroring through the Lync topology builder, but suffice it to say, there are many SQL pieces that get configured and more than a few moving parts. Following the completion of the Lync back end install, many companies choose to go back and perform some cleanup duties on the SQL servers. Depending on the company and the DBA, there could be lots of customization, or, as I've encountered with most deployments, the DBAs usually just want to change the owner of the databases and mirroring endpoint that got created. The assumption by many (myself included) was that this would be a completely seamless and benign change, but I learned during a recent deployment that it can result in SQL issues that 1) cause the mirror to stop functioning and 2) put both SQL servers in a state where databases could not be accessed on either server. Initially I thought this was something specific to the customer's SQL configuration, but I was able to confirm this exact behavior in my lab, so I knew that the issue could potentially be more widespread than I first thought. It took more than a few hours to figure out the root cause, but thankfully the fix is quick and easy. Read on for the exciting conclusion!

Note: While I expect this to be a "your-mileage-may-vary" type of issue, it did give me pause that I could re-create it in my lab, which is very simple and has no complexity. Some may experience this issue and others may not. If you do, the info below should save you some heartache.

Understanding SQL Mirroring

For those less familiar with the inner workings of SQL mirroring, Microsoft has some great information on TechNet that will help you better understand what we’re about to discuss. I highly suggest you read the TechNet documentation first before continuing on.

SQL mirror functionality relies on mirroring endpoints, configured within each SQL instance, to allow inter-server communication. There are two easy ways to view the mirror endpoints:

Option 1 – In SQL Management Studio, go to Server Objects>Endpoints>Database Mirroring

[Screenshot: the Database Mirroring endpoint under Server Objects in Management Studio]

Option 2 – In SQL Management Studio, run this query:

select * from sys.database_mirroring_endpoints

[Screenshot: results of querying sys.database_mirroring_endpoints]

In order to use the mirroring endpoint, logins must exist on each SQL Server instance for the accounts that the SQL services on the partner servers run under. Just as a domain user account must be added to the logins to connect to a SQL server, so too must the account(s) that the SQL Server instances are running under. The principal server will have logins for the mirror and witness, the mirror server will have logins for the principal and witness, and the witness server will have logins for the principal and mirror. In my lab the SQL instances run under the NETWORK SERVICE account, so the domain computer accounts are logins within each SQL instance. If your environment uses dedicated domain user accounts to run SQL, then you would see those domain user accounts as logins within each SQL instance. For example, in my lab the principal server's logins include the computer accounts of the mirror and witness servers.
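
A handful of catalog-view queries make these moving parts visible, and if the endpoint ownership change left a service account without its CONNECT permission, re-granting it is a one-liner. This is only a sketch; the endpoint name and domain accounts below are hypothetical and should be replaced with your own.

-- Show the mirroring endpoint and its current owner
SELECT e.name, e.state_desc, sp.name AS endpoint_owner
FROM sys.database_mirroring_endpoints AS e
JOIN sys.server_principals AS sp ON e.principal_id = sp.principal_id;

-- Show which logins hold permissions on server endpoints
SELECT pr.name AS grantee, pe.permission_name, pe.state_desc
FROM sys.server_permissions AS pe
JOIN sys.server_principals AS pr ON pe.grantee_principal_id = pr.principal_id
WHERE pe.class_desc = 'ENDPOINT';

-- Re-grant CONNECT to a partner or witness service account if it has gone missing
GRANT CONNECT ON ENDPOINT::[Mirroring] TO [CONTOSO\SQL02$];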

Read the rest of this post »

Webinar Recap & Replay: Big Data and the Intelligent Enterprise

On Wednesday, we hosted a webinar on Big Data: Using Microsoft Enterprise Information Solutions to Make Smarter Business Decisions. During the session, Duane Schafer, Microsoft BI Practice Director at Perficient, began by defining Big Data, then moved on to analyzing Big Data with the Microsoft platform (including ways to manage structured vs. unstructured data, such as a parallel data warehouse or Hadoop) and visualizing Big Data with Excel, and wrapped up with a discussion of the future of Big Data.

When defining Big Data, he kicked off with some interesting information. For example, do you know the original three Vs of Big Data? Volume, velocity and variety. What about the other Vs, like veracity, validity and volatility? If you want to know how this all relates to dinosaurs, you can check out the presentation below, or watch the webinar recording at your leisure.

Wondering about audio/video in terms of Big Data? Duane shared a recent article on why video is the next big thing in Big Data, which talks about how some retailers are already putting this into use – and it goes beyond tracking customers’ activities. While discussing ways to visualize Big Data, he showed interesting video clips, including a Power Map creation video and a Power Map healthcare video. Those are available for viewing in the on-demand webinar.

Duane also shared some remarkable uses of Big Data that show us what to expect next. One such use comes from General Motors, where GM is now putting RFID tags in bolts, turning what most of us think of as very simple hardware into tracking devices that ensure everything is assembled properly.

New Azure VMs improve SQL Server Data Warehousing in the cloud

While poking around in Azure, looking to set up a BI demo VM, I noticed that Microsoft had added a few SQL Server-oriented images to their catalog: VMs labeled "SQL Server… for Data Warehousing"!

There was one for SQL Server 2012 (SQL Server 2012 SP1 for Data Warehousing on WS 2012) and one for the current CTP2 version of SQL Server 2014  (SQL Server 2014 CTP2 Evaluation for Data Warehousing on WS 2012)!

My curiosity piqued, I ran (figuratively) to the Bings, and sure enough!  There it was:  confirmation, including some nice guidelines on configuration of VMs for DW purposes.

My favorite factoids:

  • Use an A6 VM for the SQL 2012 image, an A7 for 2014.   (This was well-timed for me because I was about to put the 2014 on an A6…)
  • Use page compression for Data Warehouses up to 400GB
  • Use one file per filegroup for best throughput (this prevents multilevel disk striping), and for Data Warehouses under 1GB you should need only one filegroup
  • However, you can look at using multiple filegroups to store staging data separately from production, to separate low-latency data from high-latency data, to run parallel data loads, and more (a short T-SQL sketch follows this list).
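
As a rough illustration of the filegroup and compression guidance above, the T-SQL below adds a single-file staging filegroup and page-compresses a large fact table. It is only a sketch; the database, file path, and table names are hypothetical.

-- Add a staging filegroup backed by exactly one file
ALTER DATABASE SalesDW ADD FILEGROUP Staging;
ALTER DATABASE SalesDW
ADD FILE (NAME = SalesDW_Staging1,
          FILENAME = 'F:\Data\SalesDW_Staging1.ndf',
          SIZE = 50GB)
TO FILEGROUP Staging;

-- Apply page compression to a large fact table
ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);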

Just be aware that the SQL Server 2014 Azure image will expire at the beginning of August, as that product moves along the path to RTM.


Steps to fully realize the promise of DX

CIO and CMO convergence has been mentioned by many esteemed thought leaders as the key driver to digital marketing (DX) success.  At Perficient, we very much agree with that statement.  See below for a few great examples:

http://blogs.perficient.com/microsoft/2013/10/the-blurring-line-between-cio-and-cmo/

http://blogs.perficient.com/portals/2013/12/09/cmo-and-cio-need-to-work-together/

We also believe that the age-old adage, "you can lead a horse to water, but you can't make it drink," still rings true.

CIOs and CMOs may hear and believe the guidance; their teams will read the articles and internalize the value; incentives can be defined that govern behavior, but without a common thread to establish communication and define clear patterns for success, there is very little chance that this will actually happen.

Sitecore to the rescue!  Following these steps will help Marketing and IT leaders plan and execute on their DX initiatives.  Each item below deserves its own discussion; however, it is important to understand the entire scope at a cursory level before taking the first step.

Identify the opportunity.  DX isn't for everyone.  If your firm engages in a competitive landscape, understands the different types of market participants, and has a store of customer data and product offering content, DX should be investigated as low-hanging fruit that can be leveraged to accelerate online revenue.

Establish a stakeholder governance team.  As with any significant opportunity to deliver marginal benefit, executive-level leadership from multiple areas within the firm is necessary to provide alignment to corporate goals and remove barriers to success.

Create a DX Center of Excellence.  The DX team should be built from existing senior technology and marketing staffers that understand organizational structure, technical adoption, market position and core product value proposition. The team should be led by a digital strategist from within or outside the organization with deep experience in this space and proven results delivered with similar opportunities.

Craft common understanding.  This is the key step.  If this step isn’t executed well, all other steps are inconsequential.  Because the DX COE and Governance teams are staffed with multi-discipline individuals with a broad range of backgrounds, perceptions and management, it is important that each team member is able to find value in the solution and own project results.  This can’t be achieved with words or directives.  It can only be realized when each team member sees value in the solution through their own lens.   Sitecore is the common thread that allows a heterogeneous group to find a common purpose and rally around a core set of business goals.  A few examples follow:

  • Marketing:  Sitecore enables corporate, regional and local groups to own content messaging, personalization and marketing automation without the need to engage with IT.
  • IT:  Sitecore frees IT to focus on architecture, standards and data governance to ensure solution stability and scalability.
  • Management:  Sitecore provides cost effective and easy to understand reporting capabilities that allow management to quickly visualize progress and goal achievement.

Set expectations.  Creating a digital strategy based on best practices that sets realistic expectations aligned to corporate goals will produce the blueprint that drives all further steps. DX is not accounting.  Presuming 1 + 2 equals 3 is a realistic accounting expectation; digital marketing rarely results in the same linear progression.  Therefore, aligning the strategy to iterative advancement is an important step in determining results and managing expectations.

Deliver incremental results.  Trying to boil the ocean is a common pitfall.  Because there are so many capabilities included in Sitecore DMS, prioritization is sometimes a challenge.  Starting with explicit personalization is often the most pragmatic path to an early win.  Using the Perficient Sitecore framework to safely expose client data to marketers is a high-reward, low-risk solution that can be used to build trust and deliver quick results.  Adding A/B and multivariate testing and reporting capabilities to provide actionable feedback is a wise second step.  The Perficient framework provides logical next steps that are validated and prioritized based on business goals and digital strategy.

Power BI for Office 365: Self-Service BI Webinar Recap

Yesterday, we hosted a webinar on Power BI for Office 365: Using SharePoint Online to Deliver Self-Service Business Intelligence. During the well-attended session, Andy Tegethoff, Microsoft BI architect at Perficient, first reviewed what Power BI is all about, who is likely to be a user, and its components. He then dug deeper into Excel self-service BI, explaining and walking through Power Query, Power Pivot, Power View and Power Map.

Next, Andy went into Power BI with Office 365, which includes Power BI Sites (an app in SharePoint Online), Data Management, and Power BI for Mobile (currently available in the Windows Store for Windows 8 and 8.1 devices). He also gave a detailed walkthrough here, illustrating the features. Lastly, he discussed IT management for Power BI.

In closing, Andy provided several helpful links for getting started with Power BI.

The attendees asked many great questions, and Andy had time to answer a large portion of them. You can view the webinar replay here, including the Q&A. Read the rest of this post »

Leveraging In-Line User Defined Functions in SQL Server

User-Defined Functions (UDFs) are great, and have been part of SQL Server for a long time now. They come in two primary flavors: scalar and table-valued. The first returns a single value, whereas the second returns an entire result set. It's not uncommon to want to reuse a block of SQL. Views are nice, but sometimes you want to be able to pass in parameters, which isn't allowed with views.

UDFs do allow parameters, but in my experience they aren't very efficient, especially if you are using the CROSS APPLY operator. When this happens, SQL usually must execute the function for each row in the result set. Even if your UDF is pretty lightweight, this can cause a lot of extra overhead for the server. This is because a UDF can be a multi-statement function, and therefore SQL has to deal with it on a row-by-row basis, instead of working with an entire set of data at once (which is what SQL is optimized to do).
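
In-line (single-statement) table-valued functions avoid much of that penalty because the optimizer can expand them into the calling query much like a parameterized view. Here is a minimal sketch with hypothetical table and column names:

-- An in-line TVF: a single RETURN of a SELECT, so it can be expanded into the plan
CREATE FUNCTION dbo.fn_GetRecentOrders (@CustomerID int, @Since date)
RETURNS TABLE
AS
RETURN
(
    SELECT o.OrderID, o.OrderDate, o.TotalDue
    FROM dbo.Orders AS o
    WHERE o.CustomerID = @CustomerID
      AND o.OrderDate >= @Since
);
GO

-- With CROSS APPLY, the function is folded into the query plan rather than
-- being evaluated procedurally for every row, as a multi-statement UDF would be
SELECT c.CustomerName, r.OrderID, r.TotalDue
FROM dbo.Customers AS c
CROSS APPLY dbo.fn_GetRecentOrders(c.CustomerID, '2014-01-01') AS r;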

Read the rest of this post »
