
ProHealth Care’s BI Program, Data Governance & BICC: Part II

This is the second post in a two part series on how Perficient helped to support ProHealth Care in operationalizing their BI program, data governance, and the Business Intelligence Competency Center. In the first post, I talked about the workstreams and the roadmap. Below, I’ll cover the members of the data governance steering committee as well as the initiation of data governance and data governance priorities.

The anatomy of a data governance committee

One of the first things I’m often asked when I’m engaged in delivering a data governance roadmap is, “Who should be on the data governance steering committee?” Because of that, I thought it would be interesting to share some highlights from ProHealth Care’s governance oversight team.

We have senior executive sponsorship of the data governance program, and I can’t emphasize enough how critically important that is. In this case, the Chief Innovation Officer is our executive sponsor.

The committee chairs come from both IT and Performance Excellence, which gives us good representation across the IT and business domains. Our standing members provide a good cross-section of representation from Finance, HR, Operations, and the clinical domains (some of whom are data owners, some are data stewards, and others are key stakeholders and decision makers).

We also have representatives from the BICC that sit on the data governance committee, and this is to provide a bridge between data governance and the tactical operational execution of the decisions the data governance committee makes.

We also bring in ad hoc members, data owners, and data stewards, and stand up work groups based upon the specific initiatives we’re trying to address at the time.

Data governance initiation and priorities

So what did we do to initiate data governance? Well, as I mentioned earlier, we actually started with the fundamentals, and that’s the charter and scope and the guiding principles on which the data governance committee would make decisions.

We defined scheduled activities and put some definition around voting and decision rights. This particular committee does not only serve in a data governance role; it also serves as a steering committee that helps inform data strategy and makes decisions about the prioritization of BI projects. This means they get involved in prioritizing project requests. They are also principally involved in addressing data quality and information assurance concerns, and in approving business metadata. The business terms and definitions are defined and stored within a business glossary so that the information may be used and referenced universally across the enterprise, facilitating the appropriate use and interpretation of data.

We started with policy development because it’s necessary to encourage the desired organizational behavior with respect to information security and data classification, as well as data quality and life cycle management, so those policies were foundational to this initiative. There’s also a lot of focus on change management, stakeholder engagement, and communication: because this is such an enterprise-wide initiative, we had to make sure we have appropriate change management around the activities. We also had to make certain that we’re engaging our stakeholders by showing them, and communicating to them, what they can expect, particularly as it relates to the Microsoft BI stack and the capabilities within that platform, including the potential for self-service business intelligence. Read the rest of this post »

Meet Yammer, Your Answer to Project Collaboration!

Yammer has a full range of features to help you communicate openly, expedite decision making, open new collaboration channels, and break down email silos. Let’s start by looking at our current ways of communicating with our team. A typical project is slated to begin and end with a vision and goal. In order to achieve these, it’s essential to have transparent and effective communication. Throughout the project lifecycle, we engage numerous communication channels, whether phone calls, emails, video calls, messenger chats, etc. We are so engaged in making the project a success that we end up overlooking the numerous hours spent communicating with the internal team or external customers. This is where Yammer steps in. The idea is not to replace each and every channel but to reduce the time spent and make communication more effective so you can reach maximum throughput.

The three main reasons to consider using Yammer for internal and external collaboration are ease of use, the mobile app, and collaboration with external users. Yammer can move your team beyond the hierarchical and glacial-paced decision making that can hobble a project’s progress. You can set up a private Yammer group where your team can conduct online conversations around important project elements; this allows each team member to be part of the decision-making process. To keep things in perspective, I will share a use case from one of my recent customer engagements. Delivery success is measured by how well the deliverables and activities match the agreed-upon vision and goal objectives. One of the first sessions in these engagements is the project kick-off. This meeting involves all the stakeholders of the project, establishes a sense of common goals, and allows us to start understanding each individual. This is where all communication channels are discussed and confirmed, and ultimately where Yammer can be introduced.

Today, I’ll share my firsthand experience of using Yammer as a project collaboration platform and showcase its value with a real world use case.

One of the biggest frustrations I face at the start of every project is the ton of emails exchanged, often with attachments and their different versions, that end up choking my inbox. This is where Yammer comes to the rescue. Follow these three basic steps and you will never go back to traditional ways of project management.

  1. Create an Internal Yammer Group
  2. Create an External Network
  3. Invite Members and Start Sharing

 

1. Your Internal Group

This will enable daily communication within our team. Drafts of documents, questions, clarifications: everything can be posted in the internal group.

  • Tagging People – Helps notify the right individuals and keeps the noise out of everyone else’s inbox. All our posts were targeted to the group and at least one team member. This generates a notification for the tagged individual.
  • Tagging Content – Helps to find information when needed most. You’ve got to love the subscription model, and this is where it is most powerful. Subscribe to any topic and you are then fed all conversations around that topic on your home screen.
  • Ask a Question – Every project has issues and gaps, and Yammer is your best bet to get those straightened out quickly. We made sure any question that involved more than two individuals was posted in the internal group. You will be amazed at how quick and effective this approach can be.
  • Upload Deliverables for Review – I have yet to meet someone who enjoys receiving multiple versions of documents (and sometimes huge slide decks) in their inbox followed by performing a clean-up activity. We used Yammer to share all project related documents which helped us unclog our inbox and tag the content with topics and people for appropriate notification.

Now, when you are ready with your deliverables, move them over to the external group for sharing. This keeps separation between internal team and customer communications.

2. Create an External Network

Creating an external network will allow you to have a dedicated collaboration space with the customer.

  • Allows Yammer groups to collaborate on individual project and social needs.
  • Advantage of transparency and a quick communication channel.

 

 

 


When you have an external network set up, go ahead and create a project group. This will enable you to focus all project-related conversations inside a group. Add all team members to this group and mark it as “Public” or “Private” based on your needs.

 

3. Don’t forget to add team members and post your first message

Remember, there might be a few folks on your team who are not familiar or comfortable with the concept of using Yammer for this purpose. Sharing documents and deliverables and posting questions will all act as ice breakers. Start with some water cooler talk if nothing else (keep it relevant to your team or project, though). Upload files directly to Yammer for sharing across the group. You can upload new versions of documents and let Yammer maintain control over previous versions.

Suggestions:

  • Mark your uploaded content as “official and read only” if you are working on projects in which documents are changed often. The “official and read only” designation is also an effective way to get team members past sticking to their own versions of project documents.
  • Equip your team members with one of Yammer’s mobile apps and they will have an always-on channel to team discussions and files. Social collaboration does take a little extra convincing and showcasing, but once you get people on board it’s a breeze. Reducing those chunky emails, not having to clear your inbox every now and then, quick responses, the level of engagement, and the ability to search topics and documents make it a sure winner.
  • Use groups to receive feedback and approval on project deliverables by including your stakeholders/sponsors in the “cc” while sharing the posts.

* If you are concerned about compliance and security when uploading documents, there is no need to worry; you can still use Yammer effectively. In circumstances like those, utilize SharePoint as the document repository and Yammer as the front end for all communications: post links to SharePoint document libraries and start a conversation. Even better, if you are on Office 365, all the group conversations are now integrated with the documents and sites.

Here at Perficient we have utilized Yammer in various scenarios. Along with our certified customer success managers and admins, we continue to help our customers adopt and roll out successful social networks. Please add your feedback and share your experience here if you have used this approach.

Visualization options with Microsoft

I’ve been speaking to a lot of clients lately about the visualization capabilities of the Microsoft BI platform and want to clarify a point of confusion. When building an enterprise analytics platform you will be faced with several decisions around architecture as well as delivery. The architectural options will be vetted by your IT department, but in large part they will be driven by how you want to deliver and consume information in your organization. Typically there will be a balance between ‘traditional’ BI delivery and ‘self-service’ BI delivery.

What’s the difference? Traditional BI delivery comes in the form of reports and dashboards that are built by your IT department with tools such as SSRS or PerformancePoint. Both are solid tools with a lot of functionality. In contrast, most organizations are looking for ways to reduce their dependency on IT-built reports and therefore need a technology that enables their business users to be self-sufficient. This comes in the form of Excel with PowerPivot and PowerView.

A complete explanation of these new tools can be found here.

Feel free to contact us on how these tools can be used in your enterprise to deliver pervasive insights!

The Premise of On Premises

As a technical architect I am used to the rapid evolution of language to describe an accelerating technical world. Only a couple of years ago using the word Cloud would most likely conjure images of the cumulonimbus variety. Today I rarely join a conference call where The Cloud is not mentioned and we can be confident in technical circles that everybody understands the term.

The Cloud

According to Wikipedia, references to The Cloud began as early as 1996 when Compaq used the term in an internal document. Much later Amazon began to use it as part of their Elastic Compute Cloud terminology. We now use the term to describe great new services like Azure and Office 365.

I like The Cloud and feel it is a very fitting term for describing the way we now host services. My compliments to whomever actually first coined the term! It makes a lot of sense.

Now that we have The Cloud we have the premise of On Premises and need a term to clearly refer to services hosted on site (as opposed to in The Cloud).

So, do you say On Premises or On Premise?

On Premises

Premises

“A house or building, together with its land and outbuildings, occupied by a business or considered in an official context.”

http://www.oxforddictionaries.com/definition/english/premises

Premise

“(British also premiss) Logic A previous statement or proposition from which another is inferred or follows as a conclusion.”

http://www.oxforddictionaries.com/definition/english/premise

Was it a Mistake?

It seems clear to me that On Premises is correct whereas On Premise is derived from a mistake made and copied many thousands (millions?) of times.

I find the discussion interesting because I think it highlights the rapid adoption of terminology, correct or otherwise. As technical professionals I think we should always strive to communicate better. Describing technology more accurately, clearly and concisely is important and will help us serve the needs of decision makers and users better. I think we should always question the terminology we use and improve upon it whenever possible.

Why Agile is the only methodology for SharePoint Online (O365)

I was recently preparing a presentation for a Chicago SharePoint Saturday. As I built out my slides explaining some O365 DevOps best practice it struck me that an Agile methodology could be the only viable methodology to deliver and maintain SharePoint Online projects. Here’s why…

At Perficient we have embraced SCRUM for many SharePoint projects and it has proven to be very successful. I took the SCRUM Master Course and certification to solidify my understanding of SCRUM. I recall the tutor saying that the largest part of adopting Agile is to think in an agile way. Quite simply I have modified the way I think about projects and I think this has helped me lead projects in the cloud.

By contrast, I began to think about how hard it would be to deliver SharePoint Online projects using a more traditional waterfall methodology. When you consider the ‘Evergreen’ service and how quickly we are seeing new features appear, it’s a paradigm shift in my field of work as a SharePoint Architect.

I have made it part of my weekly routine to check the Office 365 public roadmap to assess features being rolled out as well as those on the horizon. This helps me understand, from a feature perspective, what I need to keep a close eye on in coming weeks.

O365 Public Roadmap

In conjunction, I also ensure that our development and QA tenants are signed up for ‘First Release’ (under O365 Service Settings). This enables me to see features being rolled out at least two weeks prior to general availability and before the change hits our production tenants. This gives first sight of potential issues and helps identify new feature opportunities.

O365 First Release

Whether it’s the desire to work with a new feature or the need to respond to a change you’ll have a minimum of two weeks to respond. There is no longer the option to hold off a service pack or ‘hang five’ on that security update as we may have done on-premises.

How would your project handle the need to change, test and deploy within a two week period? Most likely, if you are following a traditional waterfall approach, this will be very difficult. If the service changes during a Build phase, how would you change direction and redesign? If you are a consultant, how would this affect scope and budget? What about your release cycle? Is it frequent enough to keep pace?

Our SharePoint Online SCRUM projects typically run on a 1-2 week Sprint cycle. We usually start out with a 2-week cycle but then accelerate to a 1-week cycle during a stabilization phase, when we do less new development and enter early support and maintenance. This enables us to achieve 1-2 releases during this critical window and keep pace with the service.

Is your methodology agile enough to keep pace in the cloud?

PowerShell Deployment to SharePoint Online

In my last blog post about DevOps for SharePoint Online, the process I presented relied heavily upon scripted deployment to SharePoint Online (O365). I wanted to expand upon that and explain in a little more detail how Perficient is using PowerShell to manage our deployments for our Development, QA and Production environments.

Automating any repeated task can be a productivity benefit, provided the time invested in developing the automation is less than the time spent repeating the task itself. Automation also significantly reduces the chance of ‘human error’.

Automating deployments is of little benefit to light users of SharePoint who do minimal customization of SharePoint in a single O365 tenant. However, as you begin to customize more and introduce the need for testing cycles then automation starts to become valuable. When you add multiple tenants into your DevOps and add multiple developers or administrators then automated deployment can really pay huge dividends.

I think it is fair to say we are in a period of emerging standards for deployment of customizations to SharePoint Online. When we worked on-premises with SharePoint, the WSP provided great deployment options, especially when you consider Feature stapling. This is basically off the table with O365, and we’re looking for new best practices.

I think that the combination of PowerShell and the SharePoint Server 2013 Client Components SDK is a strong candidate for best practice automation of deployment to SharePoint Online. PowerShell gives us the lightweight scripting we need in order to move rapidly through automated builds and deployments. The Client Components SDK gives us the full Client Object Model on the administrator’s desktop allowing them to execute on a huge variety of scripted tasks. Here are a couple of useful resources on this topic, one from my colleague Roydon Gyles-Bedford whom I credit with a lot of Perficient’s thought leadership in this area:

https://github.com/rgylesbedford
http://soerennielsen.wordpress.com/2013/08/25/use-csom-from-powershell
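
For context, here is a minimal sketch of connecting to SharePoint Online from PowerShell via the Client Object Model. It assumes the Client Components SDK assemblies are in their default install location, and the site URL and account shown are placeholders for your own tenant details:

    # Load the Client Object Model assemblies from the SharePoint Server 2013 Client Components SDK
    Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"
    Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"

    # Placeholder site URL and credentials; substitute your own tenant details
    $siteUrl  = "https://contoso.sharepoint.com/sites/dev"
    $userName = "admin@contoso.onmicrosoft.com"
    $password = Read-Host "Password" -AsSecureString

    # Create the client context and authenticate with SharePoint Online credentials
    $ctx = New-Object Microsoft.SharePoint.Client.ClientContext($siteUrl)
    $ctx.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($userName, $password)

    # Load and output the web title to confirm connectivity
    $web = $ctx.Web
    $ctx.Load($web)
    $ctx.ExecuteQuery()
    Write-Host "Connected to:" $web.Title

Everything we script against SharePoint Online starts from an authenticated ClientContext like this one.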

At Perficient we have invested in PowerShell Modules which use XML configuration to drive deployment of items such as:

  • Master Pages
  • Page Layouts
  • Content Types
  • Display Templates
  • Term Store Terms

The XML configuration files are pseudo-CAML (Collaborative Application Markup Language!) wrapped in our own markup to help the Modules know what to do with it. The nice thing about CAML is that it is already defined and baked into SharePoint. We will often use the Client Browser Tool http://spcb.codeplex.com to browse existing artifacts like Content Types to understand how to define Content Types from scratch. For example:

(Screenshot: a Content Type as viewed in the Client Browser Tool)
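
The modules ultimately translate that configuration into Client Object Model calls. As a rough, simplified sketch of the kind of provisioning involved (the content type and group names are hypothetical, and $ctx is an authenticated ClientContext as in the connection sketch above):

    # Rough sketch: provision a content type via the Client Object Model.
    # $ctx is an authenticated ClientContext; names below are hypothetical.
    $web = $ctx.Web
    $contentTypes = $web.ContentTypes
    $ctx.Load($contentTypes)
    $ctx.ExecuteQuery()

    # Describe the new content type, inheriting from the built-in Document type
    $ctInfo = New-Object Microsoft.SharePoint.Client.ContentTypeCreationInformation
    $ctInfo.Name = "Project Document"
    $ctInfo.Group = "Contoso Content Types"
    $ctInfo.ParentContentType = $contentTypes | Where-Object { $_.Name -eq "Document" }

    # Add it and push the change to SharePoint Online
    $newCt = $contentTypes.Add($ctInfo)
    $ctx.Load($newCt)
    $ctx.ExecuteQuery()
    Write-Host "Created content type:" $newCt.Name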

Aside from configuration defined in XML, we also drive configuration through PowerShell modules using the Client Object Model directly. Here is an example function for adding a Web:

(Screenshot: the AddWebFunction example)
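
The original screenshot isn’t reproduced here, but a minimal version of such a function might look like the following sketch (the parameter names and template default are illustrative, not the exact function from our modules):

    # Minimal sketch of a function that adds a sub web via the Client Object Model.
    # $Context is an authenticated ClientContext; defaults are illustrative only.
    function Add-Web {
        param(
            [Microsoft.SharePoint.Client.ClientContext]$Context,
            [string]$Url,                  # leaf URL of the new web, e.g. "projects"
            [string]$Title,
            [string]$Template = "STS#0"    # team site template
        )

        # Describe the web to be created beneath the context web
        $webInfo = New-Object Microsoft.SharePoint.Client.WebCreationInformation
        $webInfo.Url = $Url
        $webInfo.Title = $Title
        $webInfo.WebTemplate = $Template
        $webInfo.UseSamePermissionsAsParentSite = $true

        # Create the web and push the change to SharePoint Online
        $newWeb = $Context.Web.Webs.Add($webInfo)
        $Context.Load($newWeb)
        $Context.ExecuteQuery()
        return $newWeb
    }

A call would then look something like: Add-Web -Context $ctx -Url "projects" -Title "Projects".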

At this point in time the Client Object Model does lack functionality when compared to its server-side counterpart. However, this is improving all the time with new methods being added in every release.

In some cases it is possible to inspect the server-side object model using a tool like IL Spy http://ilspy.net and find (unsupported) ways to get the job done. For example, we found a way to add links to the Search Center Navigation via this technique. I must stress that using an unsupported method should be for convenience only, and you should have a backup plan should it fail. We normally write this backup plan into our deployment documentation; it’s usually just a manual way to achieve the same thing, albeit more slowly.

I am now also seeing lots of discussion and examples around HTTP Remote operations to help fill the gaps in the Client Object Model. This is of course also unsupported but can be effective as a convenience and time-saver. We’ve used this effectively to map Search Crawled Properties to the Refinable Managed Properties in SharePoint Online. This is not supported by the Client Object Model and can take a huge amount of time so is ripe for automating. Here is a snippet showing how we call a function to update RefinableString00 with Crawled Properties:

(Screenshot: updating refinable managed properties)
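
The original screenshot isn’t reproduced here, but the shape of the call was roughly as follows. The function name, parameters, and crawled property names below are hypothetical placeholders for the internal helper in our modules:

    # Hypothetical helper that drives the SharePoint Online search schema pages
    # over HTTP to map crawled properties to a refinable managed property.
    # This is an unsupported technique; function and property names are placeholders.
    Update-RefinableManagedProperty -Context $ctx `
        -ManagedProperty "RefinableString00" `
        -CrawledProperties @("ows_ProjectCode", "ows_Region") `
        -Alias "ProjectCode"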

In conclusion, automation using scripted deployment can be an extremely versatile and effective way to support your DevOps for SharePoint Online. At Perficient, SCRUM has proven to be a very effective methodology for SharePoint Online projects. Typically we are making the scripted deployment of any new feature part of the ‘Done Criteria’ for any development work. Scripting the deployment then very much becomes part of feature development and will be effectively tested in development environments before progressing to QA and Production.

Could Yammer Supplant Your Intranet?

We see a lot of scenarios where clients are moving their intranets successfully to the Office 365 cloud with SharePoint Online. This is the easiest, smoothest path to a social intranet on the Microsoft platform, due largely to the ever-closer relationship between Yammer and the rest of the services in Office 365.

That said, there are still plenty of enterprises out there who prefer either to keep their intranet on-premises or not to upgrade / migrate just yet. Many of those organizations would still like to get their bang for the buck with Yammer, however, and need to figure out a solution for integrating those social features into their on-premises solution.

By far the most common way to accomplish this right now is through the use of the Yammer Embed functionality (or specifically for SharePoint, the Yammer app for SharePoint) to embed specific news feeds on specific sites.  This is easily the most obvious way to “socialize” an on-premises SharePoint intranet with Yammer.

That works, sure. But it’s not all that elegant. Also, if you’re using the Yammer app for SharePoint, this approach forces you to go in and update every Yammer feed when they update the app (which is a pain).

A more forward-thinking, less common but emerging approach to a social intranet is to actually use Yammer as the intranet home.

This is an example of truly embracing enterprise social and may require a complete rethink from a lot of organizations as to how they approach an intranet, but it’s the direction things seem to be going.  You make the social network your home, and instead of augmenting informational sites with social feeds, you augment social groups with links to informational sites using Pins and the Info window’s rich text / HTML editor feature.

Think about it.  Here at Perficient, we’re in the midst of rolling out a new platform for time tracking, financials, and other fun line-of-business activity and reporting.  We have both a Yammer group stood up to support that rollout, and a more traditional SharePoint intranet site.

What we’ve found in this scenario is that the Yammer feed has actually supplanted the informational site because it’s a much faster and more responsive way for people to get answers and collaborate.  Links embedded in the Yammer page direct users back to SharePoint for the informational / non-collaborative content they need, but the social discussion and interaction is now the focus.

Of course, Yammer in general resists (i.e., doesn’t allow) any but the most basic customization.  Fonts, styles, navigation etc., are all locked in “as is”.  The only thing you can really change in Yammer is the header atop your page.  That means we lose some control over branding, but gain quite a bit in interaction and employee engagement.  For this use case, it’s a smashing success.

The question then becomes, “Can this approach work for an entire intranet, and not just one use case?”

To some extent, that depends on the users.  At the end of the day, it all depends on where they go when they log on in the morning.  Email?  The intranet?  Or their social network?  Get the ball rolling with enterprise social and people will start skipping over the intranet; it’s almost a given.  Use social to surface intranet content and the line starts to blur… which is a lot closer to where things are going in the cloud than it is to a hodgepodge of on-prem intranet sites with embedded social feeds.

ProHealth Care’s BI Program, Data Governance & BICC: Part I

This is Part I in a two part series on how Perficient helped to support ProHealth Care in operationalizing their BI program, data governance, and the Business Intelligence Competency Center. Here, I’ll focus on the workstreams and the road map. In Part II, I’ll cover the members of the data governance steering committee as well as the initiation of data governance and data governance priorities.

I’d first like to share the approach ProHealth Care and Perficient took to operationalize ProHealth Care’s BI program, initiate some of the data governance activities, and help to operationalize the Business Intelligence Competency Center (BICC).

As you can see below, we applied Perficient’s Enterprise Information Management framework to focus our activities in developing the road map for ProHealth’s BI Program. We were principally concerned with four discrete workstreams to stand up the program, in addition to the core work that’s been undertaken to actually deliver the population health analytics to support the Accountable Care Organization (ACO). Read the rest of this post »

Office 365 Information Recovery

One of the many advantages of using Office 365 is freedom from a myriad of worries at the data center (watch this video for an interesting glimpse at Microsoft data centers), server, and application levels.

One of the most important concerns for any enterprise is that of “business continuity” – making sure a service is both available and, in the event of trouble, restorable with minimum information loss. Two metrics are traditionally used to help define business continuity goals: Recovery Point Objective (“How much data can I afford to lose?”) and Recovery Time Objective (“How long can I wait for the service to be available?”). While Microsoft publishes “financially backed SLAs for up-time” (see the Office 365 Trust Center), it does not provide specific RPO and RTO guarantees. RPO and RTO are, however, published for related services (see the SharePoint Online Dedicated and Exchange Online Dedicated service descriptions).

For most organizations, these ranges of RPOs and RTOs are likely to be acceptable. If they are NOT, the organization will need to design processes to meet the more stringent objectives. It is important to keep in mind that scenarios other than outright Office 365 failure may result in information loss (e.g. accidental document deletion). Some of these scenarios are well supported by the application (e.g. SharePoint recycle bin, versioning, etc.), but others are not (historical file deletions, file corruption, version overwrite, etc.)

For on-premises deployments, a number of 3rd party vendors have developed tools to support a wide variety of information loss and recovery scenarios. For Office 365, tools and technologies are beginning to appear — some of these tools are extensions of on-premises technology while others are cloud-only implementations. Here are a few available options:

When looking at these products and services, consider your specific use cases as well as the following:

  • Full platform support – Does the product/service support Exchange, OneDrive for Business, Lync, Yammer, AND SharePoint?
  • Integrated tool suite – some of these tools support other Office 365 needs (e.g. governance, data migration). For a larger and/or more complex implementation, a suite will likely prove more valuable than a singular solution
  • Archiving  – does the tool provide support for removal of data meeting certain requirements, or only recovery of missing/corrupt information?
  • On-Premises AND Office 365 – if your organization is transitioning from an on-premises implementation, a solution that seamlessly supports both platforms is ideal
  • Backup Location – some of these solutions use cloud storage exclusively while others provide a variety of targets
  • Target User – some of the solutions are targeted toward business users, while most are for IT professionals

As always, a well formulated set of requirements based upon business needs will make the decision making process easier. Technology in this area is rapidly changing, so always check for new developments.

Why Does Data Warehousing Take So Long?

A common complaint about data warehousing/BI has been time to market. The investment in real months required to stand up analytics is just too large. Descriptions of the actual time required vary (depending on who you ask, and what their interests are) from a year to 24 months. The numbers are open to debate, but let’s go ahead and stick with the conventional wisdom that Data Warehousing typically requires a significant timeline to see results. This assumption then raises two questions:

  1. Does it have to take that long?
  2. If it has to, will it be worthwhile?

Looking at the second question first, there’s a very simple answer: YES. Successful DW/BI projects can utterly revolutionize an organization’s processes and even their outlook. They can shine light on problems, point the way to new opportunities, and improve the daily work lives of employees at almost any level. I consider it a foregone conclusion that there is tremendous value in well-built DW/BI systems.

Of course, the caveat there is the whole “well-built” part. That’s where the delays creep in, and where the real timeline resides. In addition to the experience and expertise brought to the design and construction of these systems, the degree of involvement of the business also plays a very large role in how successful the solution will be. Too many gaps or failures on either side can result in less-than-satisfactory outcomes after a lot of time and effort has been spent.

So that leads us back to the first question above: does it have to take that long? I mean, upwards of 2 years to build a decent Business Intelligence solution? This answer is not nearly as easy because the factors that contribute to extending timelines in DW/BI projects are numerous and varied. For instance:

  • Are the builders of the system planning development closely with the consuming org? If not, extend the timeline.
  • Is the business committed to providing solid requirements on an ongoing basis? If not, extend the timeline.
  • Is the development team sufficiently experienced and under solid technical leadership? If not, extend the timeline.

You get the point. The development work in and of itself is not necessarily what takes a long time. What takes a long time is when business needs are misunderstood or disregarded, when expectations aren’t managed, when the chosen technology platform is not well-aligned to business requirements — basically, when either side doesn’t fully understand what they are getting into, and there is misalignment in that area.

In the next few posts, I’ll go over various tools and techniques currently in the market that offer some kind of acceleration of the data warehousing process, and see what paths are available to speed up time-to-analytics. I will include the tool class sometimes variously referred to as either “frameworks” or “accelerators”. I’ll talk about iterative development and the potential risks and benefits of using Agile methodologies. And I’ll discuss possible ways that planning in itself can help deliver results sooner rather than later.

Next time: Accelerators and Frameworks. Hope to see you then!