I was recently preparing a presentation for SharePoint Saturday Chicago. As I built out my slides explaining some Office 365 DevOps best practices, it struck me that an Agile methodology may be the only viable way to deliver and maintain SharePoint Online projects. Here’s why…
At Perficient we have embraced SCRUM for many SharePoint projects and it has proven to be very successful. I took the SCRUM Master course and certification to solidify my understanding of SCRUM. I recall the tutor saying that the largest part of adopting Agile is learning to think in an agile way. Quite simply, I have modified the way I think about projects, and this has helped a great deal as I lead more projects in the cloud.
By contrast, I began to think about how hard it would be to deliver SharePoint Online projects using a more traditional waterfall methodology. When you consider the ‘Evergreen’ service and how quickly we are seeing new features appear, it’s a paradigm shift in my field of work as a SharePoint Architect.
I have made it part of my weekly routine to check the Office 365 public roadmap to assess features being rolled out as well as those on the horizon. This helps me understand, from a feature perspective, what I need to keep a close eye on in coming weeks.
In conjunction, I also ensure that our development and QA tenants are signed up for ‘First Release’ (under O365 Service Settings). This lets me see features being rolled out at least two weeks before they reach general availability and hit our production tenants. This gives us first sight of potential issues as well as new feature opportunities.
Whether it’s the desire to work with a new feature or the need to respond to a change, you’ll have a minimum of two weeks to respond. There is no longer the option to hold off on a service pack or ‘hang five’ on a security update as we might have done on-premises.
How would your project handle the need to change, test and deploy within a two week period? Most likely, if you are following a traditional waterfall approach, this will be very difficult. If the service changes during a Build phase, how would you change direction and redesign? If you are a consultant, how would this affect scope and budget? What about your release cycle? Is it frequent enough to keep pace?
Our SharePoint Online SCRUM projects are typically running on a 1-2 week Sprint cycle. We usually start out with a 2 week cycle but then accelerate to a 1 week during a stabilization phase, when we do less new development and enter early support and maintenance. This enables us to achieve 1-2 releases during this critical window and keep pace with the service.
Is your methodology agile enough to keep pace in the cloud?
In my last blog post about DevOps for SharePoint Online, the process I presented relied heavily upon scripted deployment to SharePoint Online (O365). I wanted to expand on that and explain in a little more detail how Perficient is using PowerShell to manage deployments across our Development, QA and Production environments.
Automating any repeated task can be a productivity benefit, provided the time invested in developing the automation is less than the time spent repeating the task manually. Automation also significantly reduces the chance of ‘human error’.
Automating deployments is of little benefit to light users of SharePoint who do minimal customization in a single O365 tenant. However, as you begin to customize more and introduce the need for testing cycles, automation starts to become valuable. When you add multiple tenants and multiple developers or administrators into your DevOps process, automated deployment can really pay huge dividends.
I think it is fair to say we are in a period of emerging standards for deployment of customizations to SharePoint Online. When we worked on-premises with SharePoint, the WSP provided great deployment options, especially when you consider Feature stapling. This is basically off the table with O365, and we’re looking for new best practices.
I think that the combination of PowerShell and the SharePoint Server 2013 Client Components SDK is a strong candidate for best practice automation of deployment to SharePoint Online. PowerShell gives us the lightweight scripting we need in order to move rapidly through automated builds and deployments. The Client Components SDK gives us the full Client Object Model on the administrator’s desktop allowing them to execute on a huge variety of scripted tasks. Here are a couple of useful resources on this topic, one from my colleague Roydon Gyles-Bedford whom I credit with a lot of Perficient’s thought leadership in this area:
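As a minimal sketch of what this looks like in practice (the site URL is a placeholder, and the assembly paths assume a default Client Components SDK install), a PowerShell session can load the SDK assemblies and connect to a tenant like this:

```powershell
# Load the Client Components SDK assemblies (paths assume a default install).
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"

# Placeholder site URL; prompt for tenant credentials.
$siteUrl     = "https://contoso.sharepoint.com/sites/dev"
$credentials = Get-Credential

# Create a client context authenticated against SharePoint Online.
$context = New-Object Microsoft.SharePoint.Client.ClientContext($siteUrl)
$context.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials(
    $credentials.UserName, $credentials.Password)

# Round-trip to the server to confirm the connection.
$web = $context.Web
$context.Load($web)
$context.ExecuteQuery()
Write-Host "Connected to web:" $web.Title
```

From here, every scripted deployment task is just more Client Object Model calls against `$context`.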
At Perficient we have invested in PowerShell Modules which use XML configuration to drive deployment of items such as:
The XML configuration files are pseudo-CAML (Collaborative Application Markup Language) wrapped in our own markup to help the Modules know what to do with it. The nice thing about CAML is that it is already defined and baked into SharePoint. We will often use the SharePoint Client Browser tool (http://spcb.codeplex.com) to browse existing artifacts such as Content Types to understand how to define them from scratch. For example:
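A hypothetical example of this wrapper pattern (the outer `Deploy` and `ContentTypes` elements stand in for our own convention, and the IDs are placeholders; the inner elements follow standard CAML as you would see it in the Client Browser Tool):

```xml
<!-- Outer elements are illustrative wrapper markup; inner elements are CAML. -->
<Deploy>
  <ContentTypes>
    <!-- Content type ID and field GUID below are placeholders. -->
    <ContentType ID="0x0100A1B2C3D4E5F6A7B8C9D0E1F2A3B4C5"
                 Name="Project Document"
                 Group="Contoso Content Types">
      <FieldRefs>
        <FieldRef ID="{9f2b1c3d-0000-0000-0000-000000000001}" Name="ProjectName" />
      </FieldRefs>
    </ContentType>
  </ContentTypes>
</Deploy>
```

The deployment module reads the wrapper elements to decide which Client Object Model calls to make, and passes the CAML through to SharePoint.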
Aside from configuration defined in XML we also simply drive configuration through PowerShell modules using the Client Object Model directly. Here is an example function for adding a Web:
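A simplified sketch of such a function, using the Client Object Model’s `WebCreationInformation` (the function name and defaults here are illustrative rather than our production code, and an already authenticated `ClientContext` is assumed):

```powershell
# Illustrative sketch: create a subweb under the context's root web.
function Add-SPOWeb {
    param(
        [Microsoft.SharePoint.Client.ClientContext]$Context,
        [string]$Url,                 # leaf URL relative to the parent web, e.g. "projects"
        [string]$Title,
        [string]$Template = "STS#0"   # Team Site template
    )

    $webInfo = New-Object Microsoft.SharePoint.Client.WebCreationInformation
    $webInfo.Url = $Url
    $webInfo.Title = $Title
    $webInfo.WebTemplate = $Template
    $webInfo.UseSamePermissionsAsParentSite = $true

    # Queue the creation, then execute the batch against the server.
    $newWeb = $Context.Web.Webs.Add($webInfo)
    $Context.Load($newWeb)
    $Context.ExecuteQuery()
    return $newWeb
}
```

A deployment script would then call something like `Add-SPOWeb -Context $context -Url "projects" -Title "Projects"` as one step in a larger, XML-driven sequence.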
At this point in time the Client Object Model lacks functionality compared to its server-side counterpart. However, this is improving all the time, with new methods being added in every release.
In some cases it is possible to inspect the server-side object model using a tool like ILSpy (http://ilspy.net) and find (unsupported) ways to get the job done. For example, we found a way to add links to the Search Center navigation via this technique. I must stress that using an unsupported method should be for convenience only, and you should have a backup plan should it fail. We normally write this backup plan into our deployment documentation; it’s usually just a manual way to achieve the same thing, albeit more slowly.
I am now also seeing lots of discussion and examples around HTTP Remote operations to help fill the gaps in the Client Object Model. This is of course also unsupported but can be effective as a convenience and time-saver. We’ve used this effectively to map Search Crawled Properties to the Refinable Managed Properties in SharePoint Online. This is not supported by the Client Object Model and can take a huge amount of time so is ripe for automating. Here is a snippet showing how we call a function to update RefinableString00 with Crawled Properties:
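The snippet below is illustrative only: `Update-RefinableManagedProperty` is a hypothetical name for the kind of helper described above, since its internals drive the Search Schema pages over HTTP rather than a supported API, and its parameters are placeholders:

```powershell
# Illustrative call; the helper's internals use unsupported HTTP remote
# operations against the Search Schema pages, so treat the name and
# parameters as hypothetical.
$crawledProperties = @("ows_ProjectName")

Update-RefinableManagedProperty -Context $context `
    -ManagedProperty "RefinableString00" `
    -CrawledProperties $crawledProperties `
    -Alias "ProjectName"
```

Because the mapping is scripted, it can be repeated identically across Development, QA and Production tenants instead of being clicked through the Search Schema UI each time.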
In conclusion, automation using scripted deployment can be an extremely versatile and effective way to support your DevOps for SharePoint Online. At Perficient, SCRUM has proven to be a very effective methodology for SharePoint Online projects. Typically we are making the scripted deployment of any new feature part of the ‘Done Criteria’ for any development work. Scripting the deployment then very much becomes part of feature development and will be effectively tested in development environments before progressing to QA and Production.
We see a lot of scenarios where clients are moving their intranets successfully to the Office 365 cloud with SharePoint Online. This is the easiest, smoothest path to a social intranet on the Microsoft platform, due largely to the ever-closer relationship between Yammer and the rest of the services in Office 365.
That said, there are still plenty of enterprises out there who prefer either to keep their intranet on-premises or not to upgrade or migrate just yet. Many of those organizations would still like to get their bang for the buck with Yammer, however, and need a solution for integrating those social features into their on-premises environment.
By far the most common way to accomplish this right now is through the use of the Yammer Embed functionality (or specifically for SharePoint, the Yammer app for SharePoint) to embed specific news feeds on specific sites. This is easily the most obvious way to “socialize” an on-premises SharePoint intranet with Yammer.
That works, sure. But it’s not all that elegant. And if you’re using the Yammer app for SharePoint, this approach forces you to go in and update every Yammer feed whenever Microsoft updates the app (which is a pain).
A more forward-thinking, less common but emerging approach to a social intranet is to actually use Yammer as the intranet home.
This is an example of truly embracing enterprise social and may require a complete rethink from a lot of organizations as to how they approach an intranet, but it’s the direction things seem to be going. You make the social network your home, and instead of augmenting informational sites with social feeds, you augment social groups with links to informational sites using Pins and the Info window’s rich text / HTML editor feature.
Think about it. Here at Perficient, we’re in the midst of rolling out a new platform for time tracking, financials, and other fun line-of-business activity and reporting. We have both a Yammer group stood up to support that rollout, and a more traditional SharePoint intranet site.
What we’ve found in this scenario is that the Yammer feed has actually supplanted the informational site because it’s a much faster and more responsive way for people to get answers and collaborate. Links embedded in the Yammer page direct users back to SharePoint for the informational / non-collaborative content they need, but the social discussion and interaction is now the focus.
Of course, Yammer in general resists (i.e., doesn’t allow) any but the most basic customization. Fonts, styles, navigation etc., are all locked in “as is”. The only thing you can really change in Yammer is the header atop your page. That means we lose some control over branding, but gain quite a bit in interaction and employee engagement. For this use case, it’s a smashing success.
The question then becomes, “Can this approach work for an entire intranet, and not just one use case?”
To some extent, that depends on the users. At the end of the day, it all depends on where they go when they log on in the morning. Email? The intranet? Or their social network? Get the ball rolling with enterprise social and people will start skipping over the intranet; it’s almost a given. Use social to surface intranet content and the line starts to blur… which is a lot closer to where things are going in the cloud than a hodgepodge of on-prem intranet sites with embedded social feeds.
This is Part I in a two part series on how Perficient helped to support ProHealth Care in operationalizing their BI program, data governance, and the Business Intelligence Competency Center. Here, I’ll focus on the workstreams and the road map. In Part II, I’ll cover the members of the data governance steering committee as well as the initiation of data governance and data governance priorities.
I’d first like to share the approach ProHealth Care and Perficient took to operationalize ProHealth Care’s BI program, initiate some of the data governance activities, and help to operationalize the Business Intelligence Competency Center (BICC).
As you can see below, we applied Perficient’s Enterprise Information Management framework to focus our activities in developing the road map for ProHealth’s BI Program. We were principally concerned with four discrete workstreams to stand up the program, in addition to the core work that’s been undertaken to actually deliver the population health analytics to support the Accountable Care Organization (ACO).
One of the many advantages of using Office 365 is freedom from a myriad of worries at the data center (watch this video for an interesting glimpse at Microsoft data centers), server, and application levels.
One of the most important concerns for any enterprise is that of “business continuity” – making sure a service is both available and, in the event of trouble, restorable with minimum information loss. Two metrics are traditionally used to help define business continuity goals — Recovery Point Objective (“How much data can I afford to lose?”) and Recovery Time Objective (“How long can I wait for the service to be available?”). While Microsoft publishes “financially backed SLAs for up-time” (see Office 365 Trust Center), it does not provide specific RPO and RTO guarantees. RPO and RTO are, however, published for related services (see SharePoint Online Dedicated and Exchange Online Dedicated service descriptions.)
For most organizations, these ranges of RPOs and RTOs are likely to be acceptable. If they are NOT, the organization will need to design processes to meet the more stringent objectives. It is important to keep in mind that scenarios other than outright Office 365 failure may result in information loss (e.g. accidental document deletion). Some of these scenarios are well supported by the application (e.g. SharePoint recycle bin, versioning, etc.), but others are not (historical file deletions, file corruption, version overwrite, etc.)
For on-premises deployments, a number of 3rd party vendors have developed tools to support a wide variety of information loss and recovery scenarios. For Office 365, tools and technologies are beginning to appear; some are extensions of on-premises technology while others are cloud-only implementations. Here are a few available options:
When looking at these products and services, consider your specific use cases as well as the following:
As always, a well-formulated set of requirements based upon business needs will make the decision-making process easier. Technology in this area is rapidly changing, so always check for new developments.
A common complaint about data warehousing/BI has been time to market: the investment in calendar months required to stand up analytics is just too large. Estimates of the actual time required vary (depending on who you ask, and what their interests are) from a year to 24 months. The numbers are open to debate, but let’s go ahead and stick with the conventional wisdom that data warehousing typically requires a significant timeline to see results. This assumption then raises two questions:
Looking at the second question first, there’s a very simple answer: YES. Successful DW/BI projects can utterly revolutionize an organization’s processes and even their outlook. They can shine light on problems, point the way to new opportunities, and improve the daily work lives of employees at almost any level. I consider it a foregone conclusion that there is tremendous value in well-built DW/BI systems.
Of course, the caveat there is the whole “well-built” part. That’s where the delays creep in, and where the real timeline resides. In addition to the experience and expertise brought to the design and construction of these systems, the degree of involvement of the business also plays a very large role in how successful the solution will be. Too many gaps or failures on either side can result in less-than-satisfactory outcomes after a lot of time and effort.
So that leads us back to the first question above: does it have to take that long? I mean, upwards of 2 years to build a decent Business Intelligence solution? This answer is not nearly as easy because the factors that contribute to extending timelines in DW/BI projects are numerous and varied. For instance:
You get the point. The development work in and of itself is not necessarily what takes a long time. What takes a long time is when business needs are misunderstood or disregarded, when expectations aren’t managed, when the chosen technology platform is not well-aligned to business requirements — basically, when either side doesn’t fully understand what they are getting into, and there is misalignment in that area.
In the next few posts, I’ll go over various tools and techniques currently in the market that offer some kind of acceleration of the data warehousing process, and see what paths are available to speed up time-to-analytics. I will include the class of tools variously referred to as either “frameworks” or “accelerators”. I’ll talk about iterative development and the potential risks and benefits of using Agile methodologies. And I’ll discuss possible ways that planning in itself can help deliver results sooner rather than later.
Next time: Accelerators and Frameworks. Hope to see you then!
Jamie Stump, Parshva Vora and I, along with others from the Perficient family, attended Sitecore Symposium this past week. We absorbed a lot of knowledge about what is upcoming with Sitecore 7.5 and Sitecore 8. The message communicated from Sitecore centers on “experience”. The building blocks are being put in place for you, our clients, to help your customers have a custom and personal experience, while we as a partner provide solutions that allow you to place “experience before content”.
Sitecore is moving forward to be a top-tier provider for your marketing and communication goals through the Sitecore Experience Platform. With real-time marketing, growing demand across multiple channels, and content customized to your individual customers, you have the ability to win dedicated customers for life through an enriching experience delivered through you.
Now, trying to empathize by putting myself in your shoes after reading those paragraphs, I would probably think, “Well, that is a great bunch of words, but what does that really entail for me?”
Well it means a new approach to analytics data. Sitecore’s xDB, using MongoDB, provides the ability to store large amounts of data about your customers. It plays well with your current infrastructure and is extensible to your profile needs as well as scalable through various database design principles and patterns. Along with this, reporting of the analytics information is improved to use all of that collected information.
What a week it was! I am referring to the last week spent at Sitecore Symposium North America and the annual MVP Summit, which took place in Las Vegas. There was plenty to absorb, with as many as seven sessions in progress at the same time. Sessions were divided into three tracks: Product, Business and Developer. Obviously I couldn’t make it to all of them, but I did attend a good mix, and all were diverse in subject matter. However, from opening keynote to closing keynote, the emerging theme was clear: the Connected Consumer Experience!
Well, the concept of the consumer experience is not entirely new. At the symposium, stronger emphasis was placed on the term ‘connected’. The digital marketing landscape is continuously shifting as customers engage in business across several channels (email, websites, mobile sites, apps, social media, CRM, etc.), and this poses at least two immediate questions for any organization that takes its customers seriously.
Ok, I’ve got to admit I really meant to say “Almost everything you need to know in First Release.”
The more you share, the more you get. Believe in that? The Office 365 community does, and as a result, this week Microsoft hosted a “Delve YamJam” to coincide with the launch of the new Office 365 product called “Delve”. (If you are new to Delve, I highly recommend reading the earlier articles here and here to get to know your new friend.) Look at this screenshot of Delve from my demo tenant; looks pretty cool, huh?
Some great questions were asked and some great thoughts shared. I’ll summarize them here for the larger community. Microsoft responses came from Christophe Fiessinger, Kady Dundas, Josh Stickler, Mark Kashman, Cem Aykan, and, on the phone, Ashok Kuppusamy, Stefan Debald, Fredrik Holm, John Toews, and Robin Miller.
Hope this provides some insights around how Office Graph captures and renders signals. Check back for more details as I dive more into Delve.
Microsoft’s position as a Leader in Gartner’s 2014 Magic Quadrant for Social Software in the Workplace has moved to the top.