
Matthew Morse


SharePoint Conference 2011 – Day 1 Recap

Yesterday concluded day one of the 2011 Microsoft SharePoint Conference. While the 2009 SharePoint Conference unveiled SharePoint 2010 – and we learned today that the 2012 conference will reveal the next version – this year’s conference focuses on how businesses are using SharePoint, best practices and ultimately, how to take SharePoint to the next level and drive user adoption.

Watch this quick video recap, shot from the friendly confines of the PointBridge booth:

SharePoint 2010 – Random Tidbits, Part 1

I’m attending the Microsoft SharePoint Conference 2009 in Las Vegas where Microsoft has just taken the wraps off of SharePoint 2010. Since it’s a new product release and Microsoft has done a pretty good job of keeping the lid on the functionality in the new product until now, there’s a very high level of excitement and interest.

The individual breakout sessions have generally been good. I plan to write about a number of topics going forward (as I’m sure many other folks do, as well), but here are a couple of random notes that I’ve jotted down so far.

  • Microsoft announced its intent to support CMIS (Content Management Interoperability Services) as part of SharePoint 2010. Specifically, this promises simpler interoperability with other ECM systems (Microsoft mentioned EMC’s Documentum and IBM’s FileNet by name), which is good news for enterprises with heterogeneous content systems. The demonstration showed a screenshot of a SharePoint “document library” that was actually surfacing information coming from a FileNet repository. There have been product-specific solutions for this type of interop in the past, but nothing standards-based or built directly into the product. Microsoft is clearly trying to carve out a larger niche for itself in the ECM world.
  • A new concept of “service applications” replaces the shared services model used in MOSS 2007. (Andrew Connell has a good write-up on the details here; so does a MS team from the UK.) This architectural shift opens up a number of new topological options in SharePoint farm design, as an architect can decide on a service-by-service basis what should be shared between web applications or farms. For example, two farms might want to share the same user profile service but have different search providers. In MOSS, you’d have to configure user profile replication between two separate SSPs; in 2010, it will be possible to simply have the farms share a user profile service application and have separate search service apps.
  • One more item on service applications: Microsoft has made the service application a point of extensibility (which it really wasn’t in 2007). And it’s even available in SharePoint Foundation (the new name of Windows SharePoint Services – the version of SharePoint licensed with Windows Server).
  • A couple of small (but key, in my opinion) updates to SharePoint web content management (publishing) capabilities:
    • Font lockout – Almost everyone who is first introduced to the WCM features of MOSS 2007 asks this question when they see the rich text editor for page content: “Can I keep users from changing the fonts/sizes/colors?” And now in SharePoint 2010, the answer to that is “yes.” It’s possible to restrict the ability for that type of formatting – and not only that, but it’s possible to predefine styles that may be used by content authors. It’s a small feature, but one that will go a long way for making organizations feel more comfortable with a delegated authoring environment.
    • CQWP dynamic filtering – Many WCM sites in SharePoint have “landing pages” that aggregate content of a specific type or category using the CQWP (or a variation of the CQWP). (An example of this is on the PointBridge web site: look at the SharePoint solution page there and you’ll see that there’s a blog post, a person, and a case study all related to our SharePoint practice; visit the Exchange solution page and you’ll see those page areas change to match the change in solution area.) In MOSS, you configure these types of pages by creating a page layout with a web part zone, creating a page using that page layout, then dropping a CQWP into the zone and configuring it to roll up the appropriate content based on the desired context. In SharePoint 2010, the CQWP can be configured to be filtered by a dynamic token that reads a field value from the current page – meaning that the CQWP can be integrated into the page layout itself, eliminating the need to manually create those rollups each time.
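To make the dynamic-token idea concrete, here’s a sketch of how the filter properties of a CQWP definition might look in a .webpart file. This is illustrative only: the property names follow the 2010 CQWP as demonstrated at the conference, and “SolutionArea” is a hypothetical page column, not something from the product.

```xml
<!-- Fragment of a hypothetical CQWP .webpart definition.
     FilterValue1 uses the new dynamic token, so the query is
     filtered by the value of the current page's own SolutionArea field. -->
<property name="FilterField1" type="string">SolutionArea</property>
<property name="FilterType1" type="string">Text</property>
<property name="FilterValue1" type="string">[PageFieldValue: SolutionArea]</property>
```

Because the filter resolves per page at render time, one page layout with this CQWP baked in can serve every solution-area landing page.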

Overview of Records Management in SharePoint Server 2010

Note: The contents of this post are based on the SharePoint Server 2010 technical preview, and are subject to change without notice.

Microsoft has invested in a number of areas in SharePoint Server 2010, and many of those investments are aimed at making SharePoint an even more serious contender in the ECM space. This post will be an overview of the new records management capabilities of SP2010; I will follow this with a detailed treatment of a number of the areas that are mentioned below.

A Brief Look Back

Microsoft introduced records management with MOSS 2007, and the functionality is generally fine. There is a basic but powerful routing mechanism for classification of incoming records based on content type. The mechanism for handling record metadata works well. The interface to the record center is exposed as an easy-to-use web service, and the API provides a number of straightforward hooks for extensibility (e.g. the ability to create a custom record router). Finally, the ability to put documents on a hold (or multiple holds) and have those documents exempted from any expiration or destruction policies works well.

However, the features are critically limited in a few significant areas. First, in its default configuration, a SharePoint farm supports a connection to only one record center – which, depending on data volume, can be a significant scalability problem. In addition, documents submitted to a record center make a copy of the source document; the user doesn’t have the option of removing the document from the source location. Finally, the options for declaration of a record are limited; a user can choose to manually send a document to a record center, and it’s possible to send records using custom workflows, but there’s not a way to create rules regarding record declarations (e.g. that every Contract stored in a system must automatically be a record).

As noted, the record management API works well, and it’s possible to develop rich solutions on MOSS 2007 that compensate for some of the functionality that’s not provided natively. What’s exciting about SharePoint 2010 is that Microsoft has addressed many of these limitations and the out-of-the-box experience for records managers will present a very interesting case for use of the new platform.

And now on to the new features!

In-Place Record Declaration

One of the very significant changes in approach for SharePoint RM is the addition of the capability to declare records “in-place” without moving them to a record center. This is a feature that may be activated at the site collection level. Once activated, the in-place records management feature adds capabilities to the site collection and to document libraries that allow documents to be declared as records.

[Screenshot: declaring a record in place]

Once declared a record, a doc is subject to a different set of policies (more on this below), a different set of permissions (e.g. which users may edit/delete documents), and to the standard record hold functionality.
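For developers, the technical preview also appears to expose in-place declaration through the object model. A minimal sketch, assuming the Records class in the Microsoft.Office.RecordsManagement.RecordsRepository namespace and that the in-place records management feature is active on the site collection:

```csharp
using Microsoft.SharePoint;
using Microsoft.Office.RecordsManagement.RecordsRepository;

public static class RecordHelper
{
    // Toggles the record status of a single list item in place,
    // without moving it to a record center.
    public static void ToggleRecord(SPListItem item)
    {
        if (!Records.IsRecord(item))
        {
            Records.DeclareItemAsRecord(item);
        }
        else
        {
            // Whether undeclaration is allowed depends on the
            // site collection's record declaration settings.
            Records.UndeclareItemAsRecord(item);
        }
    }
}
```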

Multiple Connections per Farm

SharePoint 2010 allows farm administrators to define multiple connections to record centers per farm – and not simply to define the connections, but to specify more detailed parameters about the behavior of those connections. In addition to the behavior that MOSS supported (which was to define an entry on the “Send To” menu for a document, then copy that document to the specified record center), SP 2010 provides the ability to move the document to the record center, or to move the document, but leave a link in the original location. It’s also possible to define a connection that does not show up on the manual “Send To” menu – but which can be used in automated record submission scenarios. The details of setting up these connections look like this:

[Screenshot: record center connection settings]
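For scripted farm setup, a connection like the one pictured can also be defined through the administration object model. This is a sketch based on my reading of the preview bits – the SPOfficialFileHost class and its members are my assumption here, and the URLs and names are placeholders:

```csharp
using System;
using Microsoft.SharePoint.Administration;

// Adds a "Send To" connection on a web application that moves the
// document to the record center and leaves a link in the source location.
SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://intranet"));

SPOfficialFileHost host = new SPOfficialFileHost();
host.OfficialFileName = "Corporate Records";
host.OfficialFileUrl = new Uri("http://records/_vti_bin/officialfile.asmx");
host.Action = SPOfficialFileAction.Link;   // move, but leave a link behind
host.Explanation = "Archive to the corporate record center";
host.ShowOnSendToMenu = true;              // false = automation-only connection

webApp.OfficialFileHosts.Add(host);
webApp.Update();
```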

Better Integration with Information Management Policies

One thing that stands out about the new features provided by Microsoft for records management is their pervasive nature. While in MOSS records management sort of sat off to the side (with a connection only via the “Send To” menu), in SharePoint 2010, records management functionality can be found sprinkled throughout the product. This is good thinking on the part of Microsoft, as it reflects the perspective of records managers at organizations. Solid records management isn’t an afterthought: it’s something that must be incorporated into day-to-day business processes.

Perhaps this is seen most clearly in the incorporation of records rules into SharePoint’s information management policies – and specifically the policy around retention (renamed from “expiration” in MOSS). (Note: the retention policy logic itself has been vastly upgraded in SP 2010. That’s beyond the scope of this specific post, but it’s worth exploring if you’re interested in automated retention rules.) We see the change in direction first in that retention policies now allow two separate rule sets – one for documents that are not declared as records, and one for documents that are declared records.

[Screenshot: retention rule sets for non-records and records]

The second major integration point with information management policy is that there are now pre-defined actions related to records management. In the example below, I can configure my policy to automatically submit a document to a record center at a specific time – in this case, two years after the Contract Date.

[Screenshot: retention action to move the document to a record center]

There’s also an action that will declare a document as a record without submitting it to a record center, leveraging the new in-place RM features.

In short, it’s now possible to merge a record declaration and organization strategy with the retention and management policies of the front-end systems. Like many of the other items mentioned here, this is functionality that is possible in MOSS only with significant development effort.

Routing Rule Processing in Record Center Sites

There are a number of improvements in the processing of routing rules within record center sites, but I will focus on three here.

First, there’s now an ability to have a routing rule match on a set of logic, rather than just the document type itself. For example, you might want to send Contracts for a specific vendor to a different location than Contracts for another vendor. The only way to have done this in the past was through the use of separate content types. Now, the routing rule configuration in the record center allows for rules to be evaluated on the metadata of the item being received. In the example below, there’s an “Amount” column on the incoming content type. This rule will only be applied if the value in that column is greater than or equal to 10,000.

[Screenshot: routing rule condition]

A second significant improvement to the routing rule configuration is the ability to route to locations outside of the current record center site. In MOSS, default routing rules can only send documents to libraries within the current site. (You can extend that functionality by creating a custom record router, but that involves some development.) In SP 2010, the destination location may still be a local document library – or it may be any other site within the site collection that has a content organizer specified. (More on content organization in a future post.) Note the “Browse” button in the image below.

[Screenshot: routing rule destination]

The third significant change is the ability to have more control over the folder naming when documents are placed into document libraries. In MOSS, folders are named somewhat randomly based on the date of submission to the record center. This ensures that the number of documents in a given view is kept low, but doesn’t provide a logical structure for navigation. In the image above, you can see the option to specify a foldering strategy based on the values provided in the metadata. So in the Contract example, I can choose to have all contracts with the same client end up in the same folder; when a new client contract comes along, a new folder with that client’s name will be created automatically. This has positive implications not only for visual navigation, but for search relevance, as well.

And there’s more…

There are many additional detailed features related to records management. I believe I’ve hit the high points here, but there are a number of additional details:

  • Ability to stamp documents with a unique document ID, then find documents by that ID.
  • Ability to declare sets of documents together as a record. (See an introduction to document sets here.)
  • Ability to export a summary file plan spreadsheet from a record center summarizing the rules in place there.
  • Ability to copy documents on a hold to a specific SharePoint location.

I’ll cover many of these details in future posts.

Summary

Microsoft has invested heavily in records management in SharePoint 2010, and organizations that have an investment in SharePoint would do well to evaluate its capabilities. I don’t think it will outperform point solutions on a feature-by-feature basis in the RM space, but there is significantly more functionality than was present in the SharePoint 2007 version – enough that I think it will meet the standard needs of many companies. After all, one significant benefit of SharePoint is precisely that it’s not a point solution, but rather a platform that provides a breadth of solutions common to many organizations. It may not be the strongest option in each specific area, but the sum of the options that it provides makes it a great value for small offices and enterprises alike.

CAS challenge with application pages: LayoutsPageBase class requires Full Trust

If you’ve been developing with SharePoint (or .NET web apps) for awhile, you’re likely aware that it’s generally a good practice to deploy custom code to the BIN directory of the web application and explicitly specify the least set of permissions required to execute the code contained in your assembly. (If you’re interested, here’s a write-up with a link to a slide deck covering this topic in more detail.)

On a recent project, I created a SharePoint application page to be hosted in the layouts directory. The page class inherited from LayoutsPageBase, and the ASPX file was set to inherit from this class/assembly. Pretty straightforward.

As noted above, I wanted to deploy this assembly to the application BIN directory (not GAC) and use a CAS permission policy to allow it to execute within SharePoint. However, when I did that and tried to browse to the page, I got this exception:

Request failed.
   at System.Reflection.Assembly._GetType(String name, Boolean throwOnError, Boolean ignoreCase)
   at System.Web.UI.TemplateParser.GetType(String typeName, Boolean ignoreCase, Boolean throwOnError)
   at System.Web.UI.TemplateParser.ProcessInheritsAttribute(String baseTypeName, String codeFileBaseTypeName, String src, Assembly assembly)
   at System.Web.UI.TemplateParser.PostProcessMainDirectiveAttributes(IDictionary parseData)

In most other cases in which I’ve gotten permissions errors in the past, the exception has indicated which specific permission was missing (e.g. SqlClientPermission or FileIOPermission), which is very helpful, as you then know what to fix. There was no such hint here, but I knew the issue was security-related, since the page worked fine when I deployed the assembly to the GAC.

After some head-banging, I started Reflector and took a look at the LayoutsPageBase class (which I should have done much sooner…). The issue was immediately apparent:

[PermissionSet(SecurityAction.InheritanceDemand, Name="FullTrust"), PermissionSet(SecurityAction.LinkDemand, Name="FullTrust")]
public class LayoutsPageBase : UnsecuredLayoutsPageBase

Note the security attributes on the class. This post has a nice write-up on the specifics of InheritanceDemand and LinkDemand if you’re interested, but the upshot is that you can’t directly call into this class nor inherit from it and call inherited methods without having full trust. A step up the inheritance hierarchy to UnsecuredLayoutsPageBase shows the same attributes there, too.

So here’s the point of this post: you can’t create application pages that inherit from LayoutsPageBase (or UnsecuredLayoutsPageBase) if you want your code to run without full trust.

As I see it, here are the options:

  1. Deploy to the GAC. This is the easy way out, but for my project (which runs in a shared environment) it was not an option.
  2. Deploy to the _app_bin directory. SharePoint ships with a policy file that grants code in the _app_bin directory full trust. Perhaps an option, but it kind of defeats the purpose of CAS, doesn’t it? The point is that I don’t want my code to run fully trusted.
  3. Inherit from System.Web.UI.Page and manage security yourself. This was the option I chose in the end. You can inherit from the standard ASP.NET page class, but you have to reimplement some of the security checks that the layouts base classes were doing for you. For example, the LayoutsPageBase class has a Boolean property called RequireSiteAdministrator, which, if true, ensures that the executing user has site collection admin rights in order to view the page. You can do this type of check yourself; it just takes a little more work. When inheriting from Page, you simply need to add the AspNetHostingPermission (with a “Minimal” level) to your CAS declaration, and deployment to the BIN with a custom CAS policy works fine.
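Here’s a sketch of option 3. The class name is my own, and the check only approximates what RequireSiteAdministrator was doing for you – adjust it for whatever security your page actually needs:

```csharp
using System;
using System.Web.UI;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Utilities;

// An application page base that avoids the FullTrust link demand on
// LayoutsPageBase by inheriting directly from System.Web.UI.Page.
public class BinFriendlyPage : Page
{
    // Rough stand-in for LayoutsPageBase.RequireSiteAdministrator;
    // derived pages override this to demand site collection admin rights.
    protected virtual bool RequireSiteAdministrator
    {
        get { return false; }
    }

    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);

        if (RequireSiteAdministrator && !SPContext.Current.Web.CurrentUser.IsSiteAdmin)
        {
            // Sends the user to the standard access-denied experience.
            SPUtility.HandleAccessDenied(
                new UnauthorizedAccessException("Site collection administrator rights are required."));
        }
    }
}
```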

The Almost-Forgotten Role: Business Search Administrator

Most organizations that use MOSS make use of SharePoint search in one form or another, but I’ve seen few that really get their dollar’s worth out of SharePoint’s search capability. The most common configuration that I’ve personally observed is the one that SharePoint configures automatically: all SharePoint sites are included in a single content source, and some kind of straightforward indexing schedule gets applied to make it “just work.”

And it does “just work.” The out-of-the-box relevancy engine is pretty good.

But it can be much, much better. Enter the role of the business search administrator. Burt touched on this idea in his blog a few weeks ago, and I’d like to expound a little.

First, I say business search administrator because I think many organizations think of search administration as a primarily technical role and approach it as such from a staffing perspective. (There is certainly a technical component to administering SharePoint’s search, but that’s not the subject of this particular post.) Because of the divide between business and IT in many companies, this means that search may be measured based on availability, its impact on system resources, and breadth of content sources rather than by the relevance of its results or its value to business users.

Here are some ways I think a business search administrator (BSA) can help increase the value that an organization gets out of SharePoint search.

  • Understand search. Maybe this one’s obvious, but the BSA needs to know something about how search works. They don’t need to know all the technical details, but a working understanding of content sources, scopes, crawl rules and schedules, and the basics of how SharePoint’s search relevancy works are good starting points.
  • Authoritative pages and demoted sites. Once the BSA understands some of how SharePoint calculates search relevance, he or she can start adding authoritative pages and demoted sites. Each search result is given a rank at index time based on its “click distance” from an authoritative page. The demoted site list (called non-authoritative pages in the administration pages) allows the BSA to manually decrease the relevance of specific content.
  • Managed properties. Creating a managed property in search enables a number of additional capabilities beyond the standard full text indexing (including property-based advanced searches). But even without leveraging those additional features, managed properties are considered more relevant than unmanaged ones by SharePoint’s relevancy calculation. For example, if you associate documents with vendors and capture the vendor name in a piece of metadata, you can make that a managed property such that when you search on the vendor name, the relevance of those documents with that vendor’s name associated will be higher.
  • Scope definitions. Search scopes allow different views on the SharePoint search index based on specific inclusion and exclusion rules. For example, you can create a scope that restricts results for a search only to certain financial documents (based on your own criteria). When a user is looking for a specific financial document and searches using that scope, he or she won’t have to dig through extraneous documents from HR looking for the right file. Determination of what scopes should be created and the rules that define those scopes is a critical differentiator between a run-of-the-mill search implementation and an excellent one.
  • Thesaurus and noise word maintenance. SharePoint ships with a standard thesaurus and noise word file: it knows the basics of the language and does an adequate job of equating similar concepts and filtering out common or less meaningful words. What it doesn’t know is the details of your business. There’s no way for SharePoint to know that what you refer to as “WCI” is really the same as “Widget Capacity Index,” and that “widget” is what you call it now, but last year you called it a “whutsit.” And then there’s noise: if all your documents have your company name in them, maybe that’s really just “noise” when it comes to search.
  • Keywords/best bets. While keywords don’t impact the relevancy of the search ranking itself (see note in Brian Wilson’s blog), they do affect user perception of the relevance and freshness of the search results. In addition, while many of the settings mentioned here are common to the SharePoint farm or SSP, keywords are stored at a site collection level. This means that the keywords and best bets can be different by site collection and can return information targeted by the business purpose of each site collection. For example, you might specify a different best bet when someone searches on the words “group policy” in your HR site collection than when they search within your IT site collection.
  • Search usage analysis. Finally, search can improve with age – in fact, it can improve a lot if there’s an administrator paying close attention to the search usage information that SharePoint provides. What terms are users using to search? And what are they expecting to find when they search on those terms? What queries are not bringing back any results at all? What scopes are users using and not using? Armed with this information, the BSA can make calculated changes to authoritative pages, managed properties, thesaurus, keywords, etc. to improve the search experience. Careful usage analysis is the most critical ongoing responsibility of the BSA role.
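To make the thesaurus point above concrete: expansion and replacement sets live in an XML file on the query servers (tsenu.xml for English). A sketch using the terms from the example – the entries themselves are obviously hypothetical:

```xml
<XML ID="Microsoft Search Thesaurus">
  <thesaurus xmlns="x-schema:tsSchema.xml">
    <!-- Treat the abbreviation and the full phrase as equivalent at query time -->
    <expansion>
      <sub>WCI</sub>
      <sub>Widget Capacity Index</sub>
    </expansion>
    <!-- Rewrite the obsolete term to the current one -->
    <replacement>
      <pat>whutsit</pat>
      <sub>widget</sub>
    </replacement>
  </thesaurus>
</XML>
```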

So to the original point: why make this a business role? Because it’s likely that someone in the business knows best when content is stale or irrelevant and should be demoted. Someone in the business is often best suited to know which page(s) link to the most relevant and timely content and should be listed as authoritative. Someone in the business best knows the way users think about information and the scopes that would support that thinking. Someone in the business understands the relevant keywords, their synonyms, and the page or site to which users should be directed when searching for specific items. There’s technical competence required, but to my eye, it’s a business-focused job.

See if you can take your own search installation to the next level. Try implementing this role and see how the value of your investment in SharePoint search grows over time.

Scope your configuration to match your feature

This is a quick post with a message that may seem obvious, but I’ve seen plenty of code that makes me think maybe it isn’t. So here’s my suggestion: store your application configuration such that it matches the scope of your SharePoint feature.

To elaborate, I’ve mentioned before that SharePoint developers who come from a custom .NET development background are generally inclined to put application configuration settings in the web.config – which works fine for some things. However, what if you have a setting that needs to differ by site collection and you have many site collections in your web application? You could certainly create a site collection-specific AppSettings key for each site collection in your web.config, but that seems messy to me.

Here are my suggestions for where you should think about storing your configuration information, based on the scope of your feature:

Feature Scope → Configuration Options

Farm
  • SPConfigStore
Web Application
  • SPConfigStore
  • web.config
Site Collection (SPSite)
  • SPConfigStore
  • List in root web of site collection
  • Properties collection on root web
Site (SPWeb)
  • SPConfigStore
  • List in web
  • Properties collection on web

The references to SPConfigStore here refer to this CodePlex project submitted by Chris O’Brien. Note that items in the store are grouped by “Category,” so use of the SPConfigStore approach requires some thought about naming conventions and how the categories will be assigned.

The code below shows an example of the use of the Properties collection on an SPWeb:

/// <summary>
/// Returns a property value stored in an <see cref="SPWeb" /> instance
/// </summary>
/// <param name="web">the web whose property bag is queried</param>
/// <param name="name">the name of the property to retrieve</param>
/// <returns>the value for the property; empty string if the property doesn't exist</returns>
private static string GetWebSetting(SPWeb web, string name)
{
    if (web.Properties.ContainsKey(name))
    {
        return (string)web.Properties[name];
    }

    return string.Empty;
}
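A matching setter is just as short. One detail worth a comment: the property bag does not persist changes until Update() is called. (This is a sketch under the same assumptions as the getter above.)

```csharp
/// <summary>
/// Stores (adds or overwrites) a property value on an <see cref="SPWeb" /> instance
/// </summary>
private static void SetWebSetting(SPWeb web, string name, string value)
{
    // The indexer adds the key if it is missing and overwrites it otherwise.
    web.Properties[name] = value;

    // Changes to the property bag are not persisted until Update() is called.
    web.Properties.Update();
}
```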

Next time you fill out the “Scope” attribute of your feature.xml, make sure your plan for configuration management matches your application.

Custom timer job locking on multi-WFE farms

Creating custom timer jobs within SharePoint enables a SharePoint developer to add a scheduled component to a SharePoint application. (Looking to get started with timer jobs? Andrew Connell has a good write-up on MSDN.)

On a recent project, I created a couple of custom timer jobs and ran into some interesting behavior related to the job locking that I don’t think is documented very well at the moment.

When you create a custom timer job, you inherit from SPJobDefinition. A couple of the constructors for the job allow you to specify a value for the lock type used by the job. This value is set as an enum of type SPJobLockType.
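A minimal job skeleton showing where the lock type is chosen (the class name, job name, and schedule are placeholders of my own):

```csharp
using System;
using Microsoft.SharePoint.Administration;

public class NightlyCleanupJob : SPJobDefinition
{
    // Parameterless constructor required for serialization by the timer service.
    public NightlyCleanupJob() : base() { }

    public NightlyCleanupJob(SPWebApplication webApp)
        : base("NightlyCleanupJob", webApp, null, SPJobLockType.Job)  // <-- lock type chosen here
    {
        Title = "Nightly Cleanup Job";
    }

    public override void Execute(Guid targetInstanceId)
    {
        // Scheduled work goes here; runs under the timer service account.
    }
}
```

A feature receiver would typically instantiate this job, assign it an SPSchedule (e.g. SPDailySchedule), and call Update() to register it.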

The MSDN documentation details the available values for the SPJobLockType enum as follows:

ContentDatabase – Locks the content database before processing.
Job – Locks the job to prevent it from running on more than one machine.
None – No locks.

These values work about as you’d expect (if you can tell what to expect from the descriptions) for single WFE farms.

However, for farms with multiple WFEs, I think this information is sketchy at best and inaccurate at worst. Here’s how I’d modify the documentation based on my own experience:

ContentDatabase – Locks the content database before processing. If more than one WFE is part of the farm, the job is scheduled on only a single WFE. If that WFE is removed from the farm, the job is not scheduled on any other WFE automatically, so the job will no longer be executed until 1) the WFE is added back to the farm or 2) the feature that registered the job is deactivated and reactivated.
Job – Locks the job to prevent multiple instances from running on the same machine. The job will be scheduled on every WFE in the farm and execute on all of them. (Note that this is quite different from the MSDN description; Andrew Connell’s article alludes to this same definition.)
None – No locks. The job will be scheduled on every WFE in the farm and execute on all of them without checking for running instances.

It’s a challenge if you want a design in which the farm manages the job so that it executes on only one machine and is fault-tolerant – that is, if one WFE leaves the farm, the job simply executes on another WFE. It appears that the only option for accomplishing this scenario is to use the “Job” or “None” lock types, which schedule the timer job on every WFE server, and then implement custom communication between the WFEs (via the database, remoting, or some other method) so that they determine among themselves that only one will execute at any one time. It’s not a great solution – and it’s one that feels like it should have been part of the SharePoint farm infrastructure – but it’s functional.

To SPD or not to SPD, that is the question

On several of my recent projects, the question has come up about the appropriate use of SharePoint Designer 2007 (or SPD) within an organization’s SharePoint installation. The question is sometimes around its role and scope; other times, it’s point-blank: should we use SharePoint Designer? And the question will only get asked more with licensing changes in the offing.

But as every good consultant knows, the answer to any clear yes-or-no question is “it depends.” :-)

In this case, it really does depend. It depends on how rigid an organization wants to be with its usage policies, how granular it wants to be with its permissions, and how its overall SharePoint install is governed. This post isn’t meant to be a comprehensive up or down answer to the question; rather, I’m simply sharing some of the things we often discuss with our clients when making a decision about the role SPD will have within their SharePoint implementations. If you have experience to share, please feel free to add it within the comments below.

The sections below offer some ideas to consider as you make a choice for your organization or client.

Branding

Everyone wants SharePoint – but they don’t want it to look like SharePoint, or at least not without a personal touch. And the easiest way to modify how SharePoint looks is to crack open SPD and start making changes to master pages, page layouts, etc.

This approach works well in small implementations where a small group of SharePoint admins (or a single admin!) has access to SPD and controls the entire environment. However, my suggestion is that it doesn’t scale very well to large enterprises. (It’s not just my idea: see best practice #9 in this MSDN article.) In large environments, there’s often a desire to have some visual consistency throughout the SharePoint site, and the best way to maintain this is through a limited set of predefined master pages and page layouts – items that site owners throughout the organization are not allowed to modify. The best way NOT to maintain visual consistency is to deploy SharePoint Designer to numerous individuals and watch them exercise their powers of design.

In addition to consistency concerns, you should consider the ramifications of customizing (unghosting, if you prefer – the explanation of both is here) pages for your specific situation.

Workflow

Another very useful feature of SharePoint Designer is its ability to create no-code workflows. These workflows can be attached to a document library and provide a great deal of flexibility. The primary constraints of SPD workflows are that each may only be attached to a single document library and that the author is limited to the functionality provided by the available activities. (Note: it is possible to augment the activities available for SPD workflows, and there are some pretty useful CodePlex projects out there – like this one.) (Another note: there’s a less well-understood constraint of SPD workflows around user context. Take a look at this post for details. Thanks, Travis!)

The single-document-library limitation isn’t a big deal in small organizations, but in large ones it can present a challenge. SharePoint use tends to be viral, and usage patterns spread between teams and workgroups as colleagues see how others are solving business problems. In these cases, there’s no great way for other teams to leverage someone else’s SPD-authored workflow, so multiple copies end up being created throughout a SharePoint farm – which is fine until something needs to be updated.

My own recommendation for mid- to large-sized organizations when it comes to workflow is to take a careful look at the products offered by K2 and Nintex. These products are reasonably priced and give end users the same ability to create their own workflows. However, they also allow the workflows to be managed at a higher level and shared across sites, document libraries, etc.

(Note: yes, I know you could write a workflow in Visual Studio, too. However, most people turn to SharePoint Designer because they don’t want to undertake the hassle or cost of development.)

Data Integration

SharePoint Designer can create web parts that can’t be built through the SharePoint user interface – namely, data form web parts. These web parts allow customized presentation of data from SharePoint lists (or other data sources, e.g. SQL Server). (This site has a nice write-up on configuring a data form web part.) In my opinion, this is one of the most useful features of SharePoint Designer.

There are alternatives, however. Products from Bamboo, CorasWorks, and Lightning Tools (just to name a few) provide means for pulling information from outside data sources and presenting it in SharePoint. In Bamboo’s solution, you can actually configure a data form web part without using SharePoint Designer.

Summary

It’s difficult to summarize a product as large as SharePoint Designer (and SharePoint) in a few paragraphs. The issues I’ve presented here are the ones that I end up talking about the most often, but they are certainly not a thorough treatment. However, hopefully I’ve shared something that’s helpful for your situation and provides some context for my recommendations below.

Generally speaking, my personal recommendation is that small to mid-sized organizations (e.g. those with one or a few SharePoint administrators and a handful or fewer site collections) use SharePoint Designer in a very limited capacity. SharePoint admins can make good use of it, but the organization would be well served to get them some training first.

For large organizations, my recommendation is that SharePoint Designer not be used in production and that the company rely on other means of putting enhanced functionality in the hands of its users. That means good master page and page layout options for publishing sites, and consideration of third-party tools for workflow and data integration.

Those are my thoughts. What are yours?

HOW-TO: Group search results by a property value in SharePoint

In my last post, I described the process that I follow when customizing search results in SharePoint using the Core Search Results web part that ships with MOSS. If you haven’t seen it yet, I suggest you start there for some context on the rest of this post.

On a recent project, I ran across the requirement of grouping search results by a managed property. It seemed relatively straightforward from a functional perspective and, given the XML-based architecture of SharePoint search, feasible from a technical standpoint as well. It turned out to be a little more difficult than I expected.

First, XSLT 2.0 has a number of features that enable result grouping, but as my colleague Bert recently noted, XSLT 2.0 is not supported in SharePoint. If you use it in a search result transform, you’ll get the friendly “An error has occurred in this web part” message.

Fortunately, there’s another approach: enter the Muenchian method. (If you’re interested in the theory behind it and some of the thought behind grouping and sorting in XSL, check out this useful site by Jeni Tennison.)

In practice, incorporating the approach into SharePoint’s search transformation isn’t difficult. The first step is to declare a key referencing the managed property that you wish to group by:

<xsl:key name="results-by-author" match="Result" use="author" />


In this case, I’m specifying that I want to match on “Result” elements (the parent element of each search result item in SharePoint’s search result schema) and create a key on the values of the “author” child element.
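To make that concrete, here’s a simplified sketch of the result XML that the transform receives (element names follow SharePoint’s search result schema; the values and the exact set of child elements are made up for illustration and will depend on your Selected Columns configuration):

```xml
<All_Results>
  <Result>
    <id>1</id>
    <title>Quarterly Report</title>
    <author>John Doe</author>
    <url>http://portal/docs/q1.docx</url>
  </Result>
  <Result>
    <id>2</id>
    <title>Annual Summary</title>
    <author>John Doe</author>
    <url>http://portal/docs/annual.docx</url>
  </Result>
</All_Results>
```

With the key declared above, `key('results-by-author', 'John Doe')` would return both Result elements.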

Now that the key is created, within the XSL, I can reference results using the key:

<xsl:for-each select="Result[count(. | key('results-by-author', author)[1]) = 1]">
    <xsl:sort select="author" />
    <div class="searchCategory">
        Author: <b><xsl:value-of select="author" /></b>
    </div>
    <xsl:for-each select="key('results-by-author', author)">
        <xsl:variable name="id" select="id"/>
        <xsl:variable name="url" select="url"/>
    <!-- individual result treatment omitted -->
    </xsl:for-each>
</xsl:for-each>

Note that there’s an outer for-each statement that iterates through the items I’ve grouped on (the distinct authors, in this case), and an inner for-each that iterates through all the results with that author.

The output looks something like this:

[Screenshot: search results grouped under bold author headings]

If you’re interested, the full XSL for this output is available for download here.

This approach should work effectively on any managed property, as well – simply change the key definition as desired for your situation.
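For example, to group on a different managed property – say, a hypothetical sitename column you’ve mapped into the results – only the key and the expressions that reference it change:

```xml
<xsl:key name="results-by-site" match="Result" use="sitename" />

<xsl:for-each select="Result[count(. | key('results-by-site', sitename)[1]) = 1]">
    <xsl:sort select="sitename" />
    <div class="searchCategory">
        Site: <b><xsl:value-of select="sitename" /></b>
    </div>
    <xsl:for-each select="key('results-by-site', sitename)">
        <!-- individual result treatment as before -->
    </xsl:for-each>
</xsl:for-each>
```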

HOW-TO: Customize SharePoint’s Core Search Results Web Part

The purpose of this post is to share the process that I’ve begun to follow in my own work related to search customization and to provide an opportunity for you to share what’s worked for you, as well.

The post assumes that you’re generally familiar with how search results are customized using the SharePoint search web parts. There are numerous resources available for the mechanics of search result customization within SharePoint. For example, Matt McDermott has a good series of posts giving an example of customization of search results for images (see parts 1, 2, 3, and 4). His series does a good job of explaining how the various components of SharePoint’s search architecture (e.g. managed properties, scopes, etc.) fit together and are configured. There are also numerous examples of the XSL that you can use in various instances to achieve a desired result.

Here’s my process:

Step 1: Ditch the OOB “XSL Editor” button

The core search results web part has a nice big friendly XSL Editor button that pops up a text editor for the XSL used to transform the search results.

[Screenshot: the XSL Editor button in the web part tool pane]

The button is so big and dominant on the admin panel it practically begs you to click it. My advice? Don’t. It will waste your time and annoy you with its clunky editing and slow refreshes. Instead, use the XSL Link option found in the Miscellaneous properties and provide a reference to your XSL transformation stored in an external file:

[Screenshot: the XSL Link property under the Miscellaneous section]

So where do you put the actual XSL files? I suggest you store your XSL transformations in a document library. You can use one of the existing ones (e.g. the XSL Style Sheets document library if you’re using the publishing features) or you can create your own. There are several benefits to storing the XSL in a document library instead of embedded within the web part:

  • Better reusability. If you’re going to have multiple search interfaces throughout your site and want to customize all of them, it makes sense to have them all refer to a common XSL file. No sense updating all those web parts for every little XSL tweak – unless you REALLY enjoy web part administration.
  • Versioning and content approval. Web part configuration information – that is, an instance’s properties stored as XML – is not versioned along with a page. Even if you’ve got publishing turned on or are using web part pages stored in a document library with versioning turned on, changes to web parts stored in the page are not versioned along with the page (unlike changes to the page’s metadata via field controls). You can mitigate some of the impact of this on your search configuration by storing your XSL transformations in a document library with versioning enabled. In addition, the OOB content approval features will also carry through to your search results: that is, if content approval is enabled on your document library and an unapproved change is made to the XSL, the search results for all users who do not have permissions to view drafts in that DL will be unaffected.
  • Development experience. Maybe this one is obvious, but if you choose to store the XSL in the web part itself, you’ll be fighting the OOB text editor for the XSL, or at least end up doing plenty of cutting/pasting into it. The postbacks it requires are also slow. If you store the XSL in a file in a document library, you can open the DL in Explorer view and simply open the XSL directly in your editor of choice, then refresh your search results page after you save the file to see the effect of your changes.
  • Deployment options. If it bugs you to have to hit the database to access the XSL file, deploy it as a module in a feature and have it be a ghosted file in the document library. This gives you the performance boost of a file-system-based customization while enabling the items mentioned above.
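As a rough sketch of that last option, the feature’s element manifest might look something like this (the module name and target library here are hypothetical – substitute your own):

```xml
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <!-- Provisions the XSL into a document library as a ghosted (uncustomized) file -->
  <Module Name="SearchXsl" Url="XSL Style Sheets">
    <File Url="showcolumns.xsl" Type="GhostableInLibrary" />
  </Module>
</Elements>
```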

Step 2: Start with a canned XSL file

I have three template XSL files that I always use to start the customization process, each with a different purpose.

oobtransform.xsl (download)

This file contains the out-of-the-box XSL transformation provided by SharePoint. I simply pulled it out into a file for cases when the project I’m working on involves mostly small modifications to the OOB configuration.

rawxml.xsl (download)

This file contains no transformation at all, and instead simply emits the XML that is returned by the search query. MSDN’s documentation refers to this approach. This file is valuable when you want to see the exact information (including markup within elements) that is returned.
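If you’d rather build this one yourself, a minimal transform along the lines MSDN describes simply copies the result document into an xmp element so the browser renders the markup as-is:

```xml
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes" />
  <!-- Emit the entire search result XML verbatim -->
  <xsl:template match="/">
    <xmp><xsl:copy-of select="*" /></xmp>
  </xsl:template>
</xsl:stylesheet>
```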

showcolumns.xsl (download)

This file contains XSL that displays the information returned from the search query in a tabular format. This is probably the file I use the most when making sure that all managed properties are being returned correctly, etc. The output from search when this XSL is used looks something like this:

[Screenshot: search results rendered as a table of managed property values]

Step 3: Ensure any additional properties are included

The next step I typically follow is to configure the search results to include all necessary managed properties. These may be properties that I’ve configured myself, or they may be ones that SharePoint provides automatically. In either case, inclusion of the properties is specified using the Selected Columns property of the core results web part:

[Screenshot: the Selected Columns property in the web part tool pane]

Clicking the “…” button launches a dialog that displays the information a little better:

[Screenshot: the Selected Columns XML in the editing dialog]

To add columns to your results, simply add additional Column elements as children of the Columns element. Remember that since you’re dealing with XML, the column names are case-sensitive and must match exactly how the managed property is defined in the SSP.
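For instance, adding a hypothetical SiteName managed property to the defaults looks roughly like this (abbreviated – your web part will list many more of the default columns):

```xml
<root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Columns>
    <Column Name="Title" />
    <Column Name="Author" />
    <Column Name="Path" />
    <!-- Added column; the Name must match the managed property exactly -->
    <Column Name="SiteName" />
  </Columns>
</root>
```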

When I’m adding columns, I typically set the core results web part to use the showcolumns.xsl file mentioned above. When I add the reference to the additional column (or columns), I can quickly verify that they’re showing up as expected in the search results.

Step 4: Customize the XSL to your desired output

Lastly, now that you’ve got your web part set to read the XSL from a file, and are also sure that it’s bringing back the proper data, simply customize the XSL to provide the desired presentation.

This is the process I usually follow in customizing the output of the core search results web part. Hopefully it helps provide you with some thoughts as you’re doing the same.