PointBridge Blogs


Lync 2010 and LLDP – off by just a bit

One of the very useful new features of Lync 2010 Phone Edition is support for LLDP (Link Layer Discovery Protocol) to allow phones to receive configuration information from the switches they plug into. If you have any experience with Cisco IP Telephony, you know that Cisco Discovery Protocol (CDP) is used for similar purposes to assign voice traffic to a separate VLAN. This works great, but really only with devices (like Cisco phones) that speak CDP.

LLDP is an open standard that has been evolving over the years, and I'm glad Microsoft has chosen to comply with it for Lync Phone Edition. Eventually, LLDP will also be used by the Lync soft clients on PCs; for now, thanks to a bug or some other limitation, LLDP doesn't quite work in that scenario yet. Nonetheless, I have found LLDP to work well with Lync Phone Edition and Cisco switches.

However, I have identified a bug, confirmed by Microsoft: you can't use LLDP to assign a VLAN ID higher than 512 with Lync Phone Edition.

When you use LLDP to assign a voice VLAN to phones, you set the configuration on the switch itself. Below is what you configure on a Cisco switch:

(global command)

lldp run

(interface-specific commands)

interface FastEthernet1/0/1
 switchport access vlan 600
 switchport mode access
 switchport voice vlan 700

That's it. When an LLDP-compliant device plugs in, the Cisco switch tells it to use VLAN ID 700. Lync Phone Edition, however, processes the VLAN 700 information but never actually assigns the VLAN to the phone. VLANs 1-512: fine. VLANs 513 and up: no go.

Doing some reading on the LLDP spec, the VLAN ID is a 2-octet field, which should yield a maximum VLAN ID of 65,535. Even if only 12 of those bits were usable (the 802.1Q VLAN ID range), that would still get us to 4,095 as the max. A limit of 512 only makes sense if the phone were using just 9 bits (2^9 = 512). So there's the issue: we need Lync Phone Edition to use more than 9 bits for the VLAN ID.
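To sanity-check that arithmetic, here is a quick, purely illustrative PowerShell session:

```powershell
# Maximum VLAN ID for each candidate field width
[math]::Pow(2, 16) - 1   # 65535 - a full 2-octet field
[math]::Pow(2, 12) - 1   # 4095  - 12 usable bits (the 802.1Q VLAN ID range)
[math]::Pow(2, 9)        # 512   - the observed ceiling, i.e. only 9 bits
```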

As I mentioned, this is a confirmed bug and should be addressed soon with a Cumulative Update.

Take Your Day Back With PowerPivot

Microsoft recently released a free Excel 2010 add-in that uses PowerPivot to analyze your schedule from your Outlook calendar. The add-in, named the Calendar Analytics Tool, has two main benefits. The first is that it is extremely useful (and cool). Using the familiar Excel pivot table functionality, including slicers, you can graphically display all of the meetings for a given period of time by meeting topic, attendees, and categories, along with the details behind the high-level statistics. You now have evidence that the meeting about the upcoming meeting, and the debriefing meeting that followed it, are taking up too much of your valuable time.

The second, and possibly more valuable, benefit is that you now have a practical application to promote self-service BI and PowerPivot within your organization. Everyone understands the concepts around meetings and time, and this exercise can be a real eye-opener into how one uses one's schedule. It does not take a power user or super-analyst to use this tool and understand its benefits. Rather, it will help the everyday user start to think about what else could be possible through self-service BI and analytics at the speed of thought.



SharePoint as a Digital Marketing tool

In digital marketing, one of the most commonly used pieces of functionality is the catalog. I recently worked on a solution where SharePoint's unique features helped build a flexible, robust information request/catalog module. One of our main goals was to ensure that the information in the catalogs could be maintained by non-IT staff.

We implemented a solution capable of handling multiple catalogs from a single list. A SharePoint list was created to store information about the items, such as product description, features, cost, etc. Let's call this list the "Catalog list"; each item in it was classified by category, and each category represents a separate catalog. To put things in perspective, consider the example of building an e-commerce site for a high-end automobile dealer. Possible categories would be Cars, Trucks, and Motorcycles.

We added an additional column called Sub Category. This allowed each item to be further classified into sub-categories, giving us additional capability to better organize information in the catalogs. Staying with our example of an e-commerce site for an automobile dealer, each category could be further classified by vehicle size, such as Compact, Standard, and Full Size. Using the Sub Category is optional; it is used only in scenarios where it helps better organize items displayed on the web page. In our solution, the items belonging to a certain sub-category within a category are displayed on a single tab of the catalog web page.

The Catalog list contains the information about each item; brochures (PDF files) for the item are contained in a document library, and images showcasing the item are contained in an image library. Lookup columns create relationships between these lists, allowing a many-to-many relationship: more than one brochure or image can be associated with a particular item in the Catalog list, and a brochure or image can likewise be associated with more than one item.

A lookup column on the document library links each document to the corresponding item in the Catalog list. Similarly, a lookup column on the image library links one or more images to items in the Catalog list. Linking an item to more than one image makes an image gallery on the catalog web page possible.

We used a CAML query to retrieve information from the Catalog list based on the category.

newquery.Query = "<Where><Eq><FieldRef Name='Category' /><Value Type='Choice'>" + strCategory + "</Value></Eq></Where><OrderBy><FieldRef Name='SubCategory' /><FieldRef Name='SortOrder' /></OrderBy>";

Brochures and images were also retrieved through CAML queries. These queries retrieve items from the document or image library based on the value of the lookup column.

newquery.Query = "<Where><Eq><FieldRef Name='LookUpColumn' /><Value Type='Lookup'>" + strItemName + "</Value></Eq></Where>";
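For context, here is a minimal sketch of how such a query might be executed with the SharePoint object model; the site URL, list name, and category value are hypothetical:

```csharp
using Microsoft.SharePoint;

// Illustrative sketch only: site URL, list, and field names are hypothetical.
using (SPSite site = new SPSite("http://dealersite"))
using (SPWeb web = site.OpenWeb())
{
    SPList catalogList = web.Lists["Catalog list"];
    string strCategory = "Cars";

    SPQuery newquery = new SPQuery();
    newquery.Query =
        "<Where><Eq><FieldRef Name='Category' /><Value Type='Choice'>" + strCategory +
        "</Value></Eq></Where><OrderBy><FieldRef Name='SubCategory' />" +
        "<FieldRef Name='SortOrder' /></OrderBy>";

    // Returns the catalog items for the category, ordered for display
    SPListItemCollection items = catalogList.GetItems(newquery);
}
```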

The lists are created at the site collection level. This allows a single set of lists to serve any catalog created within the site collection, which is particularly helpful when the website contains a large number of sub-sites with an equally large number of catalogs. Creating the lists at the site collection level ensures that all the catalog information is accessible from a single location.

On the front end there are a number of ways the data can be displayed. An ASP.NET Repeater control or a DataTable can be used to format the data as required.

SharePoint web parts can be used to create a template for the catalog page. To set up a catalog, all the user has to do is add the web part to a publishing page and populate the list. SharePoint lists are easy to set up and require only minimal technical expertise to maintain. Once the catalog has been set up, the lists are fairly easy to maintain: SharePoint provides an intuitive user interface, so non-IT personnel need only minimal training to work with them.

Returning to our example of building a catalog module for an auto dealer's internet site, imagine the requirement is to let customers request information on specific vehicles. In our solution we added a check box to each item in the catalog. The customer selects the vehicles he is interested in and clicks the "Submit" button. On the OnSubmit event of the command button, we store the selected items in a SharePoint list; let's call it the "Information Request" list.

The customer is then taken to an information request form with fields for contact information and options for either asking questions or requesting more information on the vehicles from the dealer. On clicking the submit button on the request form, the user is taken to a "Thank You" page, and the contact information is stored along with the selected items in the Information Request list. We also created an event receiver on the item update event of the Information Request list to send an email informing a sales representative of the customer's interest in the selected vehicles. The Information Request list acts as a fallback in case the email module fails.

An Introduction to PowerShell Scripting

So, you’re interested in PowerShell, but aren’t sure if you want to jump right in to writing your own cmdlets? No problem; there are plenty of built-in ones to get us started. Let’s take a look at a few different ways to use cmdlets: calling them directly from PowerShell, stringing multiple cmdlets together using the pipeline, and writing a simple script to automate these tasks.

Say you want to rename a file. Just open PowerShell and use the Rename-Item cmdlet. It takes two parameters: the path to the input file and the new name. You can explicitly specify the names of these parameters, pass them in the order PowerShell expects, or simply enter Rename-Item and wait for PowerShell to prompt for them.
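For example (the file names here are made up):

```powershell
# Explicit parameter names
Rename-Item -Path .\notes.txt -NewName meeting-notes.txt

# Positional parameters, in the order PowerShell expects
Rename-Item .\notes.txt meeting-notes.txt
```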


I know what you’re thinking: this is unnecessarily complicated. I agree, but what if we use PowerShell to rename all the files in a folder? The Get-ChildItem cmdlet will give us a list of all the items in a directory, but it also returns these items as objects that we can manipulate.

At the most basic level, this allows you to do things like output the contents of a directory to a text file or sort them by size. On the left side of the pipe, we have a cmdlet that produces a set of objects; on the right side, a cmdlet that operates on those objects.
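For instance (the directory path is illustrative):

```powershell
# Write the contents of a directory to a text file
Get-ChildItem C:\Projects | Out-File C:\Projects\contents.txt

# Sort the items in a directory by size, largest first
Get-ChildItem C:\Projects | Sort-Object Length -Descending
```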


We're still not doing anything particularly useful at this point, but if we pull some of this functionality into a script, we can string together several commands to create a reusable piece of functionality. Let's return to renaming files. Whether you're appending a descriptive title to photos or adding the name of a project you're working on, at some point you've probably had to rename all the files in a directory. With a few lines of PowerShell, we can easily automate this task, saving you precious minutes that you can now use to learn about other cmdlets.

First, you’ll need to create a .ps1 file; any text editor will work, but you’ll want to check out PowerGUI if you get serious about scripting.

We're going to pass in two parameters: path (the directory containing the files to be renamed) and prefix (the string to be added to the filename). From there, it's just a matter of stringing together the Get-ChildItem, ForEach-Object, and Rename-Item cmdlets. To run the script, simply navigate to its directory and use ".\" plus the script name.
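A minimal sketch of such a script (the file name RenamePrefix.ps1 is my own choice; the two parameters are the ones described above):

```powershell
# RenamePrefix.ps1 - prepend a prefix to every file in a directory
param(
    [string]$path,    # directory containing the files to rename
    [string]$prefix   # string to add to the front of each filename
)

Get-ChildItem $path |
    ForEach-Object { Rename-Item -Path $_.FullName -NewName ($prefix + $_.Name) }
```

Invoked, for example, as `.\RenamePrefix.ps1 -path C:\Photos -prefix "Vacation-"`.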


After running the script, you can use Get-ChildItem again to verify that the prefix has been added to the filenames. With a minimal amount of effort, you could modify this script to add suffixes, rename files in subdirectories, or change extensions. Once you learn how to write your own cmdlets, the possibilities will really be endless.

JavaScript Duplicating Generated HTML in a Content Editor

Say you’ve become the office expert on food trucks. There are a lot of hungry people in your office, and you just don’t have time to keep them all updated on mobile meatballs. Why not add a Twitter list widget to your intranet site so they can track the trucks themselves?

The Content Editor web part sounds like the perfect place to insert the code that Twitter generates for you. So, you add a web part to your page, open the HTML source editor, and paste in the widget script. You’ll get a warning message, but let’s ignore that for now:

Warning: The HTML source you entered might have been modified.

Everything looks great, but let’s clean it up a little.


Obviously, you’ll want to get rid of that Content Editor title above the widget by changing the chrome type to none, but as soon as you do, the widget duplicates itself:


As it turns out, SharePoint 2010 made some changes to the way the content editor handles JavaScript in order to combat cross-site scripting. Rather than being generated on the fly, the HTML is created and saved in the source each time the web part is updated. To prevent this problem, simply upload your script to a document library and point to it in the content editor’s content link setting.


The widget will stop duplicating itself, SharePoint will stop warning you about your HTML source, and you can get back to enjoying delicious, spaceship-delivered tamales.

Implementing Resource Files with SharePoint 2010

Working on a multilingual project, I had to come up with an approach for implementing resource values used across our UI solution on the SharePoint 2010 platform. Unlike SharePoint 2007, the new 2010 platform gives you more flexibility in implementing your resx files. I found two approaches that can be used to deploy your resource files using Visual Studio 2010 with SharePoint 2010.

Using the Empty Element

This is the approach I chose because it fit my requirements best. Using the Visual Studio templates, add a new 'Empty Element' to your project and call it 'Resources' (you can name it whatever you'd like):

In that folder, I added all my resx files that I will be using for my project:

For each resx file, make sure you update its properties. I have changed the deployment type to ‘AppGlobalResource’:

Using the mapped folder

For SharePoint-specific resource files, it makes more sense to use the predefined 'Resources' mapped folder in Visual Studio 2010.

Any resource files added to that mapped folder will be copied to the 14\Resources\ folder on the SharePoint server where the solution is deployed. If you wish to use the mapped-folder approach but would like your resx files copied to the web application folder as well, you will need to edit the manifest.xml file and add an ApplicationResourceFiles element to the <Solution> element, such as:


<ApplicationResourceFiles>
  <App_GlobalResourceFile Location="" />
  <App_GlobalResourceFile Location="SharePointUIResources\Application.zh-CN.resx" />
</ApplicationResourceFiles>


Custom Error Pages – SPWebApplication.SPCustomPage enumeration vs. Global.asax

With SharePoint 2010, it has definitely become easier to implement your own custom error page. Using the SPWebApplication.SPCustomPage enumeration, you can set your custom error page to override the default SharePoint one. Actually, you could do the same for your ‘AccessDenied’ or ‘Login’ pages. Here is the full list of out of box pages you could easily override and replace with your own custom ones:

Check out this blog for more information on how you can implement that SharePoint friendly approach.

For our project, we had to use different custom error pages depending on the error status code returned. In that case, overriding the default error page would not be enough; a proper alternative would probably be to create a custom HTTP module that redirects requests to your custom error pages based on their status codes. On our project, we decided to take a simpler approach: an error handler in the Global.asax file that catches all unhandled errors while processing a request (see here for an example). Here is how our code looked:

void Application_Error(object sender, EventArgs e)
{
    // Code that runs when an unhandled error occurs
    int currentStatusCode = System.Web.HttpContext.Current.Response.StatusCode;

    switch (currentStatusCode)
    {
        case 404:
            Response.Redirect(ErrorPage404_Url);
            break;
        case 500:
            Response.Redirect(ErrorPage500_Url);
            break;
        default:
            Response.Redirect(ErrorPage_Url);
            break;
    }
}

Start a SharePoint 2010 / FS4SP Crawl Remotely

With SharePoint 2010 and FAST Search For SharePoint 2010 (FS4SP), it’s easy to schedule crawls to run daily, hourly, or according to any other frequency. For most scenarios, scheduled crawls work perfectly.
Sometimes, though, it makes more sense to kick off a crawl based on an event. For example, perhaps your organization runs an Extract/Transform/Load (ETL) process to prepare data before it is crawled. If that ETL job finishes at an inconsistent time, a scheduled crawl may either run too early and miss some updated data, or run too late, leaving query results stale.

To fix that, we'd ideally kick off a new search crawl as soon as the ETL job finishes. With PowerShell, doing so is easy.
The PowerShell Script
$userName = "DOMAIN\serviceAccount"
$passWord = ConvertTo-SecureString "password" -Force -AsPlainText
$indexServerName = "serverName"

# Run the following commands on the remote computer
$credential = New-Object System.Management.Automation.PSCredential($userName, $passWord)
$session = New-PSSession $indexServerName -Authentication CredSSP -Credential $credential

Invoke-Command -Session $session -ScriptBlock {
    Add-PSSnapin Microsoft.SharePoint.PowerShell;

    $indexServiceAppName = "Search Service Index Application";
    $indexServiceApp = Get-SPServiceApplication -Name $indexServiceAppName;
    $contentSource = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $indexServiceApp;

    # Kick off a full crawl of each content source
    $contentSource | ForEach-Object { $_.StartCrawl() }
}

Remove-PSSession $session

How It Works

The above script uses PowerShell remoting to issue requests on a SharePoint indexing server.
The following variables need to be filled in:
  • $userName: The full username of an account with permissions to kick off a crawl.
  • $passWord: The account's password. Note that dollar signs inside double-quoted PowerShell strings need to be escaped with backticks (e.g. "Pa`$`$word").
  • $indexServerName: The name of a server running the index role.

An example usage is to run this script as part of a SQL Server Agent job or an SSIS step. The executable to call is "PowerShell.exe", with the above script saved in a .ps1 file passed as the command's argument.

Because SharePoint 2010 and FAST Search for SharePoint use the same service application architecture, this approach works for either system.

Leveraging PowerPivot functionality in Excel


PowerPivot builds on existing Excel features and extends their functionality. Those familiar with Excel know pivot tables; a traditional pivot table or chart is built from a single data source. PowerPivot allows users to extract data from various sources, and those sources can be linked to one another within a PowerPivot workbook. This means you can pull data from a broad range of sources and link them all in your workbook.

Slicers are also new in PowerPivot. They will be familiar territory for Excel users who have worked with filters. Slicers filter the data in pivot charts and tables. What makes slicers different from filters is the visual display of the data being filtered and the ability to drive multiple charts and pivot tables from a single slicer, provided the data is linked in PowerPivot. This makes them easier to use: a single point of control for filtering data from multiple sources.

We will walk through linking data sources within PowerPivot. In addition, we will link slicers through the slicer options so that a single slicer can slice all of our data sources.

Installing PowerPivot

Establishing a PowerPivot environment requires deploying the PowerPivot for Excel® 2010 add-in on workstations running Microsoft Office 2010 and PowerPivot for SharePoint on SharePoint application servers.

These tasks are not dependent on each other. Users with only a web connection don't need Excel 2010 on their machines to view PowerPivot workbooks deployed to the SharePoint server. However, users who need to create and deploy workbooks to the SharePoint infrastructure do need the Excel 2010 client.

Linking data within PowerPivot

My source in this example is an SSAS cube. I am pulling information for a hospital that wants to track the average time a surgeon spends on a procedure in a year, by specialty. I am calling this table AverageSurg_Time, with the columns listed below:

You can see the sample data which I pull and the columns below:


We pull a second data set called Vendor_Cost. The Vendor_Cost table will enable the hospital to view the total cost of a component purchased by the hospital for a particular year and procedure type. The Vendor_Cost table is pulled with the sample data and columns below:


Both tables have a field we can use to filter data on Procedure Year, but there is no link between the two tables, so filtering both by the same year would require a separate slicer for each source. Hence we will link the Procedure Year in both tables to another dimension that acts as an intermediate table joining the two sources. Both AverageSurg_Time and Vendor_Cost will link to this intermediate dimension: a ProcedureYear table with a single column listing all Year values, as shown below:

Procedure Year


We will create the links between the three tables in the PowerPivot design options. First, define a link between Vendor_Cost and the ProcedureYear table as shown:


Next, we define a relationship between the ProcedureYear table and the AverageSurg_Time table.


Linking Slicers to the Pivot Tables

In the next step we will create our pivot charts. I create the two simple bar graphs below, sourcing data from Vendor_Cost and AverageSurg_Time as two pivot charts:


The first one is Average Surg Time and the second is Vendor Cost. Above, I have added a slicer using Procedure Year from the ProcedureYear table as its source. At this point the data in the pivot tables is linked properly in the underlying source; however, we need to make the link in the Excel presentation layer as well, because the slicer I defined is not yet linked to the pivot tables in Excel. To link the slicer to the pivot tables, follow these steps:

1. Select the slicer you created above the pivot charts, and 'Slicer Tools' will appear in the menu listing.

2. Select the ‘Options’ selection.

3. Then click the ‘PivotTable Connections’ option:


4. Link your Pivot tables in the PivotTable Connection screen to the slicers which you want to define the relationship for.

In my example I have just two pivot tables to link. They are populated from the sheets listed as Data for PowerPivot Chart 1 and Data for PowerPivot Chart 7; the sheet is the actual Excel sheet that hosts the PowerPivot data used to build the pivot table.


5. Select 'OK' and your data will be linked in the presentation layer.

Now when you use the Procedure Year slicer, it will filter the data in the pivot tables defined in the slicer options.


The ability to link sources from a number of areas within the organization with relative simplicity is useful in many scenarios. In addition, the ease of using and linking slicers gives your users an easier way to consume content provided via Excel. This post illustrates just some of the ways we can leverage PowerPivot to extend our Excel workbooks; there are many other new features extending Excel that could be explored in more detail.

Using Custom Data to Securely Access Analysis Services Data via PerformancePoint

This method has been blogged about in various places on the web. Below is my take on a recent implementation, with some screenshots, helpful hints, and some things to look out for.

One approach to dynamically allowing or denying access to the data contained within an Analysis Services cube via PerformancePoint is Custom Data. Custom Data lets PerformancePoint pass the currently authenticated user name to Analysis Services in the CustomData field of the connection string.

There are many objects and steps that must be implemented in order to properly configure Custom Data including Data Warehouse objects, Analysis Services Data Source Views, Measure Groups, Dimensions, Roles and PerformancePoint data connection properties. Each step is outlined in detail below.

Data Warehouse Objects

A dimension table must be created containing the Active Directory login for each user who will be accessing the data.


A bridge, or “factless” fact table, must also be created to map the users in the dimension table to the appropriate data in the corresponding dimension table. For this example, we are using Accounts.


Analysis Services Objects

Data Source View

The newly created user dimension and security fact table need to be added to the data source view. The relationships between the objects should be represented as User –> Fact Security –> Account –> Fact Table as noted in the image below:


Measure Groups

A measure group must be created based on the Fact Security table with at least one measure defined. Both the Measure Group and Measure should be hidden as those browsing the cube will not need access to this information.



A dimension based on the user login table is also necessary. This dimension and corresponding attributes should also be hidden.


Add the dimension to the cube and ensure that the dimension usage is represented similar to this:



Create a new role with read access to the database, data source, cube, and dimensions. On the Dimension Data tab, under Advanced, specify the dynamic MDX expression that grants or denies access to the dimension data.


Note that the CustomData() MDX function in the above expression retrieves the authenticated user name from the PerformancePoint connection string.
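For illustration, a dynamic allowed-member set of this kind often follows the pattern below; the dimension, attribute, and measure group names here are hypothetical and must match your own model:

```mdx
Exists(
    [Account].[Account].[Account].Members,
    StrToSet("{[User].[Login].&[" + CustomData() + "]}"),
    "Fact Security"
)
```

The third argument names the security measure group, so the expression returns only the accounts that share a row in the bridge fact table with the current user's login.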

PerformancePoint Data Connection

In the data connection editor within Dashboard Designer, specify the role used to access the cube and select the option "Unattended Service Account and add authenticated user name in connection string".



Start SQL Server Profiler prior to accessing the dashboard. Once the page is accessed, SQL Server Profiler will show the unattended service account in the NTUserName column and the authenticated user's login in the Custom Data property.


So that’s that. A few things to point out are:

1. An interface and process should be created to allow appropriate users to map Active Directory logins to the dimension attributes they should have access to (a SharePoint list is a great candidate for this).

2. A security fact table must be created for each measure group that requires restricted access.

3. The dynamic attribute security must be created for each attribute that requires restricted access.

Happy securing!
