PointBridge Blogs, Author at Perficient Blogs
https://blogs.perficient.com/author/pointbridgeblogs/

Lync 2010 and LLDP – off by just a bit
https://blogs.perficient.com/2011/05/10/lync-2010-and-lldp-off-by-just-a-bit/
Tue, 10 May 2011 18:36:00 +0000
One of the very useful new features of Lync 2010 Phone Edition is support for LLDP (Link Layer Discovery Protocol) to allow phones to receive configuration information from the switches they plug into. If you have any experience with Cisco IP Telephony, you know that Cisco Discovery Protocol (CDP) is used for similar purposes to assign voice traffic to a separate VLAN. This works great, but really only with devices (like Cisco phones) that speak CDP.
LLDP is an open standard that has been evolving over the years, and I’m glad Microsoft has chosen to support it for Lync Phone Edition. Eventually LLDP will also be used by the Lync soft clients on PCs; thanks to a bug or some other limitation, it doesn’t quite work in that scenario yet. Nonetheless, I have found LLDP to work quite well with Lync Phone Edition and Cisco switches.
However – I have identified a bug, confirmed by Microsoft: you can’t use LLDP to set a VLAN higher than 512 with Lync Phone Edition.
When you use LLDP to assign a voice VLAN to phones, you set the configuration on the switch itself. Below is what you configure on a Cisco switch:

(global command)
lldp run

(interface-specific commands)
interface FastEthernet1/0/1
 switchport access vlan 600
 switchport mode access
 switchport voice vlan 700

That’s it. When an LLDP-compliant device plugs in, the Cisco switch tells it to use VLAN ID 700. Lync Phone Edition receives and processes the VLAN 700 tag, but it never actually applies that VLAN to the phone. VLANs 1 through 512 work fine; VLAN 513 and up do not.
Doing some reading on the LLDP spec, the VLAN ID is a 2-octet field. Two octets should yield a maximum of 65,535 for the VLAN ID, and even if only 12 of those bits were usable (to account for padding), that would still allow up to 4,096. A limit of 512 only makes sense if just 9 bits are being used. So there’s the issue: we need Lync Phone Edition to use more than 9 bits for the VLAN ID.
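As a quick sanity check of that bit math, here is a throwaway PowerShell one-liner (nothing Lync-specific, just powers of two):

# How many VLAN IDs fit in 9-, 12- and 16-bit fields
9, 12, 16 | ForEach-Object { "{0} bits allows {1} VLAN IDs" -f $_, [math]::Pow(2, $_) }
# 9 bits allows 512 VLAN IDs
# 12 bits allows 4096 VLAN IDs
# 16 bits allows 65536 VLAN IDs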
As I mentioned, this is a confirmed bug and should be addressed soon with a Cumulative Update.

Take Your Day Back With PowerPivot
https://blogs.perficient.com/2011/04/29/take-your-day-back-with-powerpivot/
Fri, 29 Apr 2011 15:23:45 +0000
Microsoft recently released a free Excel 2010 add-in that uses PowerPivot to analyze your schedule from your Outlook calendar. The add-in, named the Calendar Analytics Tool, has two main benefits. The first is that it is extremely useful (and cool). Using the familiar Excel pivot table functionality, including Slicers, you can graphically display all of the meetings for a given period of time by meeting topic, who was involved, and category, along with the details behind the high-level statistics. You now have evidence that the meeting about the upcoming meeting, and the debriefing meeting that follows it, are taking up too much of your valuable time.
The second, and possibly more valuable, benefit is that you now have a practical application to promote self-service BI and PowerPivot within your organization. Everyone understands the concepts around meetings and time, so this exercise can be a real eye opener to how one makes use of their schedule. It does not take a power user or super analyst to use this tool and understand its benefits. Rather, it will help the everyday user start to think about what else could be possible through self-service BI and analytics at the speed of thought.
[Screenshot: the Calendar Analytics workbook in Excel]

Technorati Tags: Business Intelligence Consulting, Self-Service BI
SharePoint as a Digital Marketing tool
https://blogs.perficient.com/2011/04/15/sharepoint-as-a-digital-marketing-tool/
Fri, 15 Apr 2011 18:35:00 +0000

In digital marketing, one of the most commonly used pieces of functionality is the catalog. I recently worked on a solution where SharePoint’s unique features helped build a flexible, robust information request/catalog module. One of our main goals was to ensure that the information in the catalogs could be maintained by non-IT staff.

We implemented a solution capable of handling multiple catalogs from a single list. A SharePoint list was created to store information about the items, such as product description, features, cost, etc. Let’s call this list the "Catalog list"; each item in it was classified by category, and each category represents a separate catalog. To put things in perspective, consider the example of building an e-commerce site for a high-end automobile dealer: possible categories would be Cars, Trucks and Motorcycles. We added an additional column called Sub Category. This allowed each item to be further classified into sub categories, giving us additional capability to organize information in the catalogs. Staying with our example of an e-commerce site for an automobile dealer, each category could be further classified by vehicle size, such as Compact, Standard and Full Size. Using the Sub Category is optional; it is used only where it helps better organize items displayed on the web page. In our solution, the items belonging to a certain sub category within a category are displayed in a single tab on the catalog web page.

The Catalog list contains the information about the item; brochures (PDF files) for the item are stored in a document library, and images showcasing the item are stored in an image library. Lookup columns create the relationships between these lists, and because a lookup column can hold multiple values, the result is effectively a many-to-many relationship. This allows more than one brochure or image to be associated with a particular item in the Catalog list, and it also means a brochure or image can be associated with more than one item.

A lookup column on the document library links documents to the corresponding item in the Catalog list. Similarly, a lookup column on the image library links one or more images to items in the Catalog list. Linking an item to more than one image makes it possible to have an image gallery on the catalog web page.

We used a CAML query to retrieve information from the Catalog list based on the category.

newquery.Query = "<Where><Eq><FieldRef Name='Category' /><Value Type='Choice'>" + strCategory + "</Value></Eq></Where><OrderBy><FieldRef Name='SubCategory' /><FieldRef Name='SortOrder' /></OrderBy>";

Brochures and images were also retrieved through CAML queries; the query retrieves items from the document or image library based on the value of the lookup column.

newquery.Query = "<Where><Eq><FieldRef Name='LookUpColumn' /><Value Type='Lookup'>" + strItemName + "</Value></Eq></Where>";
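For completeness, here is a minimal sketch of running the same kind of category query from PowerShell with the SharePoint server object model. The site URL is hypothetical; the list name, field names and the "Cars" category come from the example above:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$web  = Get-SPWeb "http://dealer.example.com"      # hypothetical site URL
$list = $web.Lists["Catalog list"]

$query = New-Object Microsoft.SharePoint.SPQuery
# Same CAML as above: filter on Category, order by SubCategory then SortOrder
$query.Query = "<Where><Eq><FieldRef Name='Category' /><Value Type='Choice'>Cars</Value></Eq></Where>" +
               "<OrderBy><FieldRef Name='SubCategory' /><FieldRef Name='SortOrder' /></OrderBy>"

$items = $list.GetItems($query)
$items | ForEach-Object { $_["Title"] }

$web.Dispose()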

The lists are created on the site collection level. This allows the use of a single set of lists for any catalog created within the site collection. This is particularly helpful when the website contains a large number of sub sites with an equally large number of catalog lists. Creating the lists on the site collection level ensures that all the catalog information is accessible from a single location.

On the front end, there are a number of ways to display the data in the required format; an ASP.NET Repeater control or a DataTable can be used to format it.

SharePoint web parts can be used to create a template for the catalog page. To set up a catalog, all the user has to do is add the web part to a publishing page and populate the list. SharePoint lists are easy to set up and require only minimal technical expertise to maintain, and once the catalog has been set up, the lists are fairly easy to keep up to date. SharePoint provides an intuitive user interface, so non-IT personnel need only minimal training to work with SharePoint lists.

Returning to our example of building a catalog module for an auto dealer’s internet site, imagine that the requirement was to let visitors request information on specific vehicles. In our solution we added a check box for each item in the catalog. The customer selects the vehicles he is interested in and clicks the "Submit" button. In the OnSubmit event of the command button, we store the selected items in a SharePoint list; let’s call it the "Information Request" list.

The customer is then taken to an information request form with fields for contact information and options for either asking questions or requesting more information on the vehicles from the dealer. On clicking the submit button on the request form, the user is taken to a "Thank You" page, and the contact information is stored along with the selected items in the Information Request list. We also created an event receiver on the item update event of the Information Request list to send an email notifying a sales representative of the customer’s interest in the selected vehicles. The Information Request list acts as a fallback in case the email module fails.

An Introduction to PowerShell Scripting
https://blogs.perficient.com/2011/04/06/an-introduction-to-powershell-scripting/
Thu, 07 Apr 2011 03:07:49 +0000

So, you’re interested in PowerShell, but aren’t sure if you want to jump right in to writing your own cmdlets? No problem; there are plenty of built-in ones to get us started. Let’s take a look at a few different ways to use cmdlets: calling them directly from PowerShell, stringing multiple cmdlets together using the pipeline, and writing a simple script to automate these tasks.

Say you want to rename a file. Just open PowerShell and use the Rename-Item cmdlet. It takes two parameters: the path to the input file and the new name. You can explicitly specify the names of these parameters, pass them positionally in the order PowerShell expects, or enter Rename-Item and let PowerShell prompt you for them.
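For example (the file names here are just made up):

Rename-Item -Path .\draft.txt -NewName final.txt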

[Screenshot: Rename-Item in the PowerShell console]

I know what you’re thinking: this is unnecessarily complicated. I agree, but what if we use PowerShell to rename all the files in a folder? The Get-ChildItem cmdlet will give us a list of all the items in a directory, but it also returns these items as objects that we can manipulate.

At the most basic level, this allows you to do things like output the contents of a directory to a text file or sort them by size. On the left side of the pipe, we have a cmdlet that produces a set of objects; on the right side, a cmdlet that operates on those objects.
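A couple of quick examples of that pattern (the paths are hypothetical):

# List the contents of a folder, largest files first
Get-ChildItem -Path C:\Projects | Sort-Object -Property Length -Descending

# Write the directory listing out to a text file
Get-ChildItem -Path C:\Projects | Out-File -FilePath C:\Temp\contents.txt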

[Screenshot: piping Get-ChildItem output to other cmdlets]

We’re still not doing anything super useful at this point, but if we pull some of this functionality into a script, we can string together several commands to create a reusable piece of functionality. Let’s return to renaming files. Whether you’re appending a descriptive title to photos or the name of a project you’re working on, at some point you’ve probably had to rename all the files in a directory. With a few lines of a PowerShell script, we can easily automate this task, saving you precious minutes that you can now use to learn about other cmdlets.

First, you’ll need to create a .ps1 file; any text editor will work, but you’ll want to check out PowerGUI if you get serious about scripting.

param($path, $prefix)

Get-ChildItem -Path $path |
ForEach-Object -Process {
    # Build the new name by prepending the prefix to the existing file name
    $newName = [System.String]::Format("{0}{1}", $prefix, $_.Name)
    Rename-Item -Path $_.FullName -NewName $newName
}

We’re going to pass in two parameters: path (the directory containing the files to be renamed) and prefix (the string to be added to the filename). From there, it’s just a matter of stringing together the Get-ChildItem, ForEach-Object, and Rename-Item cmdlets. To run the script, simply navigate to the directory and use ".\" plus the script name.
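For instance, if the script were saved as AddPrefix.ps1 (the file name and arguments here are just examples):

.\AddPrefix.ps1 -path "C:\Photos\Vacation" -prefix "Hawaii_"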

[Screenshot: running the rename script]

After running the script, you can use Get-ChildItem again to verify that the prefix has been added to the filenames. With a minimal amount of effort, you could modify this script to add suffixes, rename files in subdirectories, or change extensions. Once you learn how to write your own cmdlets, the possibilities will really be endless.

JavaScript Duplicating Generated HTML in a Content Editor
https://blogs.perficient.com/2011/03/31/javascript-duplicating-generated-html-in-a-content-editor/
Thu, 31 Mar 2011 20:49:32 +0000

Say you’ve become the office expert on food trucks. There are a lot of hungry people in your office, and you just don’t have time to keep them all updated on mobile meatballs. Why not add a Twitter list widget to your intranet site so they can track the trucks themselves?

The Content Editor web part sounds like the perfect place to insert the code that Twitter generates for you. So, you add a web part to your page, open the HTML source editor, and paste in the widget script. You’ll get a warning message, but let’s ignore that for now:

Warning: The HTML source you entered might have been modified.

Everything looks great, but let’s clean it up a little.

[Screenshot: the Twitter widget rendered in the Content Editor web part]

Obviously, you’ll want to get rid of that Content Editor title above the widget by changing the chrome type to none, but as soon as you do, the widget duplicates itself:

[Screenshot: the widget duplicated on the page]

As it turns out, SharePoint 2010 made some changes to the way the content editor handles JavaScript in order to combat cross-site scripting. Rather than being generated on the fly, the HTML is created and saved in the source each time the web part is updated. To prevent this problem, simply upload your script to a document library and point to it in the content editor’s content link setting.

[Screenshot: the Content Link setting in the Content Editor web part]

The widget will stop duplicating itself, SharePoint will stop warning you about your HTML source, and you can get back to enjoying delicious, spaceship-delivered tamales.

Implementing Resources files with SharePoint 2010
https://blogs.perficient.com/2011/03/30/implementing-resources-files-with-sharepoint-2010/
Wed, 30 Mar 2011 22:27:00 +0000
Working on a multilingual project, I had to come up with an approach for implementing resource values used across our UI solution on the SharePoint 2010 platform. Unlike SharePoint 2007, the new 2010 platform offers more flexibility in how you implement your resx files. I found there are two approaches that can be used in order to deploy your resource files using Visual Studio 2010 with SharePoint 2010.
Using the Empty Element

This is the approach I chose because it fit my requirements best. Using the Visual Studio templates, add a new ‘Empty Element’ to your project and call it ‘Resources’ (you can name it whatever you’d like):

In that folder, I added all my resx files that I will be using for my project:

For each resx file, make sure you update its properties. I have changed the deployment type to ‘AppGlobalResource’:

Using the mapped folder

For SharePoint-specific resource files, it makes more sense to use the predefined ‘Resources’ mapped folder in Visual Studio 2010.



Any resource files added to that mapped folder will be copied to the 14/Resources/ folder on the SharePoint server where the solution is deployed. If you wish to use the mapped-folder approach but would also like your resx files copied to the web application folder, you will need to edit the manifest.xml file and add the ApplicationResourceFiles element to the <Solution> element, such as:

<ApplicationResourceFiles>
  <App_GlobalResourceFile Location="SharePointUIResourcesApplication.fr-FR.resx" />
  <App_GlobalResourceFile Location="SharePointUIResourcesApplication.zh-CN.resx" />
</ApplicationResourceFiles>
Custom Error Pages – SPWebApplication.SPCustomPage enumeration vs. Global.asax
https://blogs.perficient.com/2011/03/30/custom-error-pages-spwebapplication-spcustompage-enumeration-vs-global-asax/
Wed, 30 Mar 2011 22:23:34 +0000
With SharePoint 2010, it has definitely become easier to implement your own custom error page. Using the SPWebApplication.SPCustomPage enumeration, you can set your custom error page to override the default SharePoint one, and you can do the same for your ‘Access Denied’ or ‘Login’ pages. Here is the full list of out-of-the-box pages you can easily override and replace with your own custom ones:



Check out this blog for more information on how you can implement that SharePoint friendly approach.
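For reference, here is a minimal PowerShell sketch of the enumeration-based approach; the web application URL and page path below are assumptions for illustration, not values from our project:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$webApp = Get-SPWebApplication "http://intranet.example.com"   # hypothetical web application

# SPCustomPage members include AccessDenied, Confirmation, Error, Login, RequestAccess, Signout and WebDeleted
$errorPage = [Microsoft.SharePoint.Administration.SPWebApplication+SPCustomPage]::Error
$webApp.UpdateMappedPage($errorPage, "/_layouts/CustomPages/Error.aspx")   # hypothetical page path
$webApp.Update()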

For our project, we had to use different custom error pages depending on the error status code returned. In that case, overriding the default error page is not enough; a proper alternative would be to create a custom HTTP module that redirects requests to your custom error pages based on their status code. On our project, we decided to take a simpler approach and created an error handler in the Global.asax file that catches all unhandled errors while processing a request (see here for an example). Here is how our code looked:


void Application_Error(object sender, EventArgs e)
{
    // Runs whenever an unhandled error occurs while processing a request
    int currentStatusCode = System.Web.HttpContext.Current.Response.StatusCode;
    Server.ClearError();

    // ErrorPage404_Url, ErrorPage500_Url and ErrorPage_Url hold the URLs of our custom error pages
    switch (currentStatusCode)
    {
        case 404:
            Response.Redirect(ErrorPage404_Url);
            break;
        case 500:
            Response.Redirect(ErrorPage500_Url);
            break;
        default:
            Response.Redirect(ErrorPage_Url);
            break;
    }
}
Start a SharePoint 2010 / FS4SP Crawl Remotely
https://blogs.perficient.com/2011/03/29/start-a-sharepoint-2010-fs4sp-crawl-remotely/
Wed, 30 Mar 2011 04:30:00 +0000
With SharePoint 2010 and FAST Search For SharePoint 2010 (FS4SP), it’s easy to schedule crawls to run daily, hourly, or according to any other frequency. For most scenarios, scheduled crawls work perfectly.
 
Sometimes it makes more sense to kick off a crawl based on an event. For example, perhaps your organization runs an Extract/Transform/Load process to prepare data before it is crawled. If that ETL job finishes at an inconsistent time, a scheduled crawl may either run too early and miss some updated data, or run too late and leave query results stale in the meantime.
 
To fix that, we’d ideally kick off a new search crawl as soon as the ETL job is done running. With PowerShell, doing so is easy.
 
The PowerShell Script
 
$userName = "DOMAIN\serviceAccount"
$passWord = ConvertTo-SecureString "password" -Force -AsPlainText
$indexServerName = "serverName"

# Run the following commands on the remote (index) server
$credential = New-Object System.Management.Automation.PSCredential($userName, $passWord)
$session = New-PSSession $indexServerName -Authentication CredSSP -Credential $credential

Invoke-Command -Session $session -ScriptBlock {
    Add-PSSnapin Microsoft.SharePoint.PowerShell

    $indexServiceAppName = "Search Service Index Application"
    $indexServiceApp = Get-SPServiceApplication -Name $indexServiceAppName

    # Assumes a single content source is defined; pass -Identity to target a specific one
    $contentSource = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $indexServiceApp
    $contentSource.StartFullCrawl()
}

How It Works

The above script uses PowerShell remoting to issue requests on a SharePoint indexing server.
 
The following variables need to be filled in:
  • $userName: The full username of an account with permissions to kick off a new search.
  • $passWord: The account’s password. Note that dollar signs need to be escaped with tick characters in PowerShell strings (e.g. “Pa`$`$word”).
  • $indexServerName: The name of a server running the index role.

An example usage is to run this script as part of a SQL job or SSIS step. The executable to call is “PowerShell.exe” with the above script saved in a “PS1” file as the command’s argument.
Because SharePoint 2010 and FAST Search for SharePoint use the same service application architecture, this approach works for either system.
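For instance, a SQL Agent CmdExec step could invoke it like this (the script path and file name are hypothetical):

PowerShell.exe -File "D:\Scripts\Start-SearchCrawl.ps1"

Note that CredSSP authentication must be enabled on both the calling machine and the index server (Enable-WSManCredSSP) before the New-PSSession call above will succeed.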

Leveraging PowerPivot functionality in Excel
https://blogs.perficient.com/2011/03/29/leveraging-powerpivot-functionality-in-excel/
Tue, 29 Mar 2011 14:57:00 +0000
Introduction
PowerPivot builds on existing Excel features and extends their functionality. Those familiar with Excel are aware of pivot tables; a standard Excel pivot table or chart is built from a single data source. PowerPivot allows users to pull data from various sources, and those sources can be linked to one another within a PowerPivot workbook. This lets you bring in data from a broad list of sources and link them all in your workbook.
Slicers are also new in PowerPivot. They are familiar territory for Excel users who have worked with filters, and they are useful for filtering the data in pivot charts and tables. What makes slicers different from filters is the visual display of the data being filtered and the ability to control multiple charts and pivot tables with a single slicer when that data is linked in PowerPivot. This makes them easier to use, because they provide a single point of control for filtering data from multiple sources.
We will walk through these features by linking data sources within PowerPivot. In addition, we will link slicers through the slicer options so that we can slice our data sources from one slicer.
Installing PowerPivot
Establishing a PowerPivot environment requires deploying the PowerPivot for Excel® 2010 add-in on workstations running Microsoft Office 2010 and PowerPivot for SharePoint on SharePoint application servers.
These tasks are not dependent on each other. Users with only a web connection don’t need Excel 2010 on their machines to view PowerPivot workbooks deployed to the SharePoint server. However, users who need to create and deploy workbooks to the SharePoint infrastructure do need the Excel 2010 client.
Linking data within PowerPivot
My source in this example is an SSAS cube. I am pulling information for a hospital that wants to track the average time a surgeon spends on a procedure in a year, by specialty. I am calling this table AverageSurg_Time; you can see the sample data and its columns below:
[Screenshot: sample data and columns for the AverageSurg_Time table]
We pull a second data set called Vendor_Cost. The Vendor_Cost table will enable a hospital to view the total cost of a component purchased by the hospital for a particular year and procedure type. The Vendor_Cost table is pulled with the sample data and columns below:
[Screenshot: sample data and columns for the Vendor_Cost table]
Both tables have a field we can use to filter on procedure year, but there is no link between the two tables to tell us which year we want to view both of them by. If we simply added a slicer, we would need a separate slicer for each source. Hence we will link the Procedure Year in both tables to another dimension that acts as our intermediate table for linking the two sources. Both AverageSurg_Time and Vendor_Cost will link to this intermediate dimension. This is a ProcedureYear table with a single column that lists all year values, as shown below:
Procedure Year
[Screenshot: the ProcedureYear table values]
We will create links between the three tables in the PowerPivot design options. First, define a relationship between Vendor_Cost and the ProcedureYear table as shown:
[Screenshot: relationship between Vendor_Cost and ProcedureYear]
Next, we define a relationship between the ProcedureYear table and the AverageSurg_Time table.
[Screenshot: relationship between ProcedureYear and AverageSurg_Time]
Linking Slicers to the Pivot Tables
In the next step we will create our pivot charts. I create two simple bar graphs, shown below, which source their data from Vendor_Cost and AverageSurg_Time as two pivot charts:
[Screenshot: the two pivot charts with the Procedure Year slicer]
The first one is Average Surg Time and the second is Vendor Cost. Above them, I have added a slicer that uses Procedure Year from the ProcedureYear table as its source. At this point the data in the pivot tables is linked properly in the underlying source; however, we also need to make the link in the Excel presentation layer, because the slicer I defined is not yet linked to the pivot tables in Excel. To link the slicer to the pivot tables, follow these steps:
1. Select the slicer you created above the pivot charts; ‘Slicer Tools’ will appear in the menu listing.
2. Select the ‘Options’ tab.
3. Then click the ‘PivotTable Connections’ option:
[Screenshot: the PivotTable Connections option]
4. Link your Pivot tables in the PivotTable Connection screen to the slicers which you want to define the relationship for.
In my example I have just two pivot tables to link, which are populated from the sheets listed as "Data for PowerPivot Chart 1" and "Data for PowerPivot Chart 7". The sheet is the actual sheet in Excel that hosts the PowerPivot data used to build the pivot table.
[Screenshot: the PivotTable Connections dialog]
5. Select ‘Ok’ and you will have your data linked in the Presentation layer.
Now when you select the Procedure Year slicer it will filter down the data in the Pivot tables which I define in the slicer option.
Conclusion
The ability to link sources from a number of areas within the organization with relative simplicity is useful in many scenarios. In addition, the ease of using and linking slicers makes it even easier for your users to consume content provided via Excel. This post illustrates just some of the ways we can leverage PowerPivot to extend our Excel workbooks; there are many other new features extending Excel that could be delved into in more detail.
Using Custom Data to Securely Access Analysis Services Data via PerformancePoint
https://blogs.perficient.com/2011/03/24/using-custom-data-to-securely-access-analysis-services-data-via-performancepoint/
Thu, 24 Mar 2011 15:46:30 +0000
This method has been blogged about in various places on the web. Below is my take on a recent implementation, with some screen shots, helpful hints and some things to look out for.
One approach to dynamically allowing or denying access to the data contained within an Analysis Services cube via PerformancePoint is through Custom Data. With this approach, PerformancePoint passes the currently authenticated user name in the custom data field of the Analysis Services connection string, where Analysis Services can evaluate it.
There are many objects and steps that must be implemented in order to properly configure Custom Data including Data Warehouse objects, Analysis Services Data Source Views, Measure Groups, Dimensions, Roles and PerformancePoint data connection properties. Each step is outlined in detail below.
Data Warehouse Objects
A dimension table must be created containing the Active Directory login for each user who will be accessing the data.
[Screenshot: user dimension table with Active Directory logins]
A bridge, or “factless” fact table, must also be created to map the users in the dimension table to the dimension members they are allowed to see. For this example, we are using Accounts.
[Screenshot: security bridge (factless fact) table]
Analysis Services Objects
Data Source View
The newly created user dimension and security fact table need to be added to the data source view. The relationships between the objects should be represented as User –> Fact Security –> Account –> Fact Table as noted in the image below:
[Screenshot: data source view relationships]
Measure Groups
A measure group must be created based on the Fact Security table with at least one measure defined. Both the Measure Group and Measure should be hidden as those browsing the cube will not need access to this information.
[Screenshot: hidden measure group based on the Fact Security table]
Dimensions
A dimension based on the user login table is also necessary. This dimension and corresponding attributes should also be hidden.
[Screenshot: hidden user dimension]
Add the dimension to the cube and ensure that the dimension usage is represented similar to this:
[Screenshot: dimension usage for the security dimension]
Roles
Create a new role with read access to the database, data source, cube and dimensions. On the dimension data tab, under advanced, specify the dynamic MDX expression to either grant or deny access to the dimension data.
[Screenshot: dynamic MDX expression on the role’s dimension data tab]
Note that the CustomData() MDX function in the above expression retrieves the authenticated user name passed in the PerformancePoint connection string.
PerformancePoint Data Connection
In the data connection editor within Dashboard Designer, the role used to access the cube as well as the option for “Unattended Service Account and add authenticated user name in connection string” must be specified.
[Screenshot: PerformancePoint data connection settings]
Verification
Start SQL Server Profiler prior to accessing the Dashboard. Once the page is accessed, in SQL Server Profiler, the unattended service account will be reflected in the NTUserName and the authenticated user login will be reflected in the Custom Data property.
[Screenshot: SQL Server Profiler showing the Custom Data property]
So that’s that. A few things to point out are:
1. An interface and process should be created to allow appropriate users to map active directory logins to dimension attributes for which they should have access (SharePoint list is a great candidate for this).
2. A security fact table must be created for each measure group that requires restricted access.
3. The dynamic attribute security must be created for each attribute that requires restricted access.
Happy securing!

Technorati Tags: Business Intelligence Consulting, Custom Data, PerformancePoint
InfoPath: Schema Validation Found Non-Data Type Errors
https://blogs.perficient.com/2011/03/23/infopath-schema-validation-found-non-data-type-errors/
Wed, 23 Mar 2011 18:14:21 +0000
We’ve all been there: you’re putting the finishing touches on a solution when suddenly, something breaks. You barely changed anything, but your previously functional code is now giving you a nondescript error that has half a dozen potential underlying causes.
I recently encountered just such a headache while building an InfoPath form. Once the form was working the way I wanted it to, I added in a few hidden fields purely for formatting purposes. The code compiled without any errors, but when I started testing the form using InfoPath Designer’s Preview feature, it threw an InvalidOperationException.
[Screenshot: InvalidOperationException thrown in InfoPath preview]
I went back to the code, commented out the string fields I had added, and everything was back to normal. The fields were present in designer, and the related code wasn’t structured any differently than that of the other fields, so, what was going wrong?
A quick search yielded this post from the InfoPath team explaining that there might be a nillable attribute on the node. It looked promising until I read that string fields are not among the affected data types.
Then I ran across this support article that referenced the error occurring with string fields. Great, right? Well, it would be, except the solution was making sure that my fields contained valid, nonblank values. I was explicitly setting the values of these fields in the properties, and I knew they were valid strings.
As a last resort, I deleted the fields from the code and InfoPath Designer and re-added them. To my surprise (and relief), my form started working again, but why? The fix was annoyingly simple: the order of the fields in InfoPath Designer needs to match the order in which you add them in the code.
[Screenshots: field order in InfoPath Designer and in the code]
Thus concludes “Obvious Mistakes I’ll Never Make Again: Edition 1.”
Mobile BI – Is It Realistic?
https://blogs.perficient.com/2011/03/21/mobile-bi-is-it-realistic/
Mon, 21 Mar 2011 21:45:45 +0000
Mobile BI is gaining a lot of attention with the huge increase in demand for mobile devices. This topic routinely comes up in conversations with our customers, who (along with everyone else) want to receive the same analytics and user experience they interact with on their desktop or laptop on their smart phone or tablet. They also expect that interaction without loss of functionality. No small task, but then again, the customer is always right. Beyond the much smaller screen real estate (3in vs. 22in and larger) and the technical hurdles (security, encryption, performance), is this even possible? How can a real-time dashboard with dozens of charts, KPIs and other information be rendered on a device the size of a deck of cards and still be meaningful?
Recently, I attended a presentation at the Illinois Technology Association where three of the market-leading mobile BI companies were evaluated: MicroStrategy, PushBI and RoamBI. While each has its strengths and weaknesses (the major one from each category is noted below), PushBI is the technology that seems to fit best with our offerings as a Gold Certified Microsoft Partner and our customers’ needs.

Vendor         | Pro                        | Con
MicroStrategy  | Device / OS agnostic       | Highest price point
PushBI         | Device / OS agnostic       | Least appealing graphics
RoamBI         | Nicest graphics/interface  | Only works on iOS

We are investing heavily in R&D around these products as well as developing some of our own so that we can extend the BI our customers are accustomed to from their desktop to their Mobile device.
Stay tuned for updates.

Technorati Tags: Business Intelligence Consulting