Kyle Burns, Author at Perficient Blogs
https://blogs.perficient.com/author/kburns/

Minimize the performance hit of reflows in a web application (Wed, 09 Oct 2013)
https://blogs.perficient.com/2013/10/09/minimize-the-performance-hit-of-reflows-in-a-web-application/

In what now feels like a previous life, I did far more work automating Microsoft Excel with Visual Basic for Applications, or from within Windows applications, than I would care to admit. In those days the rendering performance of Excel was horrible, and any time work was done that would modify the visible area of a spreadsheet in any significant way (such as populating a large number of rows), developers found that a significant performance gain could be achieved by ensuring that the document was not being displayed to the user while the operation was in progress. This caused Excel to bypass repainting the screen and freed up the processor to do more work that produced value.
How does this apply to application development today? I spend more time than I would care to admit using JavaScript and HTML to build web applications. These web applications often have rich interactive requirements that are arguably better suited to a native application, but business drivers cause us to build them to run in a variety of web browsers with a broad range of capabilities and performance characteristics. Regardless of the browser, a common characteristic is that one of the most expensive operations that can be triggered is a reflow, which occurs when a DOM element and its children and ancestors have their layout information recalculated. Reflows occur when the browser is resized, when an element's CSS class is changed, or when any property that affects the layout of an element is changed, and the calculations necessary to perform a reflow can require significant resources.
The expense of reflows, combined with the fact that the actions that cause them cannot always be avoided, should lead developers to ensure that their code triggers only the minimum number of reflows required to achieve the desired end result, which brings us back to the “do it offscreen” model used in VBA automation. There are many ways to achieve this with JavaScript, CSS, and the DOM, but two of the easiest are using a “working copy” and toggling visibility.
For the “working copy” method, you use the cloneNode method to create a copy of the element which requires modification, manipulate that element (which at this point is not part of the DOM), and then replace the original with the modified copy. This causes all the updates made to the element to only trigger a single reflow and is illustrated in the following code:

var elementToModify = document.getElementById("myElementToModify");
var workingCopy = elementToModify.cloneNode(true);
// update display-related elements
// append children
// etc....
elementToModify.parentNode.replaceChild(workingCopy, elementToModify);

For toggling visibility, you would make the element invisible by manipulating the display property, manipulate the element, and then set it back to visible. This would cause two reflows and is illustrated in the following code:

var elementToModify = document.getElementById("myElementToModify");
elementToModify.style.display = "none";
// update display-related elements
// append children
// etc....
elementToModify.style.display = "block";

Depending on how long the updates take, the technique of changing the visibility of the element might give the page a “jumpy” appearance, so in most situations I would opt for the “working copy” technique.
I hope this post has given you reason to think more about when your JavaScript will trigger a reflow of all or a portion of the DOM, and has provided some tools to help you minimize the performance impact of those actions.

5 Reasons to consider Azure for your next project (Mon, 08 Jul 2013)
https://blogs.perficient.com/2013/07/08/5-reasons-to-consider-azure-for-your-next-project/

I have to admit – I’m not always great at spotting whatever is going to be the next great thing.  This is evidenced by the fact that I bought an HD-DVD player because I was certain that format would beat Blu-ray, but more important to my professional life was the skepticism with which I originally approached cloud computing.  The first time that I was approached by an IT executive in my company and asked whether I thought cloud computing was right for us I was quick to respond with something to the effect of “No way.  I don’t want someone else keeping our apps and data where we can’t touch them and make sure they’re all right.”
I’ve learned a lot since that conversation and found Azure to have a wealth of advantages that made getting over my initial unease with things seeming out of my control well worth it.  In this post, I will discuss some of the compelling reasons to consider using Azure for your next project.

Give access to your app – not your network

The proliferation of connected consumer devices such as smart phones and tablets has led to business users wanting (or demanding) to use these devices to stay connected to their work, leaving network pros to figure out how to meet this demand without exposing critical pieces of network infrastructure to the world.  Along with hosted services such as Office 365, hosting targeted applications in Azure can allow businesses to make these applications available to connected devices while keeping the doors locked tight on their internal network.

You don’t have to go “all in”

There are still many reasons to leave all or part of an application on-premises.  These can range from licensing and regulatory restrictions to technology concerns.  From virtual networks to service bus relay, the Azure stack provides plenty of tools to build a hybrid solution that allows you to keep the right pieces in the right places.  The Microsoft Patterns and Practices team has put together some excellent Azure guidance, including the downloadable book “Building Hybrid Applications in the Cloud on Windows Azure”, that can help you understand how to build your hybrid solution.

Ease of provisioning server instances

One of the more frustrating parts of supporting an enterprise application is trying to make sure that there are “just enough” system resources available to the application.  If you have too much horsepower available, you’re wasting money on unnecessary hardware; if you have too little, you’re wasting money as the users of your application sit idle during yet another outage.  To make things worse, in many organizations once you jump through the hoops of proving that another server is necessary, you may find that it takes weeks or months to get on the schedule of the group that installs new servers.  With Azure applications, once the decision to increase capacity has been made, provisioning new servers is simply a matter of moving a slider in the dashboard or running a PowerShell script.  The ease with which server instances can be added and removed in Azure, combined with billing calculated on a per-minute basis, makes it realistic to manage a highly elastic environment that adds “just enough” capacity during peak load periods and scales back during lower utilization periods to maximize the return on IT investment.

Azure applications aren’t locked up in Microsoft’s data center

In the past year, Microsoft has made significant progress in removing reasons to fear being locked into a hosting agreement with them by making Azure available as a hostable solution for third party hosting providers and Enterprise customers.  Information on this offering can be found at the Microsoft Hosting page.

Many applications will “just work” hosted in Azure

Azure certainly has some great features that can and often should be leveraged to fully take advantage of its resilience and scale (such as queues and table storage), but many applications built to run on Windows Server in the past several years will run on the Azure fabric with no modifications because Azure is built with compatibility in mind.  For solutions that require a mix of components designed to run on Windows Server and those built for operating systems such as Linux, Azure’s Infrastructure as a Service (IaaS) offering can be used to spin up virtual machines running the necessary application server software.
In this post, I’ve touched on five compelling reasons to consider building your next project (or a piece of it) on Windows Azure.  There are many more reasons that could be added to this list, so I’d urge people trying to decide whether it’s time to move toward the cloud to invest a little bit of time to learn about what Azure brings to the table.

Per minute billing makes Azure an easier choice (Mon, 03 Jun 2013)
https://blogs.perficient.com/2013/06/03/per-minute-billing-makes-azure-an-easier-choice/

One of the keys to Microsoft’s strategy to make Azure a clear choice for customers of every size is the idea that you get massive scalability to meet demand, but only pay for the capacity you’re actually using at any given point in time. Until today, that statement has always come with an asterisk, because the reality was that you paid for a full hour of any capacity you used for any portion of that hour. That might seem insignificant, but for a customer running thousands of VMs, the difference in value received between 61 minutes of runtime and 120 minutes of runtime can be huge, while the cost has been the same.  Microsoft announced today in this blog post and through several newsletter channels that, effective today, Azure will be billed by the minute, making 61 minutes of runtime cost far less than the same time on competitors’ products.  These changes, along with Microsoft’s commitment to maintaining competitive pricing, make implementing solutions on Azure as attractive to the CFO as it is to the CIO.

Sitecore 7 Data Source Changes (Thu, 23 May 2013)
https://blogs.perficient.com/2013/05/23/sitecore-7-data-source-changes/

One of the things I have been caught complaining about from time to time with the way Sitecore handles assignment of the data source in presentation details is that the data source is stored as a string path. This makes it necessary to update every instance of presentation details that references an item whenever that item is moved in the content tree, and it prevents the reference from showing up in things like the broken links report or the inbound/outbound links for a content item. Sitecore 7 fixes this in a big way by storing the ItemID of the item selected as the data source. This allows the item to be tracked in the links database and removes the fragility of looking up by path. I guess the little things are what excite me, but this single change represents a major improvement in a product that was already strong.  For the full story on data source enhancements in Sitecore 7, check out John West’s blog entry Sitecore 7: Data Sources, Part 1: Enhancements.

If at First you Don’t Succeed: Using the Windows Azure Model (Wed, 01 May 2013)
https://blogs.perficient.com/2013/05/01/if-at-first-you-dont-succeed/

With the recent movement from preview to release of features such as Virtual Machines and Networking (along with a promise to match any price drops from Amazon’s AWS offering), Microsoft’s Windows Azure has been getting a lot of attention lately. One of the keys to the success of the product has been the cost at which Microsoft can deliver on a 99.95% monthly SLA. This type of uptime is accomplished not by preventing hardware failures, but by ensuring that they are accounted for and can be recovered from quickly and in an automated fashion.
The Microsoft data center is filled with commodity-grade servers that are expected to fail at some point, and a live copy of customers’ virtual machines and data is kept as a “warm standby” within the data center so that when a hardware failure occurs the application fabric can almost instantly reroute traffic. In addition to the copies kept within the data center, data is mirrored at another data center geographically separated from the customer’s primary data center (such as Eastern US being backed up by Western US) to mitigate the risk of catastrophic failure.
Microsoft’s planning for failure in the data center demonstrates an important concept that anyone who is designing or implementing connected systems should take to heart. Any time an application crosses boundaries (whether it’s crossing a process boundary on the same machine or passing through a maze of routers to a service on the other side of the world), there is great potential for failure that will eventually be realized. A resilient application will account for failures and, when appropriate, take steps to recover from the failure such as waiting a short time and retrying a database request when a deadlock is encountered.
One “out of the box” method for dealing with the potential for failure is built into the Windows Azure Queues (both Queue Storage and Service Bus Queue) in that messages retrieved from the queue but not subsequently deleted by the process that retrieved them will reappear on the queue after a set timeout. This is advantageous in relieving developers from having to implement their own fault tolerant behavior, but also needs to be a consideration if the processing performed on the message includes actions that should be performed exactly once (such as updating account balances in a financial system).
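
As a rough sketch of what this retrieve-process-delete pattern looks like against Queue storage (assuming the Azure Storage client library for .NET; the connection string, queue name, and ProcessOrder method below are illustrative placeholders rather than anything from the original post):

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class QueueWorker
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("<storage connection string>");
        var queue = account.CreateCloudQueueClient().GetQueueReference("orders");
        queue.CreateIfNotExists();

        // The retrieved message becomes invisible to other consumers for the
        // visibility timeout while this process works on it.
        CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(5));
        if (message == null) return; // nothing waiting to be processed

        // This work should be safe to repeat (idempotent), because a crash after
        // this line but before DeleteMessage means the message will reappear.
        ProcessOrder(message.AsString);

        // Only the explicit delete removes the message from the queue.
        queue.DeleteMessage(message);
    }

    static void ProcessOrder(string payload) { /* application-specific work */ }
}
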
In addition to the support built into the queuing products, the Microsoft Patterns and Practices team has produced the Transient Fault Handling Application Block (or “Topaz”), which is available at CodePlex or as a NuGet package. This block provides functionality to consistently implement retry strategies when consuming services that may experience transient faults. You can learn more about this application block on MSDN.
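
As a minimal illustration of the retry concept (this is a hand-rolled sketch, not the application block’s actual API, which adds configurable back-off strategies and detection of transient errors):

using System;
using System.Threading;

static class Retry
{
    public static T Execute<T>(Func<T> action, int maxAttempts = 3, int delayMilliseconds = 500)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return action();
            }
            catch (Exception)
            {
                // Give up once the attempt budget is exhausted; a bare "throw"
                // preserves the original exception and stack trace.
                if (attempt >= maxAttempts) throw;

                // In real code, only retry errors known to be transient
                // (timeouts, deadlocks, throttling) and consider exponential back-off.
                Thread.Sleep(delayMilliseconds);
            }
        }
    }
}

// Usage (CallFlakyService is a placeholder for your own call):
// var result = Retry.Execute(() => CallFlakyService());
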
With this post, I hope to have encouraged you to think about how you can design and implement systems that anticipate failure and are resilient to it. The next time you are getting ready to make a service call, ask yourself whether something other than logging an error and terminating should happen if the call times out.

Don’t miss the Global Windows Azure Bootcamp (Sat, 13 Apr 2013)
https://blogs.perficient.com/2013/04/13/dont-miss-the-global-windows-azure-bootcamp/

I know that in my last post I promised my next post would be focused on getting runtime information from your application, but I really felt that it was important to share an event that’s coming up in a couple of weeks (I really will get around to writing the post I intended next). April 27th marks the Global Windows Azure Boot Camp. This event is hosted at many locations around the world and is a full-day in depth look at Windows Azure led by experienced trainers. Two of the best things about the event include it being held on a Saturday (so “I can’t get out of work” is usually not an excuse) and it being free. All you need to do is register and show up with a machine running Visual Studio 2012 with the Azure tools installed. I plan to go to the event in Columbus, OH so maybe I’ll see some of you there. For more information and to register for an event near you, go to http://globalwindowsazure.azurewebsites.net/ and get signed up while there are still open spots.

Dealing with exceptions in .NET applications (Sat, 30 Mar 2013)
https://blogs.perficient.com/2013/03/30/dealing-with-exceptions-in-net-applications/

One of the more frustrating things to deal with when supporting a production application is “flying blind” when trying to determine what went wrong and how to fix it.  In today’s post, I will discuss some commonly used techniques for handling exceptions in .NET code that prevent visibility into the real source of error and demonstrate alternative approaches to preserve and enhance exception information.
The most basic tool for dealing with exceptions in a .NET application is the try/catch construct.  This construct allows the developer to specify that if an exception occurs within a specific block of code, control should be transferred to another block along with information about the exception that occurred.  The form of a try/catch block is as follows:

try
{
// some code that errors out
}
catch(Exception e)
{
// handle the exception
}

One of the most common misuses of a try/catch block that I have observed is the tendency of developers to “pass the exception up” to calling code in a manner similar to the following code:

try
{
    return operand1 / operand2;
}
catch(Exception e)
{
    throw new Exception(e.Message);
}

For the sake of this posting we will ignore the performance implications of generating new exceptions and focus solely on the impact that this has to problem determination and resolution. The problem with this approach is that the source line at which the exception is now reported is the line containing the throw statement and all contextual information (including the actual exception type) is lost. To put a little bit of context around it, consider the following code:

static void Main(string[] args)
{
    try
    {
        Console.WriteLine(Divide(0, 0));
    }
    catch (Exception e)
    {
        Console.WriteLine("Exception Caught:");
        Console.WriteLine(e.ToString());
    }
    Console.ReadLine();
}
static int Divide(int operand1, int operand2)
{
    try
    {
        return InnerDivide(operand1, operand2);
    }
    catch(Exception e)
    {
        throw new Exception(e.Message);
    }
}
static int InnerDivide(int operand1, int operand2)
{
    try
    {
        return operand1 / operand2;
    }
    catch(Exception e)
    {
        throw new Exception(e.Message);
    }
}

When executed, the code produces the following output:

Exception Caught:
System.Exception: Attempted to divide by zero.
   at ExceptionDemo.Program.Divide(Int32 operand1, Int32 operand2) in c:\...\Program.cs:line 35
   at ExceptionDemo.Program.Main(String[] args) in c:\...\Program.cs:line 15

Looking at the stack trace, you can see that the source of the exception is listed as the Divide method even though the actual error occurs within the InnerDivide method.  Also note that even though a very specific DivideByZeroException was originally thrown, a generic System.Exception is what is ultimately reported.  In a more complex system, this could have support engineers examining the wrong parts of the code, extending the time required to find the real problem.
Another variation of the same pattern occurs when the developer attempts to pass the original exception rather than creating a new one. This often results in code such as the following:

try
{
// code producing exception
}
catch(Exception e)
{
    throw e;
}

When the sample code above is updated to use this technique, it produces the following output:

Exception Caught:
System.DivideByZeroException: Attempted to divide by zero.
   at ExceptionDemo.Program.Divide(Int32 operand1, Int32 operand2) in c:\...\Program.cs:line 35
   at ExceptionDemo.Program.Main(String[] args) in c:\...\Program.cs:line 15

This is slightly better than the previous output in that the original exception type is preserved, but it still carries the disadvantage of losing the original source of the exception within the stack trace. This is because of the way that C# compiles to MSIL. MSIL contains two instructions for propagating exception information: “throw”, which is used for a new exception, and “rethrow”, which allows an existing exception to continue bubbling up through the call stack. When C# is compiled, passing an exception to the throw statement causes the MSIL “throw” instruction to be generated, while using the throw statement by itself on a line causes the “rethrow” instruction to be generated. The form of the try/catch statement that allows the original exception to remain intact is as follows:

try
{
// code that generates exception
}
catch(Exception e)
{
    throw;
}

When the sample program is updated to use this technique, the output is as follows:

Exception Caught:
System.DivideByZeroException: Attempted to divide by zero.
   at ExceptionDemo.Program.InnerDivide(Int32 operand1, Int32 operand2) in c:\...\Program.cs:line 47
   at ExceptionDemo.Program.Divide(Int32 operand1, Int32 operand2) in c:\...\Program.cs:line 35
   at ExceptionDemo.Program.Main(String[] args) in c:\...\Program.cs:line 15

Now the actual line at which the exception occurred is communicated, with all of its data intact.
It’s important to note that throughout the demonstration of the most common problem I see with how developers (mis)use exceptions, I have quietly introduced the second most common problem: exception handling code that adds no value. It’s common for organizations to have coding standards (documented or otherwise) that dictate that every method should use try/catch blocks. In these organizations we often find the “catch and throw” construct used in the code samples above. These blocks add lines of code and reduce readability without adding any value through activities such as logging or adding contextual data to the exception, and it would be much better not to have the try/catch blocks at all. The sample program used earlier produces identical output and is much easier to read when updated as follows:

static void Main(string[] args)
{
    try
    {
        Console.WriteLine(Divide(0, 0));
    }
    catch (Exception e)
    {
        Console.WriteLine("Exception Caught:");
        Console.WriteLine(e.ToString());
    }
    Console.ReadLine();
}
static int Divide(int operand1, int operand2)
{
    return InnerDivide(operand1, operand2);
}
static int InnerDivide(int operand1, int operand2)
{
    return operand1 / operand2;
}
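
When a catch block does add value, one approach is to wrap the original exception with contextual information while preserving it as the InnerException, so the original type and stack trace remain available. As a sketch (the exception type and message here are illustrative, not a prescription), the Divide method could be written as:

static int Divide(int operand1, int operand2)
{
    try
    {
        return InnerDivide(operand1, operand2);
    }
    catch (Exception e)
    {
        // Add context for whoever reads the log, but keep the original
        // exception (and its stack trace) reachable via InnerException.
        throw new InvalidOperationException(
            string.Format("Division failed for operands {0} and {1}", operand1, operand2),
            e);
    }
}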

There are many ways to make application failures in production easier to diagnose and resolve, and today we have discussed only how a change in the way try/catch blocks are used can help.  In my next posting, we’ll look at another way to provide runtime information to whoever is supporting your application.

Pagination in SQL Server 2012 (Tue, 26 Feb 2013)
https://blogs.perficient.com/2013/02/26/pagination-in-sql-server-2012/

When dealing with any product that has been around for a while, it’s not uncommon to observe a progression of common tasks becoming less clumsy as the tools mature. In this post, I take a look at how pagination has evolved with SQL Server resultsets in order to highlight TSQL features introduced last year in SQL Server 2012.
Pagination is used to provide a subset of a potentially large resultset to end users, both to prevent overwhelming the user with massive amounts of data and to conserve resources such as memory and network bandwidth.  For example, if I perform a Bing search for “pagination”, the results page reports over 4.3 million results – far more than I could ever hope to sift through to find something of value.  I am, however, presented with the first 6 results and the ability to move back and forth between pages of data in anticipation of finding something relevant to my query within at most a few pages.
In SQL Server versions prior to 2005, pagination was a clumsy task that usually involved populating a temporary table (or, for small sets of data after SQL Server 2000 was released, a table variable) with all the possible results for the query along with a sequential number recording each record’s position in the results.  Once this temporary table was populated, a query could select the rows that fell within a certain range of that order.  This got the job done of sending only a page’s worth of data between client and server, but it was quite cumbersome both to read and to write, and it carried the performance penalty of writing the results to a temp table (although some gains could be had by writing only the primary key and ordering value to the temp table and then joining back to the source table).
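
As a rough sketch of that older pattern (using the same Person.Person table and ordering as the examples below, so the specific table and column names are only illustrative):

-- Pre-2005 style: stage the ordered results in a temp table with an IDENTITY
-- column that records each row's position, then select only the requested page.
DECLARE @pageIndex INT;
DECLARE @pageSize INT;
SET @pageIndex = 3;
SET @pageSize = 10;

CREATE TABLE #pagedResults
(
    rowNumber INT IDENTITY(1,1),
    LastName NVARCHAR(50),
    MiddleName NVARCHAR(50),
    FirstName NVARCHAR(50)
);

INSERT INTO #pagedResults (LastName, MiddleName, FirstName)
SELECT p.LastName, p.MiddleName, p.FirstName
FROM Person.Person p
ORDER BY p.LastName, p.FirstName, p.MiddleName, p.BusinessEntityID;

SELECT LastName, MiddleName, FirstName
FROM #pagedResults
WHERE rowNumber > (@pageIndex * @pageSize)
  AND rowNumber <= ((@pageIndex + 1) * @pageSize);

DROP TABLE #pagedResults;
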
SQL Server 2005 introduced the ROW_NUMBER() function and made pagination a lot simpler.  This function assigns each row’s position in the resultset, allowing it to serve as a built-in replacement for the temporary table column used in previous versions.  The challenge with ROW_NUMBER() is that it cannot be used directly in a WHERE clause, making it necessary to use nested queries and producing a solution that is not always easy to read or maintain.  The following code demonstrates the ROW_NUMBER() solution to retrieve a page of data from the AdventureWorks2012 sample database:

DECLARE @pageIndex INT;
DECLARE @pageSize INT;
SET @pageIndex = 3;
SET @pageSize = 10;
SELECT r.LastName, r.MiddleName, r.FirstName
FROM
(
SELECT ROW_NUMBER() OVER (ORDER BY p.LastName, p.FirstName, p.MiddleName, p.BusinessEntityId) AS rowNumber
, p.LastName
, p.MiddleName
, p.FirstName
FROM Person.Person p
) AS r
WHERE r.rowNumber > (@pageIndex * @pageSize)
AND r.rowNumber < ((@pageIndex + 1) * @pageSize) + 1;

SQL Server 2012 greatly simplifies pagination by introducing syntax elements present in other popular database platforms, OFFSET and FETCH.  Simply put, these are used to direct that a certain number of rows in the resultset be skipped before returning a certain number of rows to the caller.  The whole idea of having to artificially generate a sequence which can be used to control paging and of having to essentially duplicate data projected in the inner and outer query goes away and leaves you with something that is succinct and easy to produce, understand, and maintain.  To produce the same results as in the previous example with the new syntax, your query would look like this:

DECLARE @pageIndex INT;
DECLARE @pageSize INT;
SET @pageIndex = 3;
SET @pageSize = 10;
SELECT p.LastName, p.MiddleName, p.FirstName
FROM Person.Person p
ORDER BY p.LastName, p.FirstName, p.MiddleName, p.BusinessEntityID
OFFSET (@pageIndex * @pageSize) ROWS
FETCH NEXT @pageSize ROWS ONLY;

In this post I have taken a brief look at the progression of TSQL syntax supporting pagination and demonstrated the new language features that make the task much more intuitive in SQL Server 2012.  In addition to conveying information about this specific improvement, hopefully I have also encouraged some thought about what else has changed.  A good place to start looking might be the aptly named article “What’s New in SQL Server 2012” on MSDN.

When “Keep Nothing” doesn’t mean “Keep Nothing” (Mon, 28 Jan 2013)
https://blogs.perficient.com/2013/01/28/when-keep-nothing-doesnt-mean-keep-nothing/

Being a frequent user of beta software, I often have to stumble through discovering things the hard way. Last fall when I moved from the Release Preview of Windows 8 to the final product on my Samsung Series 7 slate, I selected the “Keep Nothing” option during the install and was surprised to find when the installation completed that my 32GB hard drive was nearly full. A quick investigation revealed a folder named Windows.old into which my previous windows installation was copied – not quite “Keep Nothing”. I promptly issued the delete command and was returned a permission denied error because the folder and its files were marked as system files. After quite a bit of hassle including the removal of inherited file permissions and taking ownership of the folder and its files I was finally able to delete the files.
A few weeks ago we decided that it was finally time to clean up my wife’s computer after 4 years of use (it’s amazingly difficult to remember to uncheck “install other junk you don’t want” every time you install a new application), and my wife opted to take on the learning curve of moving to Windows 8. We inventoried the software she actually wanted on the machine, made sure all of her documents were backed up, and this weekend I ran the Windows 8 installer – again selecting “Keep Nothing”. When the install completed and I saw the 500GB Windows.old folder, I decided to see if there was an easier way to clean it up. What I found was that Disk Cleanup will remove the folder, but only if you specifically select the “Clean up system files” option. The procedure is documented in the Windows 8 support article “How to remove the Windows.old folder”.
Microsoft didn’t choose to maintain these files without good reason (there’s only so much “Microsoft made me lose my files” that one company can take), but it would have been nice to see some notification, even if it was something like “Keep Nothing (old files will remain available)”. The good news is that Microsoft made the folder much easier to clean up than the way I found myself the first time through, and that my wife is now another happy Windows 8 user.

Using Problem Steps Recorder to Help Communicate (Tue, 15 Jan 2013)
https://blogs.perficient.com/2013/01/15/using-problem-steps-recorder-to-help-communicate/

Most anyone who has worked with me for a while has probably heard me say “it’s not about what you know, but what you can prove”. I am often referring to automated testing, but this also applies to communicating the steps to reproduce a problem (repro steps) encountered by a tester or end user when it cannot be reproduced by the developer who is trying to isolate and remediate it. Far too often, developers and the person they are trying to support become frustrated with one another: the tester “knows” they are seeing the problem and giving the appropriate repro steps, and the developer “knows” they are faithfully following the steps given by the tester and cannot make the problem happen. In extreme cases, the frustration level escalates to the point where people “flip the bozo bit” and it’s nearly impossible to return to a productive exchange.
While excellent commercial tools exist to automate the capture of repro steps (such as the IntelliTrace tools that ship with some versions of Visual Studio), a simple and useful tool is already installed on the workstation of every tester running Windows 7. This tool is Microsoft’s “Problem Steps Recorder” and can be invoked by opening the Start menu and typing “PSR”. The Problem Steps Recorder application has a minimal interface that allows the user to start recording, stop recording, and add a comment while recording. When the “Add Comment” command is selected, the user is also able to highlight a region of the screen to give the comment more context. Once the user has stopped recording, a zip file is created containing an MHTML document that shows screenshots of the user’s interaction with applications while recording, as well as text describing those interactions and details of the programs with which the user interacted.
Problem Steps Recorder is by no means a perfect or full-featured tool, and one conspicuously (and, for security reasons, intentionally) missing feature is the ability to capture the actual text typed into input controls. Limitations aside, however, PSR is an excellent tool to use when you can’t justify bringing in a full-featured test tracking tool or just need help communicating a one-off problem. By capturing and recording the steps and system information, the potential for steps to be miscommunicated or misunderstood is significantly reduced and focus can remain on fixing the problem.

No Surprises (Mon, 07 Jan 2013)
https://blogs.perficient.com/2013/01/07/no-surprises/

When I’m not delving into Microsoft technologies, I enjoy making music. A Christmas gift that I received turned out to be an unlikely trigger to make me think about usability. That gift was a book with transcriptions of some popular sax solos, and it included Money by Pink Floyd. This song is a bit unusual in the popular music world in that it breaks from the typical pattern of having four beats in each measure (or at least an even number of beats) and instead uses five beats per measure. To (over)simplify for the reader who may have no musical knowledge, this means that if you listen to the “pulses” of most music there is a natural breaking of the rhythm into groupings of four, but in this song the natural grouping has five instead. What does this have to do with usability? The unusual time signature itself is not the significant part; what is significant is that it had never occurred to me that the time signature was unusual. The song was written and played in a way that “just works”, leaving the listener completely unaware of the technical details and simply enjoying the music.
When building software, we can follow this example by keeping in mind that great software should never surprise the end user. The fact that it works for a given purpose should not strike the user as the least bit impressive, no matter how much development effort was put into making it work.
A key factor in building software that does not surprise the end user is making sure that it fits well, from a user interface design perspective, into the ecosystem in which it will run. If building a traditional Windows desktop application, established UI paradigms for that environment should be followed. If building a modern app for Windows 8, a different set of guidelines applies. Microsoft provides a set of guidelines for building a consistent user experience in each of these environments. For traditional desktop applications built on Windows 7 and Windows Vista, you can reference the Windows User Experience Interaction Guidelines, and for Windows Store apps running on Windows 8 you can reference Designing UX for apps. Each of these resources covers topics such as navigation and the user interface elements appropriate for different application scenarios.
If you build Windows client applications, whether they run on Windows 7 or Windows 8, spend a little time familiarizing yourself or becoming re-acquainted with the appropriate experience guidelines and make an effort to ensure that your users are never surprised.

Sitecore Rendering Parameter Templates (Tue, 04 Dec 2012)
https://blogs.perficient.com/2012/12/03/sitecore-rendering-parameter-templates/

Sitecore and ASP.NET MVC both provide a development model that allows distinct boundaries to be created between data, application logic, and presentation logic.  The purpose of this post is to describe how these capabilities can be utilized within a Sitecore MVC solution to give content authors the ability to customize the appearance of presentation components.  The implementation involves configuration within Sitecore, creation of enablement code, and implementation of front-end display logic that honors the configured parameters.

Rendering Parameters

Sitecore provides for clear separation between the data to be displayed and the details of how it will be displayed on different devices through the use of Presentation Layout Details. The Presentation Layout Details allow the content author to configure which layouts and renderings will be utilized when presenting any given content item to the end user. In addition to specifying which layouts and renderings will be used, the individual presentation components can each be further configured using the Parameters field, which is designed to accept key/value pairs in a querystring format (such as “showBorder=true&margin=20px”) and passes these values to the rendering components as contextual data at runtime. This functionality serves as the basis for the overall solution for configurable views.

Rendering Parameter Templates

The benefit of being able to pass user-defined information to renderings at runtime is somewhat offset by the need for the content author to know which key/value pairs have meaning to the code performing the rendering and the range of expected values for each key. With a non-trivial number of renderings and parameters, this quickly becomes unmanageable to the point where rendering parameters seem like they would not be a viable solution. Sitecore addresses this drawback with Rendering Parameter Templates, which allow the template author to specify the named parameters that will appear in the dialog used to configure presentation component properties, along with the data type of each parameter. Because the parameter names and value data types are specified by the template author, the potential for errors due to incorrectly keyed information is greatly reduced, and the use of rendering parameters to control dynamic rendering behavior once again becomes feasible and even desirable.

Creating a Rendering Parameter Template

The key to creating a Rendering Parameter Template is to derive the template from the following template:

/sitecore/templates/System/Layout/Rendering Parameters/Standard Rendering Parameters

This inheritance ensures that the default parameter fields such as those related to the placeholder and caching options are present and then allows you to create your own fields just as with any other data template.
For good discoverability, Rendering Parameter Templates should be named in a manner that clearly identifies the purpose of their parameters and should be logically grouped within the content tree. Here are some examples of parameter templates named in an intuitive manner:

  • Add Margin Rendering Template Parameters
  • Show Separator Rendering Template Parameters
  • Add Margin and Separator Rendering Template Parameters

Assigning a Rendering Template to Presentation Components

Once a rendering parameters template has been created, it can be attached to a presentation component (e.g. rendering, layout, sublayout) by updating the “Parameters Template” field of the presentation component’s definition. It is important to note that the Parameters Template field is a droplink field which only allows for the selection of a single value. This is the reason that multiple inheritance must be used to create a composite template when options from more than one rendering parameters template are desired (as was done with the “Add Margin and Separator Rendering Template Parameters” template shown in the previous section).

Configuring a Presentation Component Instance

Having assigned a Rendering Parameters Template to a presentation component, content authors will now have the option to assign values to the fields defined in the Rendering Parameters Template by navigating to the Presentation Layout Details of a content item and then selecting the presentation component for which they want to configure rendering parameters. The content author will be presented with appropriate controls for the parameters’ data type to help ensure that they enter appropriate data.

Enablement Code

Accessing the configured rendering parameters at runtime does not require complex code, but the code is also non-trivial and duplication should be avoided where possible, so common code to be utilized by views should be added to your Sitecore MVC solution. The functionality can be exposed through two methods added to a helper class. The following sections describe the methods.

GetValueFromCurrentRenderingParameters

This method is responsible for interacting with the current rendering context to retrieve the value of the parameter with the specified name. If the parameter does not exist or the current rendering context is unavailable, the method returns null. It should be noted that this method works around a problem in Sitecore’s RenderingParameters constructor (at least in the 6.6 release preview) where certain parameter querystring values cause a “null key” exception; it bypasses the Parameters property and instead retrieves the field’s raw value from the Properties collection and parses it from there. The following code shows a sample implementation of this method:

public static string GetValueFromCurrentRenderingParameters(string parameterName)
{
  var rc = RenderingContext.CurrentOrNull;
  if (rc == null || rc.Rendering == null) return (string)null;
  var parametersAsString = rc.Rendering.Properties["Parameters"];
  var parameters = HttpUtility.ParseQueryString(parametersAsString);
  return parameters[parameterName];
}

 

GetItemsFromCurrentRenderingParameters

Rendering parameters use the same data types as any other data template, but the enhanced lookup functionality of field types such as droplink and multilist do not apply, so code that consumes these parameters must interact with Sitecore more directly to parse item ids and retrieve the associated items. This method abstracts the required steps and returns an array of Item objects represented by the field value. If no items are represented by the field value, an empty array is returned as opposed to null, so the return can always be safely used in enumeration (such as a foreach loop). The following code shows a sample implementation of this method:

public static Item[] GetItemsFromCurrentRenderingParameters(string parameterName)
{
  List<Item> result = new List<Item>();
  var rc = RenderingContext.CurrentOrNull;
  if (rc != null)
  {
    var itemIds = GetValueFromCurrentRenderingParameters(parameterName)
        ?? string.Empty;
    var db = rc.ContextItem.Database;
    var selectedItemIds = itemIds.Split('|');
    foreach (var itemId in selectedItemIds)
    {
      Guid id = Guid.Empty;
      if (Guid.TryParse(itemId, out id))
      {
        var found = db.GetItem(new ID(id));
        if (found != null)
        {
          result.Add(found);
        }
      }
    }
  }
  return result.ToArray();
}

 

Consuming Rendering Parameters from View Code

In addition to making rendering parameters available at runtime, views must be updated to do something with these parameters. The ideal case calls for retrieval of the parameter and either directly injecting the parameter value into markup or performing extremely simple logic to derive a value. The following snippet illustrates code that retrieves a multilist value from the rendering parameter and emits the names of the configured items as space separated CSS class names in the markup that is sent to the browser:

@{
  var marginCssClass = string.Empty;
  var marginItems = myHelper.GetItemsFromCurrentRenderingParameters("Margin");
  if(marginItems.Length > 0)
  {
    marginCssClass = string.Join(" ", marginItems.Select(i => i.Name).ToArray());
  }
}
<section class="@marginCssClass">

 
For an example of a parameter that requires simple logic to be performed, the following sample illustrates the use of a checkbox-typed parameter to determine whether a CSS class should be appended to the container for a rendering:

@{
  string showSeparator = myHelper.GetValueFromCurrentRenderingParameters("showSeparator");
  string containerClass = showSeparator == "1" ? "seperated" : string.Empty;
}
<section class="@containerClass">

 
As a general practice, using rendering parameters to control whether and which CSS classes are emitted is an effective way to help ensure that the parameters are being used purely for presentation logic, as is their purpose. Determining whether to omit specific sections of markup may also be a valid use, but it may indicate that a view should be refactored into two renderings that can be used together. In any case, care must be taken to avoid executing business logic based on rendering parameters, because doing so violates fundamental principles of an MVC-based approach.

Summary

In this posting, I’ve discussed how rendering parameters and rendering parameter templates in Sitecore can be utilized to provide configurable views. You should now have an understanding of how these parameters can be leveraged in a Sitecore MVC implementation to provide a flexible solution that maintains a clear segregation of concerns.
