Brian Ball, Author at Perficient Blogs (https://blogs.perficient.com/author/bball/)

NuGet.org Launches New Site Redesign (July 20, 2017)
https://blogs.perficient.com/2017/07/20/nuget-org-launches-new-site-redesign/

NuGet.org, the go-to repository for .NET developers to find and download packages, has just announced the launch of the site redesign they have been working on. There’s a link from the original site to use the preview site, or you can go directly to it by navigating to preview.nuget.org. According to the release announcement, the primary reasons for the redesign are:

  • Update the look and feel of the website (current site looks “dated”)
  • Bring accessibility up to modern-day standards
  • Give users more information on packages to help determine the legitimacy/viability of any given package

The release announcement also goes into some detail on features they currently have in the pipeline. Looks like some very exciting stuff!

Highlights from Microsoft Build Conference Day 1 Keynote (May 10, 2017)
https://blogs.perficient.com/2017/05/10/highlights-from-build-conference-day-1-keynote/

The Day 1 Keynote for Microsoft Build 2017 was a great mix of developer inspirations and new technology releases.
The inspiration pieces ranged from large Fortune 500 companies leveraging Azure all the way to a researcher building a watch-like device that helped a woman with Parkinson’s write her own name for the first time in years; I highly recommend watching the video to see what a difference a small device has made in this woman’s life.
Here’s a quick list of what new tools/features were announced at the keynote:
Azure Cloud Shell: From the Azure portal, you can now access a bash command line (by clicking the “>_” icon at the top of the screen). This is a full-featured bash interface that runs in a container on Azure; there is no need to install anything on the machine you are using to access the portal. It includes durable storage for files that will be available from wherever you access the portal. The shell comes with the Azure CLI already installed, your user logged in, and access to all subscriptions your current account has. In the future, Microsoft will add PowerShell as an option in addition to bash.
Azure Mobile Portal: Now you can access your Azure resources wherever you go. The app is available for both Android and iOS. As an added bonus, this app also includes the aforementioned Azure Cloud Shell!
Visual Studio Snapshot Debugging for Azure: This feature inside Visual Studio 2017 allows you to connect to running applications in Azure and debug them. Unlike standard debugging, a snapshot of the application state is used for debugging. This means a developer can debug issues in the production environment without impacting user experience!
Visual Studio for Mac: This is now in general availability status. Anyone who has a current license for Visual Studio now has Visual Studio for Mac as part of their license. Functionality includes first-class support for Azure integration.
Azure Database Migration Service: A new tool from Microsoft that lets you migrate from SQL Server to Azure SQL PaaS with virtually no downtime.
New Relational Database PaaS Offerings: Microsoft announced that both MySQL and PostgreSQL are now available as PaaS services. This means you can use these databases in Azure and not have to worry about high availability, security, and software updates. All existing applications and tools that work with MySQL and PostgreSQL will work with the new PaaS offerings with no change other than the connection string itself!
Azure Cosmos DB: This new offering supports multiple models, such as MongoDB, DocumentDB, etc. (relational databases don’t appear to be supported). The service allows turnkey scale-out for each of these models. The SLA not only includes an uptime guarantee but also response-time guarantees. With a few simple clicks in the Azure portal, you can scale out your databases to multiple regions, and even during scale-out, your databases remain available.
Visual Studio 2017 Azure Function Tooling: Tooling for Azure functions in VS2017 has been released. 

Visual Studio 2017 Released (March 7, 2017)
https://blogs.perficient.com/2017/03/07/visual-studio-2017-released/

Visual Studio 2017 is now RTM and is available for download. This new version of Visual Studio brings many new features:

  • New Installation Experience – Select only the components you need to use (e.g. web, windows, mobile development features)
  • Supports C# 7.0 language features
  • Debugging Improvements – Exception Helper, faster code navigation, run to click, etc.
  • Live Unit Testing – Get visual feedback on unit test coverage as well as pass/fail as you’re writing code
  • New Git features available directly from the IDE
  • New .NET Core tooling and MSBuild-based project files
  • Node.js tools
  • Xamarin 4.3
  • NuGet – Dependencies can now be defined directly in the project file instead of an external packages.config file
  • EditorConfig – Support for .editorconfig file that allows a team to define (at both a solution and project level) coding standards and how to treat standard violations (e.g. suggestion, warning, error, etc.)

That list doesn’t even scratch the surface of what is available (release notes here).
For today and tomorrow only (March 7–8, 2017), Microsoft is giving demonstrations of various features (streaming live). Check it out!

Adobe Experience Manager on Azure – Virtual Networks (January 18, 2017)
https://blogs.perficient.com/2017/01/18/adobe-experience-manager-on-azure-virtual-networks/

Since the announcement of the Microsoft/Adobe partnership at Ignite, we at Perficient have been working to make this a first-class offering for our clients. The internet wouldn’t exist if all the world’s computers were isolated from one another. Networking is the foundation of modern-day computing, so it’s only logical to start with this topic!

Virtual Networks

When deploying Adobe Experience Manager (AEM), the typical network configuration is to have the public-facing Apache dispatcher servers in a web DMZ and the AEM publish and author instances in a separate application DMZ. Network communication, from both the public internet and other DMZs, is controlled through one or more firewalls.
To realize the same type of security in Azure, the concepts are very similar, but the names may not be very familiar. In Azure, we start with a virtual network. All VMs that are running in Azure for a specific environment reside within this virtual network.
To replicate the functionality of DMZs in the on-premises world, a subnet is created for each zone. A network security group (NSG) is created and assigned to each subnet. NSGs allow both inbound and outbound rules to be set. By default, all inbound traffic that originates from the same virtual network is allowed. NSG rules work on a priority system: a rule with a higher priority (lower number) overrides a rule with a lower priority (higher number). The default rule allowing inbound traffic from the virtual network is a very low priority rule, so it is easy to create a higher priority rule to override it. We want to limit traffic as much as possible, so a rule is created to block all incoming traffic. Any exceptions we need to make to this rule are simply assigned a higher priority.
When creating VMs, there is an option to assign an NSG to the network interface; we typically do not do this as we rely on the NSG of the subnet to handle all of the rules. This allows us to manage all rules from a single screen for a particular subnet. If there are specific needs that require a VM to have special security rules, then a new NSG should be created for that VM.
Important note: NSG rules are applied when a VM starts. If an NSG rule changes after a VM has already started, it will not be aware of this rule until it has restarted.

IP Addresses

We assign private IP addresses within the virtual network statically. This ensures that each time a VM starts, it receives the same IP address, and that any configuration that relies on those IP addresses remains intact.
Public IP addresses are assigned to VMs that need to be connected to directly from the public internet. If the public IP address is dynamic, then we always assign a DNS name label so that no configuration changes are needed if/when the IP address changes. Pricing can be different for static vs dynamic IP addresses; the needs of the system determine which is the better option.

On-Premises Connectivity

Often there are reasons to connect machines that are remote/on-premises with VMs inside the Azure virtual network. This can be accomplished by assigning each VM a public IP address, but in most cases this is not desirable, as it can compromise the security of the virtual network.
Azure virtual networks support both point-to-site and site-to-site VPNs. Going either route allows you to connect to the virtual network and directly access the VMs in a highly secure fashion. This provides the access that is needed without compromising the security of the entire virtual network.

Unit Tests with .NET Core and VSTS (August 30, 2016)
https://blogs.perficient.com/2016/08/30/unit-test-with-net-core-and-vsts/

Executing unit tests as part of a Visual Studio Team Services (VSTS) build process has, until now, been the most pain-free process I’ve ever encountered. In fact, if you select the Visual Studio build template, there is already a step that handles executing and publishing the results of your unit tests. It assumes the convention that your DLL will have the word ‘test’ somewhere in the name. Even if it doesn’t, the step is configurable, so you can come up with your own wildcard expression that works for you.

I have taken for granted that executing these tests in VSTS just works. That is until I went to configure my first CI build for my .NET Core solution. I looked at the output of the build, and while it did not error, it also did not find any tests to run. It did not take me long to realize that, as-is, the step does not execute the unit tests.
Getting my unit tests to work locally wasn’t exactly a walk in the park, but after a couple of searches, I had it working in about 10 – 15 minutes (this is out of scope for this article). All in all, not too bad.
I have found two ways to get the unit tests to execute AND the results published as part of the build output.

Visual Studio Test

This is the same step that has worked for me in the past. I was able, through a handful of searches, to discover what settings needed to be changed in order for this step to continue working as expected.
The first setting is Test Assembly. Typically this points to a compiled DLL, but for .NET Core, it needs to point to the project.json files, so I updated the value to **\test\**\project.json. In my project, I am following the convention that all unit test projects are located under a “test” folder in the root of the solution; you may need to change yours. For example, if your test projects are mixed in with the non-test projects, a value of **\*Test*\project.json may be what you need.
The second setting is hidden in the “Advanced Execution Options” section that you have to expand; it is called “Other console options”. Ultimately, this step executes vstest.console.exe, so this is where any additional command line arguments are placed. In our case, we want to set the value to /UseVsixExtensions:true /logger:trx. This tells the application to use the VSIX extensions that are installed. Fortunately, at the very least, the xUnit VSIX is installed on the build server, so it worked for me.
Below are what my settings look like:
[Screenshot: Visual Studio Test step settings]

Script dotnet test

This is actually the first solution I found. If you decide not to use the Visual Studio Test step, there’s nothing to stop you from scripting it out yourself. I was able to get the unit tests running quickly using the following PowerShell command:

gci test\project.json -Recurse | % {dotnet test $_}

This iterates over all the project.json files located somewhere under the test folder, and executes dotnet test, which uses the settings in the project.json file to determine how to execute the unit tests. After checking in this file, updating the build definition, and executing the build, the unit tests were all executed; there was just one problem: the results did not show up in the build summary screen.
Not surprisingly, VSTS doesn’t parse the output to get all the unit test information (that would be too fragile and would likely break whenever the specific unit test provider decided to change its output). Fortunately, VSTS provides a Publish Test Results step. I updated my PowerShell command:

gci test\project.json -Recurse | % {dotnet test $_ -xml "$($_.DirectoryName)\Test-Results.xml"}

Each provider has a similar switch for specifying an output file. The above works for xUnit, but you may need to alter it to get it to work with the test suite you are using.
I executed the updated command and verified that it was writing the results to disk. Now I needed to configure the Publish Test Results step to read these files. This was fairly simple:
[Screenshot: Publish Test Results step settings]

Note: For the sake of brevity, I left out the fact that I started with NUnit. It was at this point that I switched to xUnit. As best I can tell, the Publish Test Results task’s NUnit format is for NUnit 2.0, but you must use NUnit 3.0 for .NET Core (which has a different results format). This left me at an impasse, and I had to switch over to xUnit. I hope the VSTS build servers get updated soon to work with the NUnit 3.0 format, but so far, my experience with xUnit has been good, so I will likely not switch back for this project.

With these changes in place, I ran the build, and not only did it execute the unit tests as expected, but the results were published successfully and are view-able from the build summary screen.
I like the “out of the box”-ness of the first solution (plus, anyone who comes behind me looking at the build definition will quickly realize the purpose of the steps and probably not look into the details); however, “fool me once, shame on you…” comes to mind, so I feel the second solution is less likely to give me issues in the future. Ultimately, I will likely go with the second solution as it has the fewest prerequisites for the build server. As long as the build server has dotnet.exe installed (which is a must, as it is needed to compile the code itself), then all the files needed to execute the unit tests will be downloaded as part of the dotnet restore command.
Hopefully this will save you, the reader, time and headache while you are setting up your builds!

“Database Platform Service” Error with MSDeploy and dbDACFx (August 9, 2016)
https://blogs.perficient.com/2016/08/09/database-platform-service-error-with-msdeploy-and-dbdacfx/

MSDeploy (a.k.a. Web Deploy) is mainly known as a technology to deploy web applications, but it is much more than that. It is a platform that can be used to deploy many different applications and application components. It accomplishes this by allowing custom providers to be written, and it ships with providers that cover a wide range of deployment needs.
One commonly used provider is dbDACFx. It is used to deploy data-tier applications (i.e. databases). The source can be an existing instance of a data-tier application or a .dacpac file, which is nothing more than a zip file that contains XML files (a very common approach these days: MS Office, NuGet, etc.).
While attempting to deploy a .dacpac file, I got the following error message:

Internal Error. The database platform service with type Microsoft.Data.Tools.Schema.Sql.Sql120DatabaseSchemaProvider is not valid. You must make sure the service is loaded, or you must provide the full type name of a valid database platform service.

Pretty cryptic. The only part that made sense was the “Sql120” piece, which is the version number given to SQL Server 2014. That made sense, as it was the target platform selected when the .dacpac file was created. The command line that caused the error message was something like:

msdeploy -verb:sync -source:dbDACFx="c:\myapp.dacpac" -dest:dbDACFx="Server=x;Database=y;Trusted_Connection=True"

I installed MSDeploy through the Web Platform Installer, and I made sure to select the option that included the bundled SQL support.
I verified that I was able to install the .dacpac through Management Studio on my local machine, which tells me the database server was fine, so I knew the issue had to be with the server that was hosting MSDeploy.
After many Google Bing searches, I learned that DACFx is the short name for “SQL Server Data-Tier App Framework”. Looking on the server with MSDeploy, I discovered that the SQL 2012 (a.k.a. Sql110) version was already installed. It all started coming together: MSDeploy was happy with the dbDACFx provider since it was installed, but when it went to locate the needed version of DACFx (specified by the .dacpac file), it could not find it and errored out.
A quick search in the Web Platform Installer turned up “Microsoft SQL Server Data-Tier Application Framework (DACFx) (June 2014)”. Bingo! I installed it, reran my MSDeploy command, and got a successful deployment of the .dacpac file!
For those who would rather not run the Web Platform Installer, you can go to the Microsoft site and download it directly (please search for the appropriate version you need).
If you want to see which versions are installed, go to Programs and Features on your machine and look for “Microsoft SQL Server 20xx Data-Tier App Framework”.

Ins and Outs of async and await (July 27, 2016)
https://blogs.perficient.com/2016/07/27/ins-and-outs-of-async-and-await/

C# 5.0 introduced two new keywords: async and await. These keywords have a very powerful effect that can be used without fully understanding them. This is a double-edged sword. It’s great to have a language feature that doesn’t take much time to implement, but at the same time, if there isn’t at least a basic understanding, then utilizing it correctly and troubleshooting any issues around it becomes very difficult.
It’s outside the scope of this blog post to do a deep dive into what exactly is happening when using these keywords, but hopefully you will have a better understanding, conceptually, so that you can leverage their functionality in an intelligent way.

async

The best way to describe the async keyword is that it is an implementation detail. This means that although it is used when declaring a method, it is not part of the method signature itself. Any code that invokes a method marked as async doesn’t know, nor does it care, that the method has been marked this way. async is really a way of indicating to the C# compiler that this method needs to be compiled in a different manner than a typical method.
To drive the point home that async is an implementation detail, think about it this way: async is not valid to use on a method when you are defining an interface. If you write a class to implement the interface, async is not included when you have Visual Studio stub out the interface implementation.
When should you mark a method as async? The answer is very simple: whenever the keyword await is used in the body of the method. If you mark a method as async and at least one await is not present, a compiler warning will be generated. The compiled code will still execute, but the method can be compiled in a more optimized way if the async keyword is removed.
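As a minimal sketch (the method name and URL here are purely illustrative, not from any particular project):

using System.Net.Http;
using System.Threading.Tasks;

public async Task<int> GetPageLengthAsync()
{
    using (var client = new HttpClient())
    {
        // The await below is what requires the async modifier on this method.
        string html = await client.GetStringAsync("https://example.com");
        return html.Length;
    }
}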

await

The next question is naturally “when do I use await?” await can be used with any awaitable type (duh!), but what is an awaitable type? Many developers would answer System.Threading.Tasks.Task or System.Threading.Tasks.Task<T>. While it is correct that those are two awaitable types, that is not the complete answer. The complete answer is a little more complicated: whether or not a type is awaitable is determined by duck typing.
The duck typing rules (simplified) are that the type you want to await must have a public method called GetAwaiter that takes no parameters and returns an object that meets the following criteria:

  • Implements either of the two interfaces: INotifyCompletion or ICriticalNotifyCompletion (both are located in the System.Runtime.CompilerServices namespace)
  • Has a property called IsCompleted that returns a boolean
  • Has a method called GetResult. The method is not required to return anything (i.e. can be void)

All that being said, Task and Task<T> pass the duck typing test, and are likely going to be the only awaitable types you’ll encounter.
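If you want to see the pattern in action, here is a quick illustration using Task<T> (normally the compiler generates these calls for you, so this is for demonstration only):

using System.Threading.Tasks;

Task<int> task = Task.FromResult(42);   // an already-completed task
var awaiter = task.GetAwaiter();        // GetAwaiter() is the duck-typed entry point
bool done = awaiter.IsCompleted;        // true, because the task has already completed
int result = awaiter.GetResult();       // 42; on a faulted task this would rethrow the exception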
By convention, all methods that return an awaitable type should have the suffix of Async at the end of the method name. This is not a requirement, but it does make it obvious to anyone who sees the method (either in code or via intellisense) that it returns an awaitable type.
Please note that at no point have I said “an awaitable method”. Methods are not awaitable, types are awaitable. The most common use of the await keyword will look something like this:

var value = await SomeMethodAsync();

While it looks like the SomeMethodAsync method is being awaited, it is not; the object it returns is being awaited.
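To make that distinction visible, the call and the await can be split apart (assuming here that SomeMethodAsync returns a Task<string>):

Task<string> pending = SomeMethodAsync();   // the asynchronous operation is already in flight
var value = await pending;                  // it is the returned Task<string> that gets awaited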

When and How to use await

Just because a method returns an awaitable type doesn’t mean the await keywords should be used. The await keyword should only be used when the result of the awaitable type is needed or if the completion of the asynchronous operation represented by the awaitable type is required before continuing execution of the current method. Often the result is needed right away, but this isn’t always true. Take the following example:

var value1 = await GetValueAsync();
var value2 = await GetAnotherValueAsync();

At first glance, there may not be anything wrong with the above code, but there are potential performance problems. Let’s say GetValueAsync() takes 4 seconds to complete and GetAnotherValueAsync() takes 3 seconds to complete. Executing both lines of code will take approximately 7 seconds, because the await keyword causes the current method to suspend execution, and the method will not resume executing until the asynchronous operation is complete. Assuming these two asynchronous methods are completely unrelated, there is no need to wait for the first one to complete before starting the second one, so the above code can be optimized by rewriting it this way:

var task1 = GetValueAsync();
var task2 = GetAnotherValueAsync();
await System.Threading.Tasks.Task.WhenAll(task1, task2);
var value1 = task1.Result;
var value2 = task2.Result;

There is a little more code, but in this case, the method will only take about 4 seconds to execute, because both asynchronous operations are happening in parallel. We could have awaited each task individually, and nothing would have been wrong with that, but Task has a static method, WhenAll, that makes it easy to combine multiple tasks into a single awaitable. To get the result of each asynchronous operation, read the Result property on the task. Please note: reading Result before the asynchronous operation completes will cause the current thread to block until the operation has completed, so only do this once you know the task has completed. Using the await keyword gives us this guarantee, so we know it is safe to read after the combined task has been awaited.
If the method you are writing doesn’t need the result of an asynchronous operation, nor does it need a guarantee that the operation is complete before it finishes execution, then there may be no reason to await the operation at all. One common reason to await an operation even if the result is not needed is that the method should handle exceptions that occur in the asynchronous operation. If this isn’t needed, then it’s entirely possible that the method may not need to await anything, and if that’s the case, the method itself may not need to be marked as async. Note: even if a method is not marked as async, it should still have the Async suffix if it returns an awaitable type.
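A small sketch of that last case (the repository field and Customer type are hypothetical):

// No await in the body, so no async modifier, but the Async suffix stays because the
// method returns an awaitable type. The caller can still await the returned task.
public Task SaveCustomerAsync(Customer customer)
{
    return _repository.SaveAsync(customer);
}

One trade-off to be aware of: because nothing is awaited here, an exception thrown by the underlying operation will surface wherever the returned task is eventually awaited, not inside this method.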
Hopefully this blog post has given you a better understanding of when and how to use async and await, and will save you time in the future when working with this language feature.

Leveraging In-Line User Defined Functions in SQL Server (January 28, 2014)
https://blogs.perficient.com/2014/01/28/leveraging-in-line-user-defined-functions-in-sql-server/

User-Defined Functions (UDFs) are great and have been part of SQL Server for a long time now. They come in two primary flavors: scalar and table-valued. The first returns a single value, whereas the second returns an entire result set. It’s not uncommon to want to reuse a block of SQL. Views are nice, but sometimes you want to be able to pass in parameters, which isn’t allowed with views.
UDFs do allow parameters, but in my experience they aren’t very efficient, especially if you are using the CROSS APPLY operator. When this happens, SQL Server usually must execute the function for each row in the result set. Even if your UDF is pretty lightweight, this can cause a lot of extra overhead for the server. This is because a UDF can be a multi-statement function, and therefore SQL Server has to deal with it on a row-by-row basis instead of working with an entire set of data at once (which is what SQL is optimized to do).

There is a way, however, to write a UDF such that SQL Server can continue to work with the entire set of data instead of executing the UDF row by row. This is known as an in-line UDF. A UDF is considered to be in-line if it is a table-valued UDF that contains a single SELECT statement. This means the UDF must meet all of the following criteria:

  • The body consists of a single query (subqueries are allowed)
  • No local variables
  • No BEGIN/END block
  • The body of the UDF is a single RETURN statement

The simplest way to break it down is this: a normal UDF is like a black box. SQL Server puts values in, and values come back out; it doesn’t really know what’s going on inside. But with an inline UDF, SQL Server no longer views it as a black box. Since it’s a single query, it can essentially “copy” and “paste” the contents of the UDF into the query that is calling it. This gives SQL Server a chance to optimize the overall query and potentially come up with a more efficient execution plan.
Here’s what a typical table-valued UDF looks like:

CREATE FUNCTION [schema].[name] (@parameter1 type, @parameter2 type…)
RETURNS @result TABLE (Column1 type, Column2 type…)
BEGIN
[Multiple SQL Statements – inserting values into @result table variable]
RETURN
END

Here’s what an inline UDF looks like:

CREATE FUNCTION [schema].[name] (@parameter1 type, @parameter2 type…)
RETURNS TABLE
AS
RETURN( [SQL Statement] )

Not all UDFs can be written as inline, but in my experience, a lot of them can be; it may require thinking outside the box a bit. Here are some tricks I’ve used to help me create an inline function:

  1. Instead of using IF statements, utilize the CASE statement
  2. If a function is a scalar-valued function, turn it into a table-valued function that returns a table with a single row containing only one column
  3. Instead of creating local variables and setting their values, do the work inside of the query itself
    • This one runs the risk of making your query harder to read, but if your goal is performance, then it may be worth it
SQL Server Change Tracking and Change Data Capture: A Primer (August 12, 2013)
https://blogs.perficient.com/2013/08/12/sql-server-change-tracking-and-change-data-capture-a-primer/

When SQL Server 2008 was released, two of the features added were Change Data Capture and Change Tracking. Both features are essentially designed to allow users to query a database and determine what data has changed. However, they go about it in two very different ways. Some of the differences are obvious while others are much more subtle.
The purpose of this article is to compare and contrast the two features and help the reader understand the pros and cons of each approach so that they may make an informed decision. This article isn’t meant to replace the MSDN section on these features; I highly recommend reading about them on MSDN. It is great for understanding how to implement each feature and how to take advantage of it, and while it does have a page devoted to explaining the differences, I feel it does not emphasize some of the differences as much as it should.

Tip: I will call out information that I feel is important like this. This means that the information is not found in the section of MSDN that I linked previously or that I feel it was not emphasized enough by MSDN.

Change Tracking (CT)
Change Tracking is described by MSDN as “light weight”, and while that is certainly true in some aspects, I do not feel it is entirely honest. The primary example given for using CT is synchronizing with other databases that are only occasionally online (think about an application on a tablet or laptop). In order to use CT, you must enable it at the database level, but that does not actually begin tracking changes; you must also enable it for each table whose changes you want to track.

Tip #1: CT is a synchronous process. If a table that is being monitored by CT has its data changed (Insert, Update, or Delete), the information about the change is recorded while it is happening, which causes a slight delay in processing the aforementioned Insert, Update, or Delete.

When you enable CT at the database level, you also configure the retention period. This is the period of time that SQL Server will keep information about data changes. Whatever you are using CT for, you should make sure that you have not passed the retention period since the last time you checked for changes. In order to access data about the changes, you use the CHANGETABLE function in your query.
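As a rough sketch of what consuming CT from application code might look like (the connection string, table name, and key column below are placeholders; the CHANGETABLE call and the SYS_CHANGE_OPERATION column are the SQL Server pieces):

using System.Data.SqlClient;

long lastSyncVersion = 0; // persisted from the previous synchronization
using (var connection = new SqlConnection("Server=.;Database=MyDb;Trusted_Connection=True"))
{
    connection.Open();
    var command = new SqlCommand(
        @"SELECT CT.Id, CT.SYS_CHANGE_OPERATION
          FROM CHANGETABLE(CHANGES dbo.Orders, @lastVersion) AS CT", connection);
    command.Parameters.AddWithValue("@lastVersion", lastSyncVersion);
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // SYS_CHANGE_OPERATION is 'I', 'U', or 'D' (Insert, Update, Delete)
            string operation = reader.GetString(1);
        }
    }
}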
You can gather the following information with CT:

  • Which rows have been Inserted
  • Which rows have been Updated
  • Which rows have been Deleted
  • (optional) Which column values have changed (but not the actual values)

The last one is optional because when you enable CT on a table, you also indicate if you want it to keep track of column changes. It also only applies to Updates (not Inserts or Deletes).

Tip #2: CT considers a column as having been changed if a value was assigned to the given column in the UPDATE statement (even if the value being assigned to the column is the same value already in that column).

CT should be thought of as a feature that aggregates changes. This means that if a row has been updated five times since you last checked, querying for changes will only show that the row has been updated; there isn’t any information on how many times. You can get this information, but it is a very manual, brute-force process. Another good example: if a row has been updated once and then deleted since you last checked, CT will report the row only as deleted. If getting a true history of all the changes is important, then Change Data Capture is most likely better suited to your needs.
Change Data Capture (CDC)
Change Data Capture is the more serious of the two options and is able to do more than CT can. While CT is available in the Standard Edition of SQL Server, CDC is not.
CDC also needs to be enabled at the database level before it can be used, and it needs to be enabled on a per-table basis as well. By default, all columns are involved in CDC, but that is configurable.
You can gather the following information with CDC:

  • Which rows have been Inserted
    • The values of the tracked columns at the time of Insert
  • Which rows have been Updated
    • The values both before and after of the tracked columns at the time of the Update
  • Which rows have been Deleted
    • The values of the tracked columns at the time of the Delete
  • The time at which the operation happened

Tip #3: A column is only considered to have changed if the value itself has changed (this pertains to Updates only). If you set a column to the same value it already contains, CDC does NOT consider that the data has changed. This means that if the values you set in an UPDATE operation are all the same as the current value, then there is no record of the UPDATE in CDC.
Tip #4: Remember, CDC can be configured to track a subset of columns in a table. Anything that happens to columns CDC is not configured to track is completely ignored.

Unlike CT, CDC is asynchronous. This means that when a table’s data is changing, CDC is not involved and does not slow down any transactions. CDC detects changes by reading the transaction log. When CDC is being utilized, a job is created in SQL Server Agent. When the job runs, it reads the log and records the changes. This also means that when you query for changes, you may not be getting all of the changes, as there is a delay between when the data is changed and when the data is available in CDC.

Tip #5: Asynchronous does NOT mean free! There is a common misconception that if something is asynchronous, it is “free”, meaning it doesn’t take any resources. This clearly is not true; all it means is that when data is changed, the session that caused the change does not have to wait for CDC to record the change. That recording happens at a later time.

CDC creates physical tables. These are the tables you query to determine which values have changed. In these tables, a single row is created for each INSERT and DELETE operation; however, two rows are created for each UPDATE operation: one row describes the data before the update, and the other describes the data after. Like CT, there is a retention period (the default is 3 days). Another SQL Server Agent job removes rows from the CDC tables that are older than the configured retention period.
Hopefully this article has been helpful in articulating some of the important, but not well-documented, differences between CDC and CT.

Exploring C#: Type inference and the var keyword (July 11, 2013)
https://blogs.perficient.com/2013/07/11/exploring-c-type-inference-and-the-var-keyword/

Type inference is a language feature that isn’t very well-known, yet it is used by virtually every developer. It was formalized in the C# language with C# 3.0. Let’s start with an example of some code:

List<string> myList = new List<string>();
myList.Add("hello");

The code isn’t too bad, but it can be cleaned up:

var myList = new List<string>();
myList.Add("hello");
Functionally the code is identical, just more terse. The var keyword was introduced in C# 3.0. It is not a variant type; it is a static type, but it is not determined until the code is compiled. When the above code is compiled, the compiler sees that there is a variable called ‘myList’, and it knows that it should be of type List<string>. It knows this because it looks at the type being assigned to the variable. You cannot do the following:

var myList;
myList = new List<string>();
The reason is that since the variable is being declared before it is assigned, the compiler has no idea what type the variable should be. In order to use the var keyword, the variable must be initialized on the same line as it is declared.
If you want the variable to be of type IList<string>, then you must specifically declare it that way; you can only use var if you want the variable type to be the same as the compile-time type being assigned to it.
“Compile time” type is the key phrase. Let’s say you’re invoking a method that has a return type of System.IO.Stream. If you store the result of the method call in a variable declared with var, then that variable will be of type System.IO.Stream, because that is the type the compiler is aware of. If the method returns a MemoryStream at runtime, the variable type will still be Stream, since the type of var is determined at compile time, not at runtime.
These assumptions that the compiler makes about types are known as type inference.
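To make that concrete, here is a small sketch (GetStream and Example are illustrative names, not from any real API):

using System.IO;

Stream GetStream()
{
    return new MemoryStream();
}

void Example()
{
    var s = GetStream();   // compile-time type of s is Stream, not MemoryStream
    // s.Capacity;         // would not compile: Capacity is defined on MemoryStream, not Stream
}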
Just because the var keyword is used doesn’t guarantee that your code is as simple as possible. I’ve actually seen the following line of code before:
var item = (IItem) new Item();
Item does implement the interface IItem, so there is nothing incorrect about the syntax, but it could actually be made simpler by removing the var keyword:
IItem item = new Item();
This isn’t type inference; it’s actually implicit type casting.
Lambdas as we know them would not be possible without type inference.
var lambda = x => x.Name
The above is not a valid statement. Why not? Because the compiler does not know what the type of x => x.Name is. It could either be shorthand for a function or shorthand for an expression tree.
Func<T,U> lambda = x => x.Name;
Expression<Func<T,U>> lambda = x => x.Name;
Both of the above are valid statements, but they cause the compiler to behave in very different ways. With the first statement, the compiler creates a method, just as if you had written a method in your class. With the second statement, the compiler analyzes the expression and breaks it down into a series of nested objects that can be inspected at runtime to interpret what the developer wants to do.
When you use a LINQ method such as Select, you simply use the syntax x => x.Name. The reason you don’t need to specify the type, unlike before, is type inference. If the method you are invoking expects an expression, the compiler infers that the syntax is for an expression and compiles it as such. If the method you are invoking expects a function, the compiler will compile it that way. You also don’t need to specify what x is or what the return type of the expression is, again because of type inference: the compiler knows the type of x because the method signature dictates it, and it doesn’t need the return type spelled out because it sees that you are returning the Name property from x.
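For example (Customer is a hypothetical type with a string Name property, and GetCustomers is a placeholder method returning IEnumerable<Customer>):

using System.Collections.Generic;
using System.Linq;

IEnumerable<Customer> customers = GetCustomers();
var names = customers.Select(c => c.Name);   // c is inferred as Customer; names is inferred as IEnumerable<string>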
Type inference is essentially there to make a bunch of safe assumptions about the types of various values in your code. Relying on the compiler to do this reduces the amount of code you have to write and makes your code much more readable. There is nothing stopping you from explicitly declaring all of your types, but it results in larger code files that aren’t as easy to read.

Where does SharePoint store MS Project column mappings? (July 10, 2013)
https://blogs.perficient.com/2013/07/10/where-does-sharepoint-store-ms-project-column-mappings/

I came across something interesting in SharePoint the other day. I knew that a user can open a SharePoint task list in MS Project, edit the items, then hit save, and the changes will be reflected in the task list. This can be done by going to the task list screen, clicking on the “List” ribbon and then clicking “Open with Project”.
I was working with a third-party product that uses the task list in SharePoint, but the list had been customized to include a good number of new columns that aren’t part of the default SharePoint task list. When editing in Project, using the aforementioned workflow, some of those custom columns were already included in the MS Project file. This wasn’t too surprising, as the list was associated with a template project file, and that template had the mappings already set. You can view the mappings from within MS Project by going to File -> Info and then clicking the “Map Fields” button, which is visible only if the MS Project file has already been synced with SharePoint.

What surprised me was that when I created a blank MS Project file, added some tasks, and then synced it with an existing SharePoint task list (which can be done using File -> Save As), all of the column mappings still came through. How is this possible? My tasks were synced back to SharePoint, which means the template MS Project file wasn’t used. I could tell, when I closed MS Project and then opened it again using the ribbon in SharePoint, that it was my file being used; it wasn’t merged with an existing file.
The only explanation I could come up with is that SharePoint was actually storing the column mappings somewhere, but where? It took me a while to find the answer. I couldn’t find it online, so I grabbed a copy of SharePoint Manager and started digging through the site structure.
It turns out that the column mappings are stored in the task list itself. If you go to the Root Folder of the list, then go to its Properties collection, there is a property called “WSSSyncFieldMap”, but only if the task list has been synchronized with an MS Project file first. The value of that property is an XML fragment that contains the column mappings between the SharePoint task list and MS Project.
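If you would rather read the value from code than with SharePoint Manager, a rough sketch using the server object model looks something like this (the site URL and list name are placeholders, and it assumes the list has been synced at least once so the property exists):

using System;
using Microsoft.SharePoint;

using (SPSite site = new SPSite("http://sharepoint/sites/projects"))
using (SPWeb web = site.OpenWeb())
{
    SPList taskList = web.Lists["Tasks"];
    // The mappings live in the root folder's property bag
    object fieldMap = taskList.RootFolder.Properties["WSSSyncFieldMap"];
    if (fieldMap != null)
    {
        Console.WriteLine(fieldMap);   // the XML fragment describing the column mappings
    }
}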
It’s worth noting that there are some column mappings that can’t be edited via MS Project; these mappings are assumed by SharePoint and MS Project, and they will not appear in the XML fragment.

Easy Email Templates with the Razor View Engine (May 1, 2013)
https://blogs.perficient.com/2013/05/01/easy-email-templates-with-the-razor-view-engine/

I was given a task recently to have an application generate and send an email. The requirements were straightforward: the email should contain some customer information along with a list of products. My first thought was to use StringBuilder to handle the task, but something about that bothered me. In 2013, am I still building markup by hand? That just felt wrong!
The email is a fairly simple one, so using StringBuilder wasn’t out of the question, but who wants to mess up their well-written, well-formatted C# code with a bunch of messy hard-coded HTML strings?
My next thought was to use XSLT. It’s pretty well suited to the task at hand: you create a template, then serialize your data and use it to transform the template into HTML. I have three issues with that:

  1. My data is coming from multiple sources, so I would need to create a single class that holds all my data
  2. I would have to re-learn XSLT
  3. Whoever comes behind me in the future would need to also know XSLT

The first issue is very minor, more of an annoyance really. The second issue is there because although I’ve used XSLT in the past, it has been years, and I would essentially need to re-learn it in order to use it. The third issue is that I don’t feel XSLT is a common skill among developers. You could go your entire development career and never touch it.
I remember reading somewhere online a while ago about using Razor outside the context of MVC. Razor is designed to generate HTML and gives the developer the entire power of the .NET Framework (types, methods, etc.). MVC has become extremely popular, and so has Razor, so there is a pretty good chance that the developer who comes behind me will know it. At the very least, they will know HTML and C#, which will make viewing/editing the template fairly straightforward.
What is Razor?
Razor is a view engine developed to be used with ASP.NET MVC. When ASP.NET MVC was first introduced, it was packaged with the WebForms view engine (since there is no code-behind, it ends up looking a lot like classic ASP). One of the tenets of MVC is separation of concerns. Because of this, the MVC team made sure that if someone wanted to write their own view engine, they could do so and plug it into the MVC framework. Some time later, Microsoft released its own alternate view engine called “Razor”. One of the primary goals for Razor was to have a much cleaner, easier-to-type syntax. Here’s a quick comparison of Razor and WebForms:
WebForms:
<a href="<%= Model.NavigateUrl %>"><%: Model.Name %></a>
Razor
<a href="@Model.NavigateUrl">@Model.Name</a>
Not only is the Razor syntax easier to read, but if you think about actually typing the two lines out, it is also much easier to write.
How does Razor work?
It’s important to understand how Razor actually works. I’ll walk through the engine from the perspective of an MVC application (its typical use).
Once the controller requests that a view be rendered, MVC determines that the view is a Razor view and invokes the Razor view engine. The engine reads the template and converts it into a C# class (or VB.NET, depending on the file extension). The code is then compiled into an assembly, and the assembly is cached. The generated class is then invoked, and the output is written to the response stream. Anything in the template that is indicated as code (using the @ prefix) is left as code to be executed, and everything else is turned into literal strings. If the file is changed, the Razor view engine is notified, the cached assembly is dumped, and the next time the view is requested, a new assembly is generated.
So we’re going to do all that?
No. All of those steps seemed like overkill, and when I searched online, I didn’t see any easy way to implement them. Instead, we’re going to precompile the view into our application and use it like we would any other standard C# class. The upside to this approach is that there isn’t any initial runtime cost like there is with MVC. The downside is that if you want to change the template, you have to recompile your code and deploy. That may sound like a lot of work for a simple change, but assuming you are making the change in a production environment, do you really want to be able to make a change that easily (and possibly introduce bugs)? Not I!
That’s nice… so how do I create an email using Razor?
Now we are getting to the meat of the subject. For this post I will use a console application. It’s the simplest type of application and therefore likely requires the most setup. If we can get this to work in a console application, we can get it to work anywhere.
Since we will need to generate the C# code from the template up front, we will want to do it in Visual Studio. There is a Visual Studio extension for this: go to Tools -> Extensions and Updates… In the dialog, click “Online” on the left-hand side and, at the top, search for “Razor Generator”. The extension you want is the one shown below. Install it and restart Visual Studio if needed. If you don’t already have NuGet, you will need to download that extension as well.
[Screenshot: Razor Generator extension in the Extensions and Updates dialog]
Create your console application (I’m using Visual Studio 2012 and the 4.0 Framework – just because).
In your console application, right-click on References, and select “Manage NuGet packages…” Search for “RazorGenerator” (one word). The package you will want to download is called RazorGenerator.Templating.
[Screenshot: RazorGenerator.Templating package in the NuGet package manager]
The RazorGenerator.Mvc package is for precompiling Razor views in an MVC application, which is outside the scope of this post.
When the package installs, it creates a file in your project called “SampleTemplate.cshtml”. Feel free to look at this file, but I usually delete it and start writing my own from scratch.
To create your own template, you will need to add a Razor file. Unless you are in an MVC project, you won’t be able to add one directly; I add a class file instead, but make sure it ends with the .cshtml extension.
Delete any text that is in your newly created file. In Solution Explorer, find your file, right-click and go to “Properties”. In the properties window, make sure it looks like this:
[Screenshot: .cshtml file properties window (Custom Tool set to RazorGenerator; file not compiled or copied to output)]
When you created the file, if you selected “Class”, the only change you will need to make is to type “RazorGenerator” in the “Custom Tool” property. These settings tell Visual Studio not to attempt to compile the file and not to copy it to the output directory. They also tell Visual Studio to run the RazorGenerator custom tool (added by the Visual Studio extension) whenever the file is updated (or the user right-clicks the file and selects “Run Custom Tool”).
The custom tool will generate the C# class file and you can view it by expanding the .cshtml file and seeing a generated file underneath. Do not manually edit the generated file! Any edits you make will be overwritten the next time the custom tool runs.
If you look at the generated file right away, it will have a message for you to help you understand why it can’t generate a code file.
The reason the NuGet package is needed is that it adds a reference in your project to RazorGenerator.Templating.dll; the class generated by the custom tool inherits from a class located in this assembly. If you don’t want to install the NuGet package, you can go to the project site (linked at the bottom of this post), download the DLL, and manually add it to your project.
The Razor Generator extension has several providers that it can use to create the class file, but at this time it doesn’t know which one to use. Unlike MVC, your .cshtml file will need a comment on the first line; it will look like this:

@* Generator: Template *@

This line tells Razor Generator to use the template provider. In Razor, @* this is a comment *@, so the line @* Generator: Template *@ is ignored by the Razor view engine, but the custom tool will read it and know which provider to use.
Once that line is in place, save the .cshtml file and now look at the generated file, it is a C# class – though it doesn’t actually do much.
A few quick notes for those of you familiar with Razor in MVC:

  • You will not have access to any of the various helpers (Html, Url, Ajax, etc.)
  • None of the properties you are used to exist (Model, ViewBag, etc.)
    • The intellisense will include these properties, but that is because Visual Studio assumes .cshtml will be used in the context of MVC. The good news is that if you use one of these that doesn’t exist, it will generate a compiler error and not a runtime error, so you will be notified sooner rather than later.

You can create a web page like you normally would (add an html tag, head and body tag, etc). Save and compile.
In the Main method of your console application, create a new instance of your template (the name of the .cshtml file is the class name). The class will have a method called TransformText. You can invoke that, and it will render the template. If you were to run the application now, you would see your markup returned from the method.
At this time, this isn’t very exciting. You could have created a static file and included it with your project and you would have the same results. Let’s add some data!
In the template file you can declare fields, methods, properties, etc. You must do this in the special @functions{ .. } block. I usually put this near the top – after the generator comment, but before the markup begins.
[Screenshot: @functions block at the top of the template]
Notice that you can also put in using statements. This is useful so you don’t have to specify a namespace for all the different types you will be using.
My template will create a list of customers in a nice table, so I’ve added a property to my template called Customers. You can add fields as well as methods in this area. Anything declared in this block will be scoped to the class level. Save your changes in order for them to be reflected in the generated file.
Now, if you go back to the Main method in the console application, you will see that your template has a property on it called Customers. This is how you pass data into your template. In your template, you can include a line like @Customers.Count(), and when you run the template, it will output the number of customers in the IEnumerable<>.
Note: By default, the generator includes the following using statements:
[Screenshot: default using statements included in the generated class]
This is why I am able to use IEnumerable<> without typing out the namespace, and why I can invoke a LINQ method from my Customers property.
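Putting it all together, the Main method ends up looking something like this (the template class name, the Customer type, and the SMTP details are placeholders for whatever exists in your project; only TransformText and the Customers property come from the steps above):

using System.Collections.Generic;
using System.Net.Mail;

static void Main(string[] args)
{
    var template = new CustomerEmailTemplate
    {
        Customers = new List<Customer>
        {
            new Customer { Name = "Jane Doe" },
            new Customer { Name = "John Smith" }
        }
    };

    string html = template.TransformText();   // renders the template with the data applied

    var message = new MailMessage("noreply@example.com", "someone@example.com")
    {
        Subject = "Customer Report",
        Body = html,
        IsBodyHtml = true
    };

    using (var client = new SmtpClient("smtp.example.com"))
    {
        client.Send(message);
    }
}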
Layouts
Layouts are supported by the Template generator, but sections are not. This means you can have a layout, but you cannot call RenderSection; you can only call RenderBody.
Summary
With the power of the Razor view engine, you will be able to create templates for emails or any other need you may have. Does the project still have nasty hard-coded markup, and does it use some form of string concatenation under the covers? Sure, but it’s been abstracted away so that we don’t need to write it manually nor actually see it (unless we really want to), and that is more than good enough for me!
In order to edit the .cshtml file, the extension will need to be installed, but anyone can compile and deploy the template without any special tools (e.g. automated build machines). If you do not have the extension installed and you edit a .cshtml file that uses the generator, no error will be shown; the file will be saved, but the generated class file will not be modified.
Related Links
