Getting Started with Chef Habitat on Windows - Perficient Blogs
Getting Started with Chef Habitat on Windows

Overview

This is the second post in our series on Chef Habitat. For an introduction to Habitat, please refer to our initial post. In this writeup, we will be looking closely at Habitat in a Windows context. There are a few differences between Habitat on Windows versus Linux or Mac which we will point out. Additionally, we will take you through steps to package your own Windows applications in Habitat.

Habitat on Windows uses PowerShell instead of Linux shell scripting to build packages and perform package installation. Dependent packages must run on Windows or be cross-platform, such as .NET Core or Java. PowerShell Core is used for the Habitat Studio on Windows, providing a cleanroom for working with packages. You can also run Habitat Studio in a Windows Server 2016 container for additional isolation. Along with modern Windows applications, Habitat supports build, packaging, and deployment of legacy Windows applications. See this post from Chef for additional information and this one for legacy Windows applications.

Chef has created packages for PowerShell Core, Visual Studio Build Tools, 7-Zip, WIX, .NET Core, and Visual C++ redistributable binaries that can be used as dependencies in your Habitat Plan to create custom application packages. Once a HART package exists, you can deploy it directly to physical or virtual servers, or export the package for target runtimes such as Docker, Kubernetes, or Cloud Foundry. HART packages can also be uploaded to a public or private Builder for archival and future deployments.

Our sample application, Contoso University, is written in ASP.NET and based on Microsoft Entity Framework Core. Contoso University is a database-driven application for managing students, courses, and instructor information at a fictional university. If you want to skip the tutorial and see the completed code right away, I pushed it to this repository.

Prerequisites

Many of these prerequisites apply to Linux workstation setup. User accounts are required for GitHub, Habitat Builder, and Docker Hub.

  1. Google Chrome – For browsing the sample application. Google Chrome is the most compatible browser for our application.
  2. Git – For source code management and cloning the source repository.
  3. GitHub – A GitHub account is used for authentication to Habitat Builder.
  4. Habitat Builder – A Habitat Builder account is required for building and publishing Habitat packages.
  5. Docker and Docker Hub – We will use Docker to run our application after building the Habitat package.

Workstation Setup

Chocolatey is an open-source, community-managed package manager for Windows (similar to Homebrew on Mac) used by a number of companies, including Chef. Chocolatey packages are vetted against VirusTotal; however, a more thorough vetting process should be adopted before using these packages in production environments. We use Chocolatey here to demonstrate how package managers work and how workstation setup on Windows can be streamlined.

Install Chocolatey, Habitat, Git, and Google Chrome

Install Chocolatey with PowerShell:

Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

Install Chef Habitat:

choco install habitat -y

Install Google Chrome:

choco install googlechrome -y

Install Git and refresh the system path:

choco install git -y
refreshenv

Configure Habitat

Start configuring the Habitat command-line interface (CLI) with hab setup. First, point the Habitat CLI to a Habitat Builder instance; this can be an on-premises Builder or the publicly hosted Builder. Choosing Yes will prompt you for the on-premises Builder endpoint (illustrated below; however, the remaining steps assume use of the public Builder):

Connect to an on-premise bldr instance? Yes
Enter a local builder

Enter the Origin name created on the Habitat Public Builder site:

Set up a default Origin? Yes
Default origin name: manny-rodriguez

An Origin key pair allows for secure uploads to the Builder Depot. Create one now, if needed:

Create a Public Signing Key for the Builder Depot? Yes

Add a Habitat Personal Access Token to your CLI configuration for uploading to Builder and checking job status:

Set up a default Habitat personal access token? Yes
Habitat personal access token: <TOKEN>

Setup now prompts you about a Control (CTL) Gateway. This will be covered in a separate blog post. Enter No to proceed:

Setup a default Habitat CTLGateway Secret? No

Add a binlink directory to the system path so package binaries can be easily found:

Add binlink to directory Path? Yes

Choose whether to enable or disable usage analytics:

Enable analytics? Yes

Package and Deploy a Windows ASP.NET Application

You’re now ready to start working with Habitat!

Habitat requires you to declare all application dependencies in a Habitat Plan. On Windows, this file is typically named plan.ps1. See here for additional information.

Let’s start with downloading the code, expanding the archive, and navigating to the appropriate directory:

cd c:
Invoke-Webrequest -uri https://code.msdn.microsoft.com/ASPNET-MVC-Application-b01a9fe8/file/169473/2/ASP.NET%20MVC%20Application%20Using%20Entity%20Framework%20Code%20First.zip -OutFile contosouniversity.zip
Expand-Archive contosouniversity.zip
cd contosouniversity

Now, we start authoring our Habitat Plan. The hab plan init command is useful for getting started here:

hab plan init --windows

The resulting directory structure is shown below:

tree habitat /F
| default.toml
| plan.ps1
| README.md
├── config
└── hooks

Package Variables and Dependencies

$pkg_name and $pkg_origin are populated automatically by Habitat based on the contents of the local repository. $pkg_maintainer and $pkg_license should be updated manually with the appropriate details. These variables are passed to functions and script files that are used as templates for package installation and configuration:

$pkg_name="contosouniversity"
$pkg_origin="myorigin"
$pkg_version="0.1.0"
$pkg_maintainer="Manny Rodriguez <Immanuel.Rodriguez@fake-email.com>"
$pkg_license=@("Apache-2.0")

Package dependencies should be declared at this point. Use the $pkg_deps variable for deployment/runtime dependencies. Here, we specify core/dsc-core, a core package that provides PowerShell Desired State Configuration (DSC), as a runtime dependency. This allows any configuration not handled in the run hook (described further down) to be implemented. We require PowerShell DSC to configure SQL Server 2017 and the ASP.NET application:

$pkg_deps=@("core/dsc-core")

$pkg_build_deps is for build-time dependencies. core/nuget is required for fetching dependent .NET packages:

$pkg_build_deps=@("core/nuget")

We use $pkg_binds to specify the database connection details:

$pkg_binds=@{"database"="username password port"}

Build Logic

For our application, we must override the standard build logic to make sure our ASP.NET package is built correctly. Specifically, we override the Invoke-Build and Invoke-Install functions. One difference between Linux and Windows plan syntax is naming: a Bash plan defines callbacks prefixed with do_ (for example, do_build), while a PowerShell plan defines Invoke- functions such as Invoke-Build:

function Invoke-Build {
   Copy-Item $PLAN_CONTEXT/../* $HAB_CACHE_SRC_PATH/$pkg_dirname -recurse -force
   nuget restore "$HAB_CACHE_SRC_PATH/$pkg_dirname/C#/$pkg_name/packages.config" -PackagesDirectory "$HAB_CACHE_SRC_PATH/$pkg_dirname/C#/packages" -Source "https://www.nuget.org/api/v2"
   nuget install MSBuild.Microsoft.VisualStudio.Web.targets -Version 14.0.0.3 -OutputDirectory $HAB_CACHE_SRC_PATH/$pkg_dirname/
   $env:VSToolsPath = "$HAB_CACHE_SRC_PATH/$pkg_dirname/MSBuild.Microsoft.VisualStudio.Web.targets.14.0.0.3/tools/VSToolsPath"
   ."$env:SystemRoot\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe" "$HAB_CACHE_SRC_PATH/$pkg_dirname/C#/$pkg_name/${pkg_name}.csproj" /t:Build /p:VisualStudioVersion=14.0
   if($LASTEXITCODE -ne 0) {
       Write-Error "dotnet build failed!"
   }
}

function Invoke-Install {
   ."$env:SystemRoot\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe" "$HAB_CACHE_SRC_PATH/$pkg_dirname/C#/$pkg_name/${pkg_name}.csproj" /t:WebPublish /p:WebPublishMethod=FileSystem /p:publishUrl=$pkg_prefix/www
}

Application Configuration

The configuration for our ASP.NET application is defined in a default.toml file. The configuration values are passed to the appropriate files when the application is built. Below we see that the application listening port is specified, as well as the IIS application pool, application name, and site name. PowerShell DSC uses these values to properly configure IIS. If any of these configuration items should change, make updates in the default.toml file then push the changes out to the Habitat Supervisor using hab config apply to apply the updates:

port = 8099
app_pool = "hab_pool"
app_name = "hab_app"
site_name = "hab_site"

For example, to push an updated configuration to a running Supervisor:

hab config apply --remote-sup=hab1.mycompany.com myapp.prod 1 /tmp/newconfig.toml
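To see how these flat key/value pairs become the values referenced in templates and hooks, here is a rough Python sketch of parsing them. This is illustrative only; the Habitat Supervisor uses a complete TOML parser internally:

```python
default_toml = """
port = 8099
app_pool = "hab_pool"
app_name = "hab_app"
site_name = "hab_site"
"""

def parse_flat_toml(text):
    # Minimal parser for the flat key = value entries above;
    # the real Supervisor handles full TOML, not just this shape.
    cfg = {}
    for line in text.splitlines():
        if "=" not in line:
            continue
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        # Quoted values are strings; bare values here are integers.
        cfg[key] = value.strip('"') if value.startswith('"') else int(value)
    return cfg

cfg = parse_flat_toml(default_toml)
print(cfg["port"], cfg["app_pool"])  # → 8099 hab_pool
```

Each key then surfaces in templates as {{cfg.port}}, {{cfg.app_pool}}, and so on.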

More on PowerShell DSC

PowerShell DSC is Microsoft’s preferred method of configuration management for Windows. This works by using PowerShell for low-level tasks and scripting, along with DSC to provide idempotent configurations that can be applied and executed only if there has been a change on the server that needs correction. Microsoft provides DSC resources, such as xWebAdministration, for quickly developing configurations that need to be applied to one or multiple servers, along with instructions on how to create custom resources. This fits in nicely with Habitat, allowing you to define a desired state for your applications. Plan variables (from plan.ps1) are used to update the templated PowerShell script with the correct values:

Configuration NewWebsite
{
   Import-DscResource -Module xWebAdministration
   Node 'localhost' {
       WindowsFeature ASP
       {
           Ensure = "Present"
           Name = "Web-Asp-Net45"
       }
       xWebAppPool {{cfg.app_pool}}
       {
           Name   = "{{cfg.app_pool}}"
           Ensure = "Present"
           State  = "Started"
       }
       xWebsite {{cfg.site_name}}
       {
           Ensure          = "Present"
           Name            = "{{cfg.site_name}}"
           State           = "Started"
           PhysicalPath    = Resolve-Path "{{pkg.svc_path}}"
           ApplicationPool = "{{cfg.app_pool}}"
           BindingInfo = @(
               MSFT_xWebBindingInformation
               {
                   Protocol = "http"
                   Port = {{cfg.port}}
               }
           )
       }
       xWebApplication {{cfg.app_name}}
       {
           Name = "{{cfg.app_name}}"
           Website = "{{cfg.site_name}}"
           WebAppPool =  "{{cfg.app_pool}}"
           PhysicalPath = Resolve-Path "{{pkg.svc_var_path}}"
           Ensure = "Present"
       }
   }
}

Defining Database Connection Logic

To connect to the database, execute the steps below to create the configuration for connecting to SQL Server.

In the next section, Lifecycle Event Handlers (Hooks), the completed code is shown.

The PowerShell code below will parse the csproj file for the Web.config entries and create a new one that can be updated with the appropriate connection string:

Copy-Item .\C#\ContosoUniversity\Web.config .\habitat\config
Remove-Item .\C#\ContosoUniversity\Web*.config
[xml]$xml = Get-Content .\C#\ContosoUniversity\ContosoUniversity.csproj
$nodes = $xml.Project.ItemGroup.Content | ? { $_.Include -like "Web.*" }
$nodes | % { $_.ParentNode.RemoveChild($_) }
$f = Resolve-Path .\C#\ContosoUniversity\ContosoUniversity.csproj
$xml.Save($f)
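The same node-pruning logic can be illustrated outside PowerShell. The following Python sketch uses a trimmed-down, hypothetical ItemGroup and mirrors what the snippet above does to the csproj:

```python
import xml.etree.ElementTree as ET

# A trimmed-down stand-in for the csproj ItemGroup (illustrative only).
csproj = """<Project><ItemGroup>
  <Content Include="Web.config" />
  <Content Include="Web.Debug.config" />
  <Content Include="Site.css" />
</ItemGroup></Project>"""

root = ET.fromstring(csproj)
for group in root.findall("ItemGroup"):
    for node in list(group.findall("Content")):
        # Drop every Web.* content entry, just as the PowerShell
        # snippet does with ParentNode.RemoveChild.
        if node.get("Include", "").startswith("Web."):
            group.remove(node)

remaining = [n.get("Include") for n in root.iter("Content")]
print(remaining)  # → ['Site.css']
```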

Update the web.config at habitat/config with the code below:

web.config

<connectionStrings>
     <add name="SchoolContext" connectionString="Data Source={{bind.database.first.sys.ip}},{{bind.database.first.cfg.port}};Initial Catalog=ContosoUniversity2;User ID={{bind.database.first.cfg.username}};Password={{bind.database.first.cfg.password}};" providerName="System.Data.SqlClient" />
</connectionStrings>
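At runtime the Supervisor replaces each {{bind.database...}} placeholder with values from the bound sqlserver.default service. As a rough Python sketch of that substitution, with made-up bind values standing in for what the Supervisor would supply:

```python
import re

# Hypothetical runtime values from the bound database service group
# (illustrative only; real values come from the Supervisor's gossip).
bind_values = {
    "bind.database.first.sys.ip": "172.17.0.2",
    "bind.database.first.cfg.port": "1433",
    "bind.database.first.cfg.username": "hab",
    "bind.database.first.cfg.password": "s3cr3t",
}

template = (
    "Data Source={{bind.database.first.sys.ip}},{{bind.database.first.cfg.port}};"
    "Initial Catalog=ContosoUniversity2;"
    "User ID={{bind.database.first.cfg.username}};"
    "Password={{bind.database.first.cfg.password}};"
)

def render(template, values):
    # Replace each {{key}} placeholder with its bound value.
    return re.sub(r"\{\{(.+?)\}\}", lambda m: values[m.group(1)], template)

print(render(template, bind_values))
```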

The templatized web.config will need to be updated during the init hook with the code below:

Init Hook Update

Set-Location {{pkg.svc_path}}\var
New-Item -Name Web.config -ItemType SymbolicLink -target "{{pkg.svc_config_path}}/Web.config" -Force | Out-Null

One last step is needed. The run hook needs to be able to update the permissions of the web.config file. Add the code below to the run hook:

Run Hook Update

Import-Module "{{pkgPathFor "core/dsc-core"}}/Modules/DscCore"
Start-DscCore (Join-Path {{pkg.svc_config_path}} website.ps1) NewWebsite
$pool = "{{cfg.app_pool}}"
$access = New-Object System.Security.AccessControl.FileSystemAccessRule "IIS APPPOOL\$pool", "ReadAndExecute", "Allow"
$acl = Get-Acl "{{pkg.svc_config_path}}/Web.config"
$acl.SetAccessRule($access)
$acl | Set-Acl "{{pkg.svc_config_path}}/Web.config"

Lifecycle Event Handlers (Hooks)

On Windows, PowerShell Core is used in the Habitat Plan to implement event-driven hooks which occur throughout the lifecycle of applications/services. In our example, we will focus on the init and run hooks (these are most common).

The init hook executes when the application package is initially installed and can be used to ensure certain files are available or configuration items are in place:

Init Hook

Set-Location {{pkg.svc_path}}
if(Test-Path var) { Remove-Item var -Recurse -Force }
New-Item -Name var -ItemType Junction -target "{{pkg.path}}/www" | Out-Null
Set-Location {{pkg.svc_path}}\var
New-Item -Name Web.config -ItemType SymbolicLink -target "{{pkg.svc_config_path}}/Web.config" -Force | Out-Null

The run hook executes after the init hook, either when the application package starts or is updated, or when the package configuration changes. In our case, the run hook prepares the server for application installation and starts the service itself. PowerShell DSC resources are made available by downloading them from the PowerShell Gallery, a public repository hosted by Microsoft, though they can also be downloaded from elsewhere. Permissions are also set for the IIS configuration in our run hook. Any arbitrary PowerShell code can be used here to configure the application:

Run Hook

# The Powershell Progress stream can sometimes interfere
# with the Supervisor output. Its non critical so turn it off
$ProgressPreference="SilentlyContinue"

# We need to install the xWebAdministration DSC resource.
# Habitat runs its hooks inside of Powershell Core but DSC
# configurations are applied in a hosted WMI process by
# Windows Powershell. In order for Windows Powershell to locate
# the installed resource, it must be installed using Windows
# Powershell instead of Powershell Core. We can use Invoke-Command
# and point to localhost to "remote" from Powershell Core to
# Windows Powershell.
Invoke-Command -ComputerName localhost -EnableNetworkAccess {
    $ProgressPreference="SilentlyContinue"
    Write-Host "Checking for nuget package provider..."
    if(!(Get-PackageProvider -Name nuget -ErrorAction SilentlyContinue -ListAvailable)) {
        Write-Host "Installing Nuget provider..."
        Install-PackageProvider -Name NuGet -Force | Out-Null
    }
    Write-Host "Checking for xWebAdministration PS module..."
    if(!(Get-Module xWebAdministration -ListAvailable)) {
        Write-Host "Installing xWebAdministration PS Module..."
        Install-Module xWebAdministration -Force | Out-Null
    }
}

# Leverage the Powershell Module in the dsc-core package
# that makes applying DSC configurations in Powershell
# Core simple.
Import-Module "{{pkgPathFor "core/dsc-core"}}/Modules/DscCore"
Start-DscCore (Join-Path {{pkg.svc_config_path}} website.ps1) NewWebsite

# The svc_config_path lacks an ACL for the USERS group
# so we need to ensure the app pool user can access those files
$pool = "{{cfg.app_pool}}"
$access = New-Object System.Security.AccessControl.FileSystemAccessRule `
    "IIS APPPOOL\$pool", `
    "ReadAndExecute", `
    "Allow"
$acl = Get-Acl "{{pkg.svc_config_path}}/Web.config"
$acl.SetAccessRule($access)
$acl | Set-Acl "{{pkg.svc_config_path}}/Web.config"

# The run hook must run indefinitely or else the Supervisor
# will think the service has terminated and will loop
# trying to restart it. The above DSC apply starts our
# application in IIS. We will continuously poll our app
# and cleanly shut down only if the application stops
# responding or if the Habitat service is stopped or
# unloaded.
try {
    Write-Host "{{pkg.name}} is running"
    $running = $true
    while($running) {
        Start-Sleep -Seconds 1
        $resp = Invoke-WebRequest "http://localhost:{{cfg.port}}/{{cfg.app_name}}" -Method Head
        if($resp.StatusCode -ne 200) {
            $running = $false
        }
    }
}
catch {
    Write-Host "{{pkg.name}} HEAD check failed"
}
finally {
    # Add any cleanup here which will run after supervisor stops the service
    Write-Host "{{pkg.name}} is stopping..."
    ."$env:SystemRoot\System32\inetsrv\appcmd.exe" stop apppool "{{cfg.app_pool}}"
    ."$env:SystemRoot\System32\inetsrv\appcmd.exe" stop site "{{cfg.site_name}}"
    Write-Host "{{pkg.name}} has stopped"
}
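The poll-until-unhealthy control flow at the end of the run hook is worth internalizing. Here is a minimal Python sketch of the same pattern, with a stubbed health check standing in for the HTTP HEAD request (the stub and its failure point are invented for illustration):

```python
import itertools

def head_status(attempt):
    # Stand-in for the HTTP HEAD health check: report healthy (200)
    # for the first three polls, then a failure (503).
    return 200 if attempt < 3 else 503

def supervise(check, max_polls=100):
    # Poll the app once per tick; count successful polls and stop
    # on the first non-200 response, mirroring the run hook's loop.
    healthy = 0
    for attempt in itertools.count():
        if attempt >= max_polls or check(attempt) != 200:
            break
        healthy += 1
    return healthy

print(supervise(head_status))  # → 3
```

In the real hook, leaving the loop falls through to the finally block, which stops the IIS app pool and site so the Supervisor sees a clean shutdown.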

Building the Package

The Habitat Studio is a cleanroom for building and testing your Habitat packages. On Windows, the Studio exposes the Windows system, core Habitat services, and the application source directories. The Studio downloads any required missing packages and updates pre-installed packages on startup, so the first launch may take a few minutes longer. When using Docker to run the Habitat Studio, the underlying Windows containers will also be pulled, which can take additional time. Executing the build command within the Studio gathers package dependencies and source code (which may be installation binaries for COTS applications) and assembles a Habitat Artifact (HART) package for testing and distribution:

hab studio enter -W

or

$env:HAB_DOCKER_OPTS="--memory 2gb -p 80:8099"
hab studio enter -D
build

Testing the Habitat Package Locally

To test the package locally within the Habitat Studio, run the commands below. This will install and configure SQL Server 2017, IIS, ASP.NET, and our sample application. After loading the core/sqlserver package, we check the Habitat Supervisor log to ensure it is fully running before loading other dependent packages:

hab svc load core/sqlserver
Get-SupervisorLog
sqlserver.default hook[post-run]:(HK): 1> 2> 3> 4> 5> 6> Application user setup complete
hab svc load manny-rodriguez/contosouniversity --bind database:sqlserver.default

Open a browser to http://<local IP>/hab_app after seeing the following output in the Supervisor log:

contosouniversity.default(O): contosouniversity is running


Congratulations! You’re almost there. Finally, the Habitat package should be uploaded to the Builder Depot using the command below. You should point to the current build file to ensure you publish the latest changes:

hab pkg upload .\results\manny-rodriguez-contosouniversity-0.2.0-20190314110601-x86_64-windows.hart

Deploying to a Server

When your applications are packaged with Chef Habitat, the only installation requirement on your target servers is Chef Habitat. Chocolatey can be used here again or any other deployment method. Habitat will also need to be configured as outlined in Workstation Setup. Start the Habitat Supervisor using hab sup run and execute the same commands used when testing locally to load your application. After the Supervisor is started, a new PowerShell prompt may need to be opened. A Windows service can also be used to run the Habitat Supervisor unattended, which we’ll cover in a subsequent post:

hab sup run
hab svc load core/sqlserver
Get-SupervisorLog
sqlserver.default hook[post-run]:(HK): 1> 2> 3> 4> 5> 6> Application user setup complete
hab svc load manny-rodriguez/contosouniversity --bind database:sqlserver.default


Once again, open a browser to http://<server hostname or IP>/hab_app once the Supervisor log indicates your service is running.

Exporting Packages for Docker

Aside from deploying HART files directly to traditional server environments, Habitat packages can be exported to Docker containers or other runtime formats using hab pkg export. We illustrate this below using two packages: the core package for SQL Server 2017 and the one we just built for our sample application.

hab pkg export docker core/sqlserver
hab pkg export docker .\results\manny-rodriguez-contosouniversity-0.2.0-20190314110601-x86_64-windows.hart

In order for our application container to communicate with the SQL Server container, we need to note the IP address of the SQL Server container and feed this to the docker run command for our application. Following is some PowerShell code to capture the SQL Server container IP in a variable:

$sql = docker run -d --memory 2GB core/sqlserver
$ip = docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $sql
docker run -it -p 80:8099 myorigin/contosouniversity --peer $ip --bind database:sqlserver.default
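The Go template passed to docker inspect walks every network attached to the container and concatenates their IP addresses. As a rough illustration, the equivalent extraction in Python over a truncated, hypothetical inspect payload:

```python
import json

# Truncated, hypothetical shape of `docker inspect` output for one container.
inspect_output = json.loads("""
[{"NetworkSettings": {"Networks": {"nat": {"IPAddress": "172.25.119.70"}}}}]
""")

def container_ip(inspect_json):
    # Mirrors {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}:
    # concatenate the IP of every attached network.
    networks = inspect_json[0]["NetworkSettings"]["Networks"]
    return "".join(net["IPAddress"] for net in networks.values())

print(container_ip(inspect_output))  # → 172.25.119.70
```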

Wrapping Up

In this blog post we took a modern ASP.NET application and packaged and deployed it with Chef Habitat. This included the prerequisites of installing the required software, creating the needed accounts, and setting up Chef Habitat. We created and modified a Habitat Plan for packaging and deploying the ASP.NET application, learned about Lifecycle Event Handlers (hooks), and used them to build the package and run the application. A local environment was used to build, package, test, and upload to the Builder Depot. Finally, Docker containers were exported for SQL Server 2017 and our ASP.NET application; once the containers were started, the applications and Windows features were installed and configured, and testing confirmed the ASP.NET application was running.

The key takeaway is that applications can be quickly built, packaged, deployed, and managed using Chef Habitat. This changes how applications are managed throughout their lifecycle, reducing time spent in the development cycle and enabling quicker deployments, which saves time and money.

In the next blog post, we will discuss packaging a legacy ASP.NET application that uses a version of SQL Server that is no longer supported.

Perficient can help!

With Windows and SQL Server end-of-support happening beginning this year, now is the time to begin migrating those legacy applications with Habitat. This approach eliminates your dependencies on these legacy operating systems and helps you avoid costly support contracts.  We can also help you modernize your application development processes at the core, using an OS independent approach that makes your business more innovative and resilient for the future.  Let us know if we can help.
