Chef Articles / Blogs / Perficient
https://blogs.perficient.com/tag/chef/

Using Chef Habitat to Migrate Legacy Windows Applications
https://blogs.perficient.com/2019/08/14/using-chef-habitat-to-migrate-legacy-windows-applications/
Wed, 14 Aug 2019

Overview

In this blog post we will discuss using Chef Habitat to migrate legacy Windows applications to a modern, secure platform. With support ending for Windows Server 2008 and Microsoft SQL Server 2008, legacy Windows applications need to be migrated to newer, supportable platforms such as Windows Server 2016 and 2019 along with Microsoft SQL Server 2017 and later. In this post, the third in our series on Chef Habitat, we will migrate a legacy ASP.NET 2.0 application from Windows Server 2008 and Microsoft SQL Server 2005 to a Windows Server 2016 physical or virtual machine and to a Docker container running Windows Server 2016 Core. Windows Server 2016 Core is a stripped-down version of Windows Server without a GUI; it is managed from the command line, as you would manage Linux. Since there is no GUI, the attack surface is reduced and the OS image is significantly smaller: only what the application needs will be installed. Chef has a GitHub repository with a legacy ASP.NET 2.0 application from the CodePlex Archive that we will be installing, showing how legacy ASP.NET 2.0 applications can be migrated to modern and secure platforms.

Prerequisites

See the previous blog post under Prerequisites and Workstation Setup for installing the required tools and configuring Chef Habitat.

VMs need to be running Windows Server 2016 with a minimum of 4GB of RAM. Containers need 2GB of RAM each.

Note: This application currently only runs on Windows Server on physical or virtual hardware, or in a container, but not directly on Windows 10. For Docker on Windows, you must be running in Windows container mode. AWS or Azure images will need to have Docker preinstalled and at least 50GB of disk space.

Common Steps for Windows Server and Containers

Now download the code from the GitHub repository. It needs to be at the root of the C: drive; the MS SQL Server installation can fail if the path is too long. Entering a local Studio at c:\users\administrators\sqlwebadmin will result in a much longer install path than entering from c:\sqlwebadmin. Clone the repository and cd into the top-level directory:

cd c:\
git clone https://github.com/habitat-sh/sqlwebadmin
cd sqlwebadmin

A local default origin should have been setup from the Prerequisites and Workstation Setup section in the previous blog post.

The INSTALL_HOOK feature now needs to be enabled. See this blog post for more information.

$env:HAB_FEAT_INSTALL_HOOK=$true

This plan takes advantage of several dependencies that use this feature to run an install hook when the dependency is installed, for things like enabling Windows features and registering a COM component.

Installing on a Windows Server 2016 (Physical or Virtual)

Enter a local Habitat Studio and load core/sqlserver2005:

hab studio enter
hab svc load core/sqlserver2005

This will take several minutes since it is downloading and installing the .NET 2.0 runtime and installing SQL Server 2005. While it is loading, build this plan:

build

Now we need to wait for SQL Server’s post-run hook to complete. View the Supervisor output with Get-SupervisorLog and wait for the message:

sqlserver2005.default hook[post-run]:(HK): 1> 2> 3> 4> 5> 6> Application user setup complete

Now load <your_origin>/sqlwebadmin:

hab svc load <your_origin>/sqlwebadmin --bind database:sqlserver2005.default

In the Supervisor log wait for:

sqlwebadmin.default(O): sqlwebadmin is running

The website should now be accessible. Browse to http://localhost:8099/databases.aspx.

Exporting to a Windows Server 2016 Docker Container

Export the core/sqlserver2005 package to a docker image:

$env:HAB_SQLSERVER2005="{\`"svc_account\`":\`"NT AUTHORITY\\SYSTEM\`"}"
hab pkg export docker --memory 2gb core/sqlserver2005

The first line above makes sure that the SQL Server install sets the svc_account to the SYSTEM account instead of the default NETWORK SERVICE account, which is advisable in a container environment.

Build our sqlwebadmin package (make sure you are still in c:\sqlwebadmin):

hab pkg build .

Export our sqlwebadmin HART package to a Docker image:

hab pkg export docker --memory 2gb <path to HART file>

Now let's bring these two containers together into a Habitat Supervisor ring:

$sql = docker run -d --env HAB_LICENSE=accept-no-persist --memory 2gb core/sqlserver2005
$ip = docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $sql
docker run -it --env HAB_LICENSE=accept-no-persist <your_origin>/sqlwebadmin --bind database:sqlserver2005.default --peer $ip

Alternatively you can use Docker Compose along with the provided docker-compose.yml to bring up the containers. Update the docker-compose.yml file with your origin and the environment variables below for each container:

version: '2.2'
services:
  sqlserver2005:
    image: core/sqlserver2005
    environment:
      - HAB_LICENSE=accept
  sqlwebadmin:
    image: <your_origin>/sqlwebadmin
    environment:
      - HAB_LICENSE=accept
    ports:
      - 8099:8099
    links:
      - sqlserver2005
    depends_on:
      - sqlserver2005
    command: --peer sqlserver2005 --bind database:sqlserver2005.default

networks:
  default:
    external:
      name: nat
Then bring up the containers:

docker-compose up

Grab the IP address of the sqlwebadmin container:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aql)

Browsing to http://<CONTAINER_IP>:8099/databases.aspx should bring up the application.

Perficient can help!

In this blog post we walked through the steps of using Chef Habitat to migrate a legacy Windows application to a modern, secure platform. With Windows and SQL Server end-of-support dates arriving beginning this year, now is the time to begin migrating those legacy applications with Habitat. This approach eliminates your dependencies on these legacy operating systems and helps you avoid costly support contracts.  We can also help you modernize your application development processes at the core, using an OS-independent approach that makes your business more innovative and resilient for the future.  Let us know if we can help.

DevSecOps in the Cloud – Policy and Practice
https://blogs.perficient.com/2019/08/13/devsecops-in-the-cloud-policy-and-practice/
Tue, 13 Aug 2019

Cloud computing is now ubiquitous throughout the software development industry.  There are many cloud service providers offering everything from ‘bare-metal’ virtual servers to complete server-less computing platforms.  The speed with which computing resources can be reserved and instantiated is a major contributing factor to the success of DevOps, where repeatability and automation are central.  However, cloud-based computing comes with significant security risk if not properly defined, governed, and audited.  This blog post introduces these risks and provides guidance on how to establish a set of security policies and practices for cloud resource utilization.

Cloud Resource Policies and Standards

Development and security teams do not speak the same language, nor are they motivated in the same way.  For development teams the core drivers are system requirements, whereas for the security team the primary motivator is identification and control of risk.  These and other disciplines must work jointly to create and deploy effective and secure applications.  They are often hampered, however, by a limited understanding of what is needed for a secure cloud-enabled system and how to implement those policies.  There are several challenges, therefore, in translating security needs from one group to another.

The key challenge is to define security policies for cloud resource use in such a way as to facilitate an implementable (and auditable) practice.  But before we can define a set of security policies for cloud-based computing we must first understand the shared responsibility model (1).  Simply put, this is the agreement that cloud service providers define for the use of those resources and where each party’s responsibility begins and ends.  For the cloud provider that is ‘security of the cloud’ – which means the physical security of equipment, facilities, power, and personnel who maintain the underlying infrastructure of the service.  For the development team this is ‘security in the cloud’ – which translates to all security requirements for applications, data, and network configuration.  Because of this model, it is very important to distinguish which security policies relate to on-premise computing vs. cloud-based computing.  Therefore, some of the existing security policies will remain relevant and others will need to be enhanced or modified to support the different cloud usage models.

Cloud Usage Models

There are multiple ways to use cloud computing.  These include infrastructure-as-a-service (IaaS), where the provider establishes a set of virtual servers, storage, and networking, but leaves all of the software installation and configuration to the user.  The next level is platform-as-a-service (PaaS) where the cloud-provider creates a computing platform, such as a web-application stack of server, data store, and supporting components, that the development team leverages for their solution.  Yet another approach is software-as-a-service (SaaS) where the entire software product is offered from a cloud-based platform; no direct installation is required.  There are many other cloud-resources that are available to development teams (e.g. function-as-a-service – FaaS – for event driven server-less computing) that again require special consideration for security risks.

Fortunately for development and security teams, a baseline set of standards has been published by the Center for Internet Security (CIS) that extends a common set of best-practice security policies into the cloud computing environment.  These are presented as a series of risks and controls for commonly encountered security issues in cloud-based environments.  For example, a network firewall configuration in AWS that opens the SSH communication port 22 to the world (i.e. 0.0.0.0/0 and ::/0 in IPv4 and IPv6 notation, respectively) is clearly a significant risk to any computing environment.  As recently seen with the Capital One data breach, improper establishment of security policies in cloud environments can lead to exposure of protected information.

Beyond the need for a baseline security policy set, each corporate security group will have specific needs for the organization.  These additional policies must be defined and treated in the same manner as any other corporate policy.  For example, if the organization is subject to the General Data Protection Regulation (GDPR) established in the European Union (EU), then the policies created for data storage and backup in the cloud must reflect those regulatory requirements.  To be compliant, each security policy and control defined by the security team must be implemented by the development and operations support teams.  This is a very time-consuming and labor-intensive task for many organizations.  Clearly DevOps automation will be very helpful in this regard.

Cloud Compliance Automation

All major cloud service providers offer a well-documented and defined API for access to their service platform.  Many development teams leverage, or would like to leverage, these access points to automate the creation and configuration of cloud resources.  This access is usually configured to implement a secure access policy where only granted permissions can be employed (such as server creation/destruction).  As noted above, this implies that some form of identity management must be implemented according to security policy.  However, after the establishment of these resources it is up to the security team to periodically audit the various platforms for continued conformance to the defined policies.  Here is where compliance automation can be most effective.

There are several tools available to automatically verify the configuration state of cloud-based resources, especially for the PaaS and IaaS usage models.  Evaluation of established policies is defined by a ‘profile’ that represents a set of well-defined configuration comparisons.  For example, this control from the CIS-based InSpec profile verifies that the above-mentioned default security group vulnerability has been removed:

control 'cis-aws-foundations-4.4' do
  title 'Ensure the default security group of every VPC restricts all traffic'

  aws_vpcs.vpc_ids.each do |vpc|
    describe aws_security_group(group_name: 'default', vpc_id: vpc) do
      its('inbound_rules') { should be_empty }
      its('outbound_rules') { should be_empty }
    end
  end
end

By building automated security policy audits into the CI/CD DevOps pipeline, the security team can be better informed of policy violations, their frequency, and time-to-remediation metrics.  This type of report is also very helpful to external auditors when the organization moves to obtain certain levels of certification, such as HITRUST for life-science and healthcare organizations.

Cloud Resource Governance and Oversight

Finally, it is important for every organization that intends to make extensive use of cloud-based resources to have a mechanism for periodic evaluation of cloud resource policies.  From time to time new capabilities are offered by cloud providers and existing capabilities undergo significant changes.  The establishment of a governance and oversight body is therefore necessary to ensure that policies stay current with the growing cloud computing needs of development teams.

In addition to periodic review of cloud security policies, it is a best practice to ensure that there are minimal, but necessary, controls placed on development teams' use of cloud resources.  This is for several reasons: first, to reduce costs from frivolous or careless instantiation of resources that are then unused or forgotten.  Second, there must be controls around what kinds of cloud resources are to be made available, and from which vendors.  Finally, as a cost control mechanism, a periodic audit of how teams are using cloud resources should be conducted.

As discussed in this post, there are three aspects to establishing a cloud resource utilization security policy and set of practices.  First is to define the appropriate set of baseline policies, standards, guidelines and practices to be enforced for all teams using specific utilization models.  Second is to automate the audit of these resources to ensure that the security team is aware of policy violations and to provide a mechanism for rapid resolution of issues.  Finally, it is important to establish governance and oversight of teams to avoid costly mistakes from overuse or improper use of cloud resources.

Links:

1) AWS Shared Security Model, Azure Shared Security Model

Perficient’s DevOps Journey & Approach – short video
https://blogs.perficient.com/2019/05/31/chefconf-2019-recap-1-perficients-devops-practice-approach-short-video/
Fri, 31 May 2019

Perficient colleagues recently attended Chef Software’s annual conference, ChefConf 2019 in Seattle, Washington.  In this short video, Perficient DevOps Delivery Director Sean Wilbur shares his point of view on the evolution of Perficient’s DevOps practice.  He also talks about how DevOps transformation is much more than just tools and code – and how culture and soft skills play a significant role in driving digital transformation.

Thanks to our friends at Digital Anarchists and DevOps.com for creating these videos.  You can find other videos from ChefConf 2019 here.

ChefConf 2019 – Are you ready for it?
https://blogs.perficient.com/2019/05/07/chefconf-2019-are-you-ready-for-it/
Tue, 07 May 2019

ChefConf 2019 kicks off in Chef’s (and my) hometown of Seattle in less than two weeks on May 20th.  If you’re interested in going and haven’t already registered, check out more information here and use code Hugs4Chef19 for 10% off your registration.

Things I hope to hear at ChefConf

This year promises to be a pretty significant year at ChefConf.  Since last year’s conference, the company has gone through some significant changes – new leadership, product updates, go-to-market approaches, etc.  Here are some things I am eager to hear more about:

  • Chef Habitat:  Chef has had the “infrastructure as code” and “compliance as code” story lines solidified for some time, but only in the past year has the Habitat “managing applications as code” story really started to coalesce in a meaningful way.  Chef Habitat is such a fundamental change and innovation to application development while also solving immediate needs like legacy app modernization that bringing that story together takes time.   You can read more about Perficient’s thoughts on Habitat here and here.  I am also eager to hear more about the “Habitat managed Chef” and “Habitat managed InSpec” and how to integrate these patterns in new and interesting ways.
  • Chef as an Open Source Company & the Enterprise Automation Stack(s):  ICYMI, Chef announced its move to a full open source model at the start of April as well as some significant updates to their product naming and bundling.  Chef’s new Enterprise Automation Stack is a compelling way for customers to address automation in a fundamentally holistic sense – infrastructure, compliance, and applications – whether on-prem, in one or many clouds, etc.  The move to full open source gives additional opportunities for innovation on the Chef platform.  With just a bit over a month since this change, I’m eager to see how Chef will communicate the many implications of this change to clients and partners.
  • Chef and the ecosystem: For partners like Perficient who deliver enterprise-grade DevSecOps transformations, Chef and all its wonderful parts are only part of the people, process, and tools story.  Chef has always sought new ISV and major cloud integrations.  I look forward to hearing more about how Chef has advanced these partnerships, specifically with the launch of Azure DevOps and the continued enterprise progress Google Cloud is making.

Key Sessions to Attend

Chef is one of the most fun, energetic and well-run conferences I’ve ever attended.  This year, there are some amazing workshops and sessions available to attendees and as per usual, I won’t be able to attend all of them.  Here are some I’m very interested in attending:

  • Monday:  I’ll be attending the Managing a DevOps Transformation workshop to get an update from Chef’s Professional Services team on their point of view of successful patterns driving DevOps transformation, and my colleague Sean Wilbur  will be attending Modern Operations on Azure – Automate and monitor your infrastructure.
  • Tuesday/Wednesday:  I’m going to have a tough time both days, as there are multiple competing sessions, all of which are interesting.

    Tuesday Wednesday Schedule for ChefConf 2019


Ways to Connect

I’m looking forward to connecting with other ChefConf attendees during breaks, at the evening events and parties, and at Perficient’s booth in the Expo Hall (#100).  I hope you’ll stop by, introduce yourself, grab some commemorative ChefConf 2019 and Seattle specific stickers (they are super cool!), and enter to win one of our prizes.

Before you go, be sure to download the ChefConf 2019 Official App, start building your schedule, and add me and your other Chef friends as connections!  I look forward to seeing you there.

Join Perficient at ChefConf 2019
https://blogs.perficient.com/2019/05/02/join-perficient-at-chefconf-2019/
Thu, 02 May 2019

ChefConf 2019, taking place in Seattle on May 20-23, is less than a month away! Perficient is proud to be exhibiting at ChefConf as a Silver sponsor. ChefConf offers a week full of workshops, deep-dive sessions, and premier keynote speakers. It is one of the best opportunities for the DevOps community to come together to learn and share insights around the latest strategies and tactics for transformational application delivery.

Connect with us at ChefConf

Have questions about starting your DevOps journey? Looking to accelerate your application modernization strategy by leveraging Chef Habitat? We can help.

Our Chef experts will be at booth #100 in the Expo Hall. They will be available to answer all of your questions around your DevOps transformation, whether it be implementing your Chef architecture and tooling or full-scale agile delivery program implementation.

Our team of expert DevOps consultants will share with you how we can partner with your organization to provide services across our DevOps Jump Start, agile DevOps transformation, and modern application factory.

Upgrade Your Keyboard

In addition to hearing from our experts as they share their thought leadership and discuss recent customer stories in our booth, also stop by for a chance to win a Das Keyboard 4Q!

We look forward to seeing you soon at ChefConf 2019!


Are you attending ChefConf this year? Reach out to connect with our team.
Getting Started with Chef Habitat on Windows
https://blogs.perficient.com/2019/04/02/getting-started-with-chef-habitat-on-windows/
Tue, 02 Apr 2019

Overview

This is the second post in our series on Chef Habitat. For an introduction to Habitat, please refer to our initial post. In this write-up, we will look closely at Habitat in a Windows context. There are a few differences between Habitat on Windows and on Linux or Mac, which we will point out. Additionally, we will take you through the steps to package your own Windows applications with Habitat.

Habitat on Windows uses PowerShell instead of Linux shell scripting to build packages and perform package installation. Dependent packages must run on Windows or be cross-platform, such as .NET Core or Java. PowerShell Core is used for the Habitat Studio on Windows, providing a clean room for working with packages. You can also run Habitat Studio in a Windows Server 2016 container for additional isolation. Along with modern Windows applications, Habitat supports build, packaging, and deployment of legacy Windows applications. See this post from Chef for additional information and this one for legacy Windows applications.

Chef has created packages for PowerShell Core, Visual Studio Build Tools, 7-Zip, WIX, .NET Core, and Visual C++ redistributable binaries that can be used as dependencies in your Habitat Plan to create custom application packages. Once a HART package exists, you can deploy it directly to physical or virtual servers, or export the package for target run times such as Docker, Kubernetes, or Cloud Foundry. HART packages can also be uploaded to a public or private Builder for archival and future deployments.

Our sample application, Contoso University, is written in ASP.NET and based on Microsoft Entity Framework Core. Contoso University is a database-driven application for managing students, courses, and instructor information at a fictional university. If you want to skip the tutorial and see the completed code right away, I pushed it to this repository.

Prerequisites

Many of these prerequisites apply to Linux workstation setup. User accounts are required for GitHub, Habitat Builder, and Docker Hub.

  1. Google Chrome – For browsing the sample application. Google Chrome is the most compatible browser for our application.
  2. Git – For source code management and cloning the source repository.
  3. GitHub – A GitHub account is used for authentication to Habitat Builder.
  4. Habitat Builder – A Habitat Builder account is required for building and publishing Habitat packages.
  5. Docker and Docker Hub – We will use Docker to run our application after building the Habitat package.

Workstation Setup

Chocolatey is an open-source, community-managed package manager for Windows (similar to Homebrew on Mac). Chocolatey is used by a number of companies, including Chef. Chocolatey packages are vetted against VirusTotal; however, a more thorough vetting process should be adopted if using these packages in production environments. We use Chocolatey here to demonstrate how package managers work and how workstation setup on Windows can be streamlined.

Install Chocolatey, Habitat, Git, and Google Chrome

Install Chocolatey with PowerShell:

Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

Install Chef Habitat:

choco install habitat -y

Install Google Chrome:

choco install googlechrome -y

Install Git and refresh the system path:

choco install git -y
refreshenv

Configure Habitat

Start configuring the Habitat command-line interface (CLI) using hab setup. First, point the Habitat CLI to a Habitat Builder instance. This can be an on-premise Builder or the publicly-hosted Builder. Choosing Yes will prompt you for the on-premise Builder endpoint; the answer below and the remaining steps assume use of the public Builder:

Connect to an on-premise bldr instance? No

Enter the Origin name created on the Habitat Public Builder site:

Set up a default Origin? Yes
Default origin name: manny-rodriguez

An Origin key pair allows for secure uploads to the Builder Depot. Create one now, if needed:

Create a Public Signing Key for the Builder Depot? Yes

Add a Habitat Personal Access Token to your CLI configuration for uploading to Builder and checking job status:

Set up a default Habitat personal access token? Yes
Habitat personal access token: <TOKEN>

Setup now prompts you about a Control (CTL) Gateway. This will be covered in a separate blog post. Enter No to proceed:

Setup a default Habitat CTLGateway Secret? No

Add a binlink to the system path for package binaries to be easily found:

Add binlink to directory Path? Yes

Choose whether to enable or disable usage analytics:

Enable analytics? No

Package and Deploy a Windows ASP.NET Application

You’re now ready to start working with Habitat!

Habitat requires you to declare all application dependencies in a Habitat Plan. On Windows, this file is typically named plan.ps1. See here for additional information.

Let’s start with downloading the code, expanding the archive, and navigating to the appropriate directory:

cd c:\
Invoke-Webrequest -uri https://code.msdn.microsoft.com/ASPNET-MVC-Application-b01a9fe8/file/169473/2/ASP.NET%20MVC%20Application%20Using%20Entity%20Framework%20Code%20First.zip -OutFile contosouniversity.zip
Expand-Archive contosouniversity.zip
cd contosouniversity

Now, we start authoring our Habitat Plan. The hab plan init command is useful for getting started here:

hab plan init --windows

The resulting directory structure is shown below:

tree habitat /F
| default.toml
| plan.ps1
| README.md
├── config
└── hooks

Package Variables and Dependencies

$pkg_name and $pkg_origin will be automatically updated by Habitat based on the contents of the local repository. $pkg_maintainer and $pkg_license should be updated manually with the appropriate details. These variables are passed to functions and script files that are used as templates for package installation and configuration:

$pkg_name="contosouniversity"
$pkg_origin="myorigin"
$pkg_version="0.1.0"
$pkg_maintainer="Manny Rodriguez <Immanuel.Rodriguez@fake-email.com>"
$pkg_license=@("Apache-2.0")

Package dependencies should be declared at this point. Use the $pkg_deps variable for deployment/runtime dependencies. Here, we specify core/dsc-core as a package dependency, a core package providing PowerShell Desired State Configuration (DSC). This allows any configuration not handled in the run hook (described further down) to be implemented. We require PowerShell DSC to configure SQL Server 2017 and the ASP.NET application:

$pkg_deps=@("core/dsc-core")

$pkg_build_deps is for build-time dependencies. Here, core/nuget is required for fetching dependent .NET packages:

$pkg_build_deps=@("core/nuget")
We use $pkg_binds to specify the database connection details:

$pkg_binds=@{"database"="username password port"}

Build Logic

For our application, we must override the standard build logic to make sure our ASP.NET package is built correctly. Specifically, we override the Invoke-Build and Invoke-Install functions. One difference between the platforms: on Linux these plan callbacks are Bash functions named do_build and do_install, while on Windows they are PowerShell functions named Invoke-Build and Invoke-Install:

function Invoke-Build {
    Copy-Item $PLAN_CONTEXT/../* $HAB_CACHE_SRC_PATH/$pkg_dirname -recurse -force
    nuget restore "$HAB_CACHE_SRC_PATH/$pkg_dirname/C#/$pkg_name/packages.config" -PackagesDirectory "$HAB_CACHE_SRC_PATH/$pkg_dirname/C#/packages" -Source "https://www.nuget.org/api/v2"
    nuget install MSBuild.Microsoft.VisualStudio.Web.targets -Version 14.0.0.3 -OutputDirectory $HAB_CACHE_SRC_PATH/$pkg_dirname/
    $env:VSToolsPath = "$HAB_CACHE_SRC_PATH/$pkg_dirname/MSBuild.Microsoft.VisualStudio.Web.targets.14.0.0.3/tools/VSToolsPath"
    ."$env:SystemRoot\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe" "$HAB_CACHE_SRC_PATH/$pkg_dirname/C#/$pkg_name/${pkg_name}.csproj" /t:Build /p:VisualStudioVersion=14.0
    if($LASTEXITCODE -ne 0) {
        Write-Error "dotnet build failed!"
    }
}

function Invoke-Install {
    ."$env:SystemRoot\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe" "$HAB_CACHE_SRC_PATH/$pkg_dirname/C#/$pkg_name/${pkg_name}.csproj" /t:WebPublish /p:WebPublishMethod=FileSystem /p:publishUrl=$pkg_prefix/www
}


Application Configuration

The configuration for our ASP.NET application is defined in a default.toml file. The configuration values are passed to the appropriate files when the application is built. Below we see that the application listening port is specified, as well as the IIS application pool, application name, and site name. PowerShell DSC uses these values to properly configure IIS. If any of these configuration items should change, make updates in the default.toml file, then push the changes out to the Habitat Supervisor with hab config apply:

port = 8099
app_pool = "hab_pool"
app_name = "hab_app"
site_name = "hab_site"
hab config apply --remote-sup=hab1.mycompany.com myapp.prod 1 /tmp/newconfig.toml

More on PowerShell DSC

PowerShell DSC is Microsoft’s preferred method of configuration management for Windows. This works by using PowerShell for low-level tasks and scripting, along with DSC to provide idempotent configurations that can be applied and executed only if there has been a change on the server that needs correction. Microsoft provides DSC resources, such as xWebAdministration, for quickly developing configurations that need to be applied to one or multiple servers, along with instructions on how to create custom resources. This fits in nicely with Habitat, allowing you to define a desired state for your applications. Plan variables (from plan.ps1) are used to update the templated PowerShell script with the correct values:

Configuration NewWebsite
{
    Import-DscResource -Module xWebAdministration
    Node 'localhost' {
        WindowsFeature ASP {
            Ensure = "Present"
            Name   = "Web-Asp-Net45"
        }
        xWebAppPool {{cfg.app_pool}}
        {
            Name   = "{{cfg.app_pool}}"
            Ensure = "Present"
            State  = "Started"
        }
        xWebsite {{cfg.site_name}}
        {
            Ensure          = "Present"
            Name            = "{{cfg.site_name}}"
            State           = "Started"
            PhysicalPath    = Resolve-Path "{{pkg.svc_path}}"
            ApplicationPool = "{{cfg.app_pool}}"
            BindingInfo     = @(
                MSFT_xWebBindingInformation
                {
                    Protocol = "http"
                    Port     = {{cfg.port}}
                }
            )
        }
        xWebApplication {{cfg.app_name}}
        {
            Name       = "{{cfg.app_name}}"
            Website    = "{{cfg.site_name}}"
            WebAppPool =  "{{cfg.app_pool}}"
            PhysicalPath = Resolve-Path "{{pkg.svc_var_path}}"
            Ensure     = "Present"
        }
    }
}
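
Outside of Habitat, the same kind of configuration can be compiled and applied with the standard DSC cmdlets. The sketch below assumes the rendered script has been saved as website.ps1 with concrete values substituted for the handlebars placeholders, and the output path is a hypothetical example:

```powershell
# Dot-source the rendered configuration script to load the NewWebsite keyword
. .\website.ps1

# Compile the configuration to a MOF document for the localhost node
NewWebsite -OutputPath C:\dsc\NewWebsite

# Apply the compiled configuration; -Wait and -Verbose stream progress inline
Start-DscConfiguration -Path C:\dsc\NewWebsite -Wait -Verbose -Force

# Returns True when the node already matches the desired state, so
# re-running the apply is a no-op -- this is DSC's idempotency in action
Test-DscConfiguration -Path C:\dsc\NewWebsite
```

Habitat's dsc-core module, used in the run hook later in this post, wraps this same compile-and-apply flow for use from PowerShell Core.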

 

Defining Database Connection Logic

To connect to the database, the steps below must be executed to create the configuration for connecting to the SQL Server.

In the next section, Life cycle Event Handlers (Hooks), the completed code is shown.

The PowerShell code below copies the Web.config into the Habitat config directory, removes the Web.* files from the project, and strips their entries from the csproj file so the templated copy can be updated with the appropriate connection string:

Copy-Item '.\C#\ContosoUniversity\Web.config' .\habitat\config
Remove-Item '.\C#\ContosoUniversity\Web*.config'
$xml = [xml](Get-Content '.\C#\ContosoUniversity\ContosoUniversity.csproj')
$nodes = $xml.Project.ItemGroup.Content | Where-Object { $_.Include -like "Web.*" }
$nodes | ForEach-Object { $_.ParentNode.RemoveChild($_) }
$f = Resolve-Path '.\C#\ContosoUniversity\ContosoUniversity.csproj'
$xml.Save($f)

web.config

<connectionStrings>
    <add name="SchoolContext" connectionString="Data Source={{bind.database.first.sys.ip}},{{bind.database.first.cfg.port}};Initial Catalog=ContosoUniversity2;User ID={{bind.database.first.cfg.username}};Password={{bind.database.first.cfg.password}};" providerName="System.Data.SqlClient" />
</connectionStrings>

The templatized web.config will need to be updated during the init hook with the code below:

Init Hook Update

Set-Location {{pkg.svc_path}}\var
New-Item -Name Web.config -ItemType SymbolicLink -target "{{pkg.svc_config_path}}/Web.config" -Force | Out-Null

One last step is needed. The run hook needs to be able to update the permissions of the web.config file. Add the code below to the run hook:

Run Hook Update

Import-Module "{{pkgPathFor "core/dsc-core"}}/Modules/DscCore"
Start-DscCore (Join-Path {{pkg.svc_config_path}} website.ps1) NewWebsite
$pool = "{{cfg.app_pool}}"
$access = New-Object System.Security.AccessControl.FileSystemAccessRule "IIS APPPOOL\$pool", "ReadAndExecute", "Allow"
$acl = Get-Acl "{{pkg.svc_config_path}}/Web.config"
$acl.SetAccessRule($access)
$acl | Set-Acl "{{pkg.svc_config_path}}/Web.config"
try{
    ...

Life cycle Event Handlers (Hooks)

On Windows, PowerShell Core is used in the Habitat Plan to implement event-driven hooks which occur throughout the life cycle of applications/services. In our example, we will focus on the init and run hooks (the most common).

The init hook executes when the application package is initially installed and can be used to ensure certain files are available or configuration items are in place:

Init Hook

Set-Location {{pkg.svc_path}}
if (Test-Path var) {
    Remove-Item var -Recurse -Force
}
New-Item -Name var -ItemType Junction -target "{{pkg.path}}/www" | Out-Null
Set-Location {{pkg.svc_path}}\var
New-Item -Name Web.config -ItemType SymbolicLink -target "{{pkg.svc_config_path}}/Web.config" -Force | Out-Null

The run hook executes after the init hook, either when the application package starts or is updated or when the package configuration changes. The run hook in our case is used to prepare the server for application installation and also to start the service itself. Again in our case, PowerShell DSC resources are made available. They are downloaded from the PowerShell Gallery, a public repository hosted by Microsoft, though they can also be downloaded from elsewhere. Permissions are also set for the IIS configuration in our run hook. Any arbitrary PowerShell code can be used here to configure the application:

Run Hook

# The Powershell Progress stream can sometimes interfere
# with the Supervisor output. Its non critical so turn it off
$ProgressPreference="SilentlyContinue"

# We need to install the xWebAdministration DSC resource.
# Habitat runs its hooks inside of Powershell Core but DSC
# configurations are applied in a hosted WMI process by
# Windows Powershell. In order for Windows Powershell to locate
# the installed resource, it must be installed using Windows
# Powershell instead of Powershell Core. We can use Invoke-Command
# and point to localhost to "remote" from Powershell Core to
# Windows Powershell.
Invoke-Command -ComputerName localhost -EnableNetworkAccess {
    $ProgressPreference="SilentlyContinue"
    Write-Host "Checking for nuget package provider..."
    if(!(Get-PackageProvider -Name nuget -ErrorAction SilentlyContinue -ListAvailable)) {
        Write-Host "Installing Nuget provider..."
        Install-PackageProvider -Name NuGet -Force | Out-Null
    }
    Write-Host "Checking for xWebAdministration PS module..."
    if(!(Get-Module xWebAdministration -ListAvailable)) {
        Write-Host "Installing xWebAdministration PS Module..."
        Install-Module xWebAdministration -Force | Out-Null
    }
}

# Leverage the Powershell Module in the dsc-core package
# that makes applying DSC configurations in Powershell
# Core simple.
Import-Module "{{pkgPathFor "core/dsc-core"}}/Modules/DscCore"
Start-DscCore (Join-Path {{pkg.svc_config_path}} website.ps1) NewWebsite

# The svc_config_path lacks an ACL for the USERS group
# so we need to ensure the app pool user can access those files
$pool = "{{cfg.app_pool}}"
$access = New-Object System.Security.AccessControl.FileSystemAccessRule `
"IIS APPPOOL\$pool",`
"ReadAndExecute",`
"Allow"
$acl = Get-Acl "{{pkg.svc_config_path}}/Web.config"
$acl.SetAccessRule($access)
$acl | Set-Acl "{{pkg.svc_config_path}}/Web.config"

# The run hook must run indefinitely or else the Supervisor
# will think the service has terminated and will loop
# trying to restart it. The above DSC apply starts our
# application in IIS. We will continuously poll our app
# and cleanly shut down only if the application stops
# responding or if the Habitat service is stopped or
# unloaded.
try {
    Write-Host "{{pkg.name}} is running"
    $running = $true
    while($running) {
        Start-Sleep -Seconds 1
        $resp = Invoke-WebRequest "http://localhost:{{cfg.port}}/{{cfg.app_name}}" -Method Head
        if($resp.StatusCode -ne 200) { $running = $false }
    }
}
catch {
    Write-Host "{{pkg.name}} HEAD check failed"
}
finally {
    # Add any cleanup here which will run after supervisor stops the service
    Write-Host "{{pkg.name}} is stopping..."
    ."$env:SystemRoot\System32\inetsrv\appcmd.exe" stop apppool "{{cfg.app_pool}}"
    ."$env:SystemRoot\System32\inetsrv\appcmd.exe" stop site "{{cfg.site_name}}"
    Write-Host "{{pkg.name}} has stopped"
}

Building the Package

The Habitat Studio is a cleanroom for building and testing your Habitat packages. On Windows, the Studio exposes the Windows system, core Habitat services, and the application source directories. The Studio will download required missing packages or update any pre-installed packages upon starting, so it may take a few minutes longer to start the first time. When using Docker to run the Habitat Studio, the underlying Windows containers will be pulled, which may also take time. Executing the build command within the Studio will gather package dependencies and source code (which may be installation binaries for COTS applications) and assemble a Habitat Artifact (HART) package for testing and distribution:

hab studio enter -W

or

$env:HAB_DOCKER_OPTS="--memory 2gb -p 80:8099"
hab studio enter -D
build

Testing the Habitat Package Locally

To test the package locally within the Habitat Studio, run the commands below. This will install and configure SQL Server 2017, IIS, ASP.NET, and our sample application. After loading the core/sqlserver package, we check the Habitat Supervisor log to ensure it is fully running before loading other dependent packages:

hab svc load core/sqlserver
Get-SupervisorLog
sqlserver.default hook[post-run]:(HK): 1> 2> 3> 4> 5> 6> Application user setup complete
hab svc load manny-rodriguez/contosouniversity --bind database:sqlserver.default

Open a browser to http://<local IP>/hab_app after seeing the following output in the Supervisor log:

contosouniversity.default(O): contosouniversity is running


Congratulations! You’re almost there. Finally, the Habitat package should be uploaded to the Builder Depot using the command below. You should point to the current build file to ensure you publish the latest changes:

hab pkg upload .\results\manny-rodriguez-contosouniversity-0.2.0-20190314110601-x86_64-windows.hart

Deploying to a Server

When your applications are packaged with Chef Habitat, the only installation requirement on your target servers is Chef Habitat itself. Chocolatey (or any other deployment method) can be used here again to install it. Habitat will also need to be configured as outlined in Workstation Setup. Start the Habitat Supervisor using hab sup run and execute the same commands used when testing locally to load your application. After the Supervisor is started, a new PowerShell prompt may need to be opened. A Windows service can also be used to run the Habitat Supervisor unattended, which we'll cover in a subsequent post:

hab sup run
hab svc load core/sqlserver
Get-SupervisorLog
sqlserver.default hook[post-run]:(HK): 1> 2> 3> 4> 5> 6> Application user setup complete
hab svc load manny-rodriguez/contosouniversity --bind database:sqlserver.default

Once again, open a browser to http://<server hostname or IP>/hab_app once the Supervisor log indicates your service is running.

Exporting Packages for Docker

Aside from deploying HART files directly to traditional server environments, Habitat packages can be exported to Docker containers or other run-time formats using hab pkg export. We illustrate this below using two packages: the core package for SQL Server 2017 and the one we just built for our sample application.

hab pkg export docker core/sqlserver
hab pkg export docker .\results\manny-rodriguez-contosouniversity-0.2.0-20190314110601-x86_64-windows.hart

In order for our application container to communicate with the SQL Server container, we need to note the IP address of the SQL Server container and feed this to the docker run command for our application. Following is some PowerShell code to capture the SQL Server container IP in a variable:

$sql = docker run -d --memory 2GB core/sqlserver
$ip = docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $sql
docker run -it -p 80:8099 manny-rodriguez/contosouniversity --peer $ip --bind database:sqlserver.default

Wrapping Up

In this blog post we took a modern ASP.NET application and packaged and deployed it with Chef Habitat. This included the prerequisites: installing the required software, creating the needed accounts, and setting up Chef Habitat. A Habitat Plan was created and modified for packaging and deployment of the ASP.NET application. We learned about the lifecycle event handlers (hooks) and how to use them to build the package and run the application. A local environment was used to build, package, test, and upload to the Builder Depot. Next, Docker containers were exported for SQL Server 2017 and our ASP.NET application. The Docker containers were started, and the applications and Windows features were installed and configured. Testing confirmed the ASP.NET application running.

The key takeaway is that applications can be quickly built, packaged, deployed, and managed using Chef Habitat. This changes the way applications are currently managed through their lifecycle, resulting in less time spent during the development cycle and quicker deployments, saving time and money.

In the next Blog Post, we will discuss packaging a Legacy ASP.NET application that uses a no longer supported version of SQL Server.

Perficient can help!

With Windows and SQL Server end-of-support happening beginning this year, now is the time to begin migrating those legacy applications with Habitat. This approach eliminates your dependencies on these legacy operating systems and helps you avoid costly support contracts.  We can also help you modernize your application development processes at the core, using an OS independent approach that makes your business more innovative and resilient for the future.  Let us know if we can help.

Chef Habitat – The Key to Self-Driving Applications https://blogs.perficient.com/2019/03/28/chef-habitat-the-key-to-self-driving-applications/ https://blogs.perficient.com/2019/03/28/chef-habitat-the-key-to-self-driving-applications/#respond Thu, 28 Mar 2019 14:02:48 +0000 https://blogs.perficient.com/?p=238071

One of the newest and most compelling technologies from Chef Software is a project known as Habitat. Much of the early messaging and use cases around Habitat have left some with the impression that it’s simply another way to build application containers. Habitat is in fact much more and could be quite disruptive in terms of how we are accustomed to managing applications. While Habitat does allow you to build a package with all application dependencies embedded, like a container, the difference is what occurs post-deployment. Habitat offers an everlasting environment for your applications where, in some sense, services can live, grow, and respond to other lifecycle events in a secure home regardless of the underlying operating system or target runtime. Applications are not re-deployed to a target in the traditional sense, yet they can still be updated as stable changes are promoted to a release. Applications and services become self-driving.

In this post, I offer a bit of background on Chef and lay out the overall objectives of Habitat, as well as outline some key aspects of the project that highlight why it is worth considering for any DevOps improvement initiative.

Chef and Infrastructure Automation

Chef is best known for its focus on continuous automation, enabling key capabilities like infrastructure as code, configuration management, and continuous compliance testing of infrastructure. Organizations write Chef cookbooks to standardize base server configurations, perform OS hardening, and apply updates at every layer of the stack. Chef is quite powerful and has always been a compelling solution: simply run chef-client periodically, execute a default run-list, and watch everything converge to a recipe. The issue perhaps is that there is so much installation and configuration that must occur before you are able to deploy your application, which is all we really care about in the end anyway. This is not a Chef problem; this is just the nature of application deployments as we have always known them. Therefore, we naturally write cookbooks in a very bottom-up approach and end up with lots of automation focused on the OS, middleware, and application dependencies.

Application Automation with Habitat

The first objective of Habitat is to challenge the infrastructure-first, application-later mindset. Enter the cleanroom, and explicitly declare what your application needs to run. The cleanroom is furnished with a simple set of base utilities: some Habitat binaries, bash, diffutils, less, make, mg, vim, file, the glibc bins, grep, gzip, openssl, sed, wget, and a handful of others (on Windows, the cleanroom is furnished with comparable Windows binaries). There are no preinstalled compilers or common dependencies, and the intent is to provide a reasonable minimum set of tooling. For a complete list of tools, enter the Habitat Studio and check the PATH environment variable. Ultimately, the cleanroom ensures that nothing is deployed to your servers that is not explicitly declared. Less bloat, tighter supply chain, and a very application-centric point of view. Meanwhile, your infrastructure only needs the bare minimum to run Habitat. This is roughly equivalent to FROM scratch in a Dockerfile and copying just a single application binary into the image, except with more flexibility. Suddenly, server buildout is drastically simplified and OS installations are minimal, since Habitat will ensure that everything your application needs is present at runtime (pulled from Habitat Builder). This is true even for Windows applications with hard dependencies on DLLs or registry configurations that are only present in older versions of Windows. Habitat truly doesn’t care about the underlying OS if the requirements to run Habitat are met, and this means you don’t need to concern yourself too much with the underlying OS either:

Habitat flips the traditional model of bottom-up application deployments and instead allows you to focus just on application automation. To reiterate, the first goal of Chef Habitat is to challenge the age-old mentality that you must first concern yourself with the OS or middleware to deploy your application. The second motive behind Habitat is to change the way software is managed post-deployment. A Habitat package is not the end state. Rather, Habitat altogether offers you an enhanced end state via the Habitat Builder (which includes the package Depot) and Supervisors. The culmination is “a comfy abode for your code” that can be perceived as a perpetual living arrangement for your application. Applications are packaged in a portable home that can be deployed to any runtime. Inside this home, your application is continuously monitored, able to reload/restart/update itself, and respond to other lifecycle events without intervention. To realize this perpetual, enhanced end state, all you need is a well-written Habitat Plan. The following image depicts an initial package build flow in Chef Habitat:

This image depicts a package build flow in Habitat.

The Habitat Plan (plan.sh or plan.ps1) is where everything starts. Application dependencies are declared in the Plan and various lifecycle events are also implemented here. Upon building a Habitat Plan, a Habitat Artifact file, or HART file, is produced. This is the artifact you should now store in a package repository. HART files can be deployed as-is to any server running Habitat. From a HART file, you can also export your application to other runtime formats: Docker, Kubernetes, Cloud Foundry, a TAR file, and others. After an initial deployment, the Habitat Builder services work closely with the Depot to execute lifecycle events for your application, based on notifications from the Habitat Supervisor which provides ongoing monitoring of your applications.
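
To make this concrete, here is a minimal sketch of what a Windows Plan might look like. The origin, package name, and dependency below are hypothetical placeholders for illustration, not taken from a real project:

```powershell
# plan.ps1 -- minimal sketch of a Habitat Plan for Windows
$pkg_origin = "myorigin"              # hypothetical origin
$pkg_name = "hello-svc"               # hypothetical package name
$pkg_version = "0.1.0"
$pkg_maintainer = "Your Team <team@example.com>"
$pkg_deps = @("core/dotnet-core")     # runtime dependencies resolved from Builder

function Invoke-Build {
    # Compile the sources that Habitat staged into the Studio cleanroom,
    # e.g. by invoking MSBuild or dotnet here
}

function Invoke-Install {
    # Copy the build output into $pkg_prefix, the package's install root
}
```

Building such a Plan in the Studio produces the HART file described above, which can then be deployed directly or exported to other runtime formats.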

Now we find ourselves in a position to deploy to our target of choice and let Habitat take over from there. As application updates become available, these versions can be built and pushed to the Builder Depot, then pulled down to our servers upon promotion to a stable channel. We no longer need to re-deploy applications in the typical sense. Instead, the application lifecycle is streamlined and managed for us, since we trust our Habitat Plan knows how to respond to different events (like a promotion). Chef Habitat is the key to self-driving applications.

Perficient Can Help

We live in a world where we deploy applications to infrastructure, then look for ways to monitor and automate our applications from there. Infrastructure generally has no first-class knowledge of your application. This is the ultimate goal of Chef Habitat: to package our applications once, deploy them with additional hooks and embedded intelligence, then get back to writing software. At Perficient, we are excited about Habitat and the opportunity to help organizations get started. Along with Chef Software, we want to improve the way organizations deploy and manage software in the wild, one application at a time.

Perficient Elevates Chef to Strategic Partner Status https://blogs.perficient.com/2019/01/17/chef-strategic-partner-status/ https://blogs.perficient.com/2019/01/17/chef-strategic-partner-status/#respond Thu, 17 Jan 2019 23:13:56 +0000 https://blogs.perficient.com/?p=234961

Perficient is excited to announce that Chef is moving to a Strategic Partnership.

Chef will join other distinguished partners on this list, including Google and AWS. These companies represent continually high levels of success and achievement at Perficient.

Graphic logo of Chef as Perficient Strategic Partner

Chef is a leader in secure, continuous automation open source and commercial software. Their elevation to Strategic Partner results from their increasing momentum and innovation in the market. We recognize Chef as a company that has the potential for significant growth opportunities. In addition, we recognize them for maintaining technology capabilities that can help us better serve our clients’ unique needs.

Jason Krech, Director of DevOps Consulting at Perficient, expressed his excitement around the expanding partnership. “Our work with Chef has continued to accelerate over the course of our two year partnership – we are excited to partner with Chef as a key services partner, as well as take Chef embedded solutions to clients as part of our DevOps solutions.”

As a services partner to Chef, we are also continuing to invest in on-going training and certification of our existing team of Chef-certified practitioners, as well as investing in hiring in this space.

This elevated partnership better positions us to continue achieving our goal of providing more clients with end-to-end solutions to automate their entire software delivery cycles. We are able to better leverage our core competencies, enhance and extend our service offerings, and strengthen our deep cross-platform experience and expertise.

Learn more about our Chef expertise and how we can help you begin your DevOps journey with Chef here.

 

Developing PaaS Using Step Functions and Hashicorp Tools https://blogs.perficient.com/2018/11/19/developing-paas-using-step-functions-and-hashicorp-tools/ https://blogs.perficient.com/2018/11/19/developing-paas-using-step-functions-and-hashicorp-tools/#respond Mon, 19 Nov 2018 18:05:26 +0000 https://blogs.perficient.com/?p=233481

Introduction:

Cloud tools now give DevOps the ability to deliver cloud infrastructure alongside the applications deployed on it. Did I just say build a PaaS solution? Commercial PaaS solutions like OpenShift and Pivotal Cloud Foundry can be expensive and require specialized skills. They do speed up development and keep your enterprise cloud adoption vendor agnostic. However, adopting them calls for a strategic shift in the way your organization does application development. All good with this approach, it just takes time – POC, POV, road show, and then a decision. While PaaS solutions are great, another alternative is to use individual AWS services alongside open source tools that can help provision, secure, run, and connect cloud computing infrastructure.

Operating knowledge of these tools, and the ability to orchestrate them in a cohesive workflow, can help your DevOps team do continuous deployment on cloud infrastructure with results similar to commercial PaaS solutions. This solution is economical and manageable without hiring specialized skill sets. Why no specialized skill set? Because your development team already has the skills to build “Castles in the Cloud”. While these are conceptualized as solutions, the end result is a full-blown product with its own governance and management lifecycle. It can easily be integrated with the application delivery pipeline. Moreover, the solution provisions immutable EC2 instances that capture log information for monitoring and debugging. The underlying belief driving this approach: “Complete automation, seamless integration using non-commercial tools and services”.

Solution:

At first, it appears that the solution lies in Elastic Beanstalk. Though Beanstalk produces immutable infrastructure, it has certain drawbacks when it comes to encrypting configuration and log data during infrastructure provisioning. This could pose a challenge to organizations that operate in a highly regulated industry. As such, the requirements to push service logs to an encrypted S3 bucket, to make the AMI generation process configuration driven, and to automate the monitoring and auditing of infrastructure call for a custom, comprehensive, configuration-driven solution. Moreover, highly regulated industries like finance and healthcare require complete encryption of transitive and logged data.

Cloud infrastructure automation can be broken into five key processes:

  • Pre Provision
  • Bakery
  • Provision
  • Validation
  • Post Provision

Consider the above processes as individual workers, each trying to accomplish a fixed and independent task. AWS Step Functions can easily orchestrate the workflow among these individual (activity) workers and can be configured to build a comprehensive, configuration-driven, and dynamic infrastructure provisioning process. With Step Functions, the above five processes become individual states that are executed in chronological order. A process remains in a particular state until the activity worker completes its activity. A state machine is set up to pass control between states, which internally execute activity workers built using Lambda functions.
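
In Amazon States Language, chaining the five states looks roughly like the sketch below. The activity ARNs are hypothetical placeholders; a real definition would point at the account's registered activities or Lambda functions:

```json
{
  "Comment": "Infrastructure provisioning workflow (sketch)",
  "StartAt": "PreProvision",
  "States": {
    "PreProvision": {
      "Type": "Task",
      "Resource": "arn:aws:states:us-east-1:111122223333:activity:pre-provision",
      "Next": "Bakery"
    },
    "Bakery": {
      "Type": "Task",
      "Resource": "arn:aws:states:us-east-1:111122223333:activity:bakery",
      "Next": "Provision"
    },
    "Provision": {
      "Type": "Task",
      "Resource": "arn:aws:states:us-east-1:111122223333:activity:provision",
      "Next": "Validation"
    },
    "Validation": {
      "Type": "Task",
      "Resource": "arn:aws:states:us-east-1:111122223333:activity:validation",
      "Next": "PostProvision"
    },
    "PostProvision": {
      "Type": "Task",
      "Resource": "arn:aws:states:us-east-1:111122223333:activity:post-provision",
      "End": true
    }
  }
}
```

Each Task state blocks until its activity worker reports success or failure, which is what keeps the workflow in a given state "till the activity worker completes its activity."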

A quick summary of each process/state:

  • Pre Provision – This is the first stage of the process and is triggered by the application’s CI pipeline. Most enterprise CI pipelines are built using CI tools like Jenkins. The pipeline sends a notification to an SNS topic, and a Lambda function subscribed to the topic then triggers the Step Function execution. In this step, the activity gathers pertinent information from an application configuration file. It combines this information with process-specific configuration and environment-related information received from the pipeline trigger. It then encrypts this information and saves it to an encrypted EC2 parameter store. The application configuration file is generated by the application development teams using a rule-based UI that restricts access to AWS services as per application needs.

 

  • Bakery – This process is the heart of the automation solution and is the next state after Pre Provision. It uses tools like Packer, InSpec, Chef, and the AWS CloudWatch agent. The state calls a Lambda activity worker that executes an SSM command. The command starts a Packer build running on a separate EC2 instance. Packer pulls all the relevant information required for the build from the encrypted EC2 parameter store and starts the build, using Chef to layer application, middleware, and other dependencies onto the AMI. After the Packer build, the application-specific AMI is encrypted and shared with the application AWS account owner for provisioning.

 

  • Provision – Once the AMI is ready and shared with the application account owner, the next state in the automation process is Provision. This state calls a Lambda activity worker, which executes another SSM command that runs Terraform modules to provision the following: an ALB, a Launch Configuration referencing the AMI baked in the previous state, and an ASG to supplement elasticity. At the end of this state, the entire application AWS physical architecture is up and running, and one should be able to use the ALB DNS name to connect to the application. SSH access is removed to keep instances immutable.

 

  • Validation – Validation is the next stage in the process. After the infrastructure is provisioned, automated InSpec validation scripts validate the OS and services provisioned. This phase too is invoked by a Lambda activity worker. InSpec logs are moved to an encrypted S3 bucket from where they are sourced to the testing team to review and log defects as necessary. These defects are then triaged and assigned to respective teams.

 

  • Post Provision – This is the last state in the process, where the newly provisioned infrastructure undergoes a smoke test before it is delivered to the application/testing teams. This state configures the EC2-based CloudWatch logs with an encrypted S3 bucket. From S3, the logs are exported into log management tools like Splunk, where the operations team can build monitoring dashboards. Moreover, in this step, all AWS services provisioned, along with the application ID, are stored in a DynamoDB table for logging and auditing purposes. Lastly, this stage also initiates blue-green deployments for a smoother transition to the new release.

The above infrastructure automation process nukes and paves the infrastructure using AWS services. A new release or an update to the base SOE image triggers the execution of the automation process. It can significantly improve the efficiency of deploying applications on AWS, greatly reduces EC2 provisioning time, and can bring down your AWS operating costs over time. Though custom, these automation solutions are complex and require deep knowledge of cloud-native services and the tools that help build infrastructure through code. Perficient’s Cloud Platform Services team is adept in such solutions and can help your organization look past the “pet EC2 instances” world. If you’re interested in learning more, reach out to one of our specialists at sales@perficient.com and download our Amazon Web Services guide for additional information.

The Secret Ingredient to Digital Transformation: Your People https://blogs.perficient.com/2018/05/02/the-secret-ingredient-to-digitally-transforming-your-business/ https://blogs.perficient.com/2018/05/02/the-secret-ingredient-to-digitally-transforming-your-business/#respond Wed, 02 May 2018 13:00:09 +0000 https://blogs.perficient.com/?p=206283

Digital innovation continues to disrupt industries at lightning speed. Today’s organizations are transforming their entire business – from strategy to operations, technology to culture – to better deliver value to their customers. In 2017, we compiled the top 10 trends leaders needed to know when it came to their digital transformation journey. In this 10-week blog series, we’ll further explore each trend and address how you can continue to modernize your business for success.

Think about your last experience at a high-end restaurant – especially one that has an open kitchen. This set-up gives you a peek at what happens behind the scenes. You can’t help but watch with fascination as the team works to carefully orchestrate and deliver dishes made to absolute perfection. Beyond the kitchen, there’s another team striving to deliver the best possible experience from the minute you walk into the restaurant to the moment you leave.

Some of the world’s most highly-rated restaurants are run by notable chefs like Gordon Ramsay, Emeril Lagasse, and Bobby Flay. These top chefs possess the essential skills to build and run successful restaurant empires.

In a recent article I read, “8 Skills that Make You a Chef (or Just about Any Other Biz Leader),” I noticed that some skills closely mirror those needed to be a successful digital business leader, including:

  • Have a vision – “One of the most important things [a leader] does is see things that don’t yet exist, and find ways to bring them to life.”
  • Think and operate systematically – “Always look to make things more efficient and productive without sacrificing quality.”
  • Set and strive for a standard of excellence – “From company culture, to cleanliness, to customer service, to plate presentation, it all starts with the chef. If you don’t have time to do it right, when will you have time to do it over?”
  • Be supportive, yet empathetic – “You need [the ability] to manage people, make them feel appreciated and valued, and that you have their back. Everything else may be in place, but without a solid, dependable team behind [your] mission, [you] have nothing.”

This last point is especially applicable if you’re wanting to digitally transform your organization. After all, digital business transformation isn’t solely about the front-end customer experience, the technology, or optimizing operations.

There’s so much more because it’s a cultural shift. And, what does it take to implement a cultural shift? People.

Prioritizing Your People

Bringing digital transformation to life involves an entire company supporting the changes to be made.

If your company is like most, change is feared rather than revered. Culture and organization are typically the biggest obstacles to a digital transformation.

Here are a few examples of how some businesses perceive digital transformation. Do any sound familiar?

  1. Let’s transfer our investment from traditional marketing to digital marketing.
  2. We need to shift our investment from brick-and-mortar to a digital store.
  3. We need to foster innovation to stay ahead of the competition.
  4. We need a customer-facing experience that forces individuals to change how they interact, track, and measure customer touch points.
  5. Let’s open up organizational data and systems to third parties via APIs or digital hooks.

Ignoring cultural and organizational challenges only diminishes your return on investment and will ultimately stall your digital transformation. The key to overcoming them is to make sure you’re prioritizing your people when planning and executing this change.

Thinking back to our earlier example: if you want your organization to run like an acclaimed restaurant, everyone on board – from the maître d’ and wait staff to the chefs and dishwashers – must work towards the same vision.

As a digital business leader, when you appreciate your people and value their roles, you receive collective commitment in return. This mutual appreciation makes it possible to endure any stresses that stem from wide-reaching change.

Getting Your People to Embrace Change

Change is hard. Humans naturally resist it. So, when your people hear “digital transformation,” they understand that it’s a profound shift from one stage of existence to another. Each step of the digital transformation process requires buy-in across the organization, not just within specific offices or departments.

Two-thirds of all enterprise-level projects fail to meet business objectives, bearing little or no ROI, because of poor adoption techniques rather than inadequate technology.

Bringing a transformation to life requires careful attention to the way change is managed. To ensure digital transformation success, many businesses include organizational change management (OCM) as part of the process.

OCM is a structured approach for getting people ready, willing, and able to accept and embrace new ways of working that are critical to future-state performance. A good strategy motivates willing individuals, encourages those who have doubts, and aligns the motivation and encouragement to the implementation.

Transformation occurs one person at a time, especially at the enterprise level. For more on this topic, listen to our on-demand webinar, Why Organizational Change Management is Critical to Digital Transformation.

Avoiding Common Change Management Mistakes

Investing in organizational change management – and a commitment to getting it right – might be the most critical move your company can make to ensure digital transformation success.

One thing I’ve learned: change management isn’t easy. It takes a lot of effort, and obstacles can (and often do) arise at any stage of a large project, such as a new website or technology implementation.

– David Chapman, General Manager for Organizational Change Management, Perficient

In working with clients, we’ve observed common mistakes to successful change management. Here are the top five:

  1. Assuming change management is “just communication and training”
  2. Putting off a change management plan
  3. Lacking active and visible executive sponsorship
  4. Treating all stakeholders the same
  5. Underestimating the amount of work involved

Stumbling at any one of these mistakes during a large project or implementation can hinder your employees’ buy-in to embracing a new process or using a new solution. And, lack of user adoption can cause any project – or strategic initiative – to fail.

Learn more about how to correct these common missteps in: How to Overcome 5 Change Management Mistakes.

Getting Started with Organizational Change Management

Similar to chefs adding alcohol to a dish and creating a flambé, digital transformations temporarily cause a commotion or distraction within your organization. It will take time until the new system takes hold and gains momentum.

Change management strategies provide clarity to the levels of individual and organizational commitment needed for successful digital transformations. Where do you start? The keys to success include:

  1. Recognizing you need to change
  2. Starting at the top by engaging the C-suite to gain their buy-in and support
  3. Involving the CFO to help you build a case for change from a budgetary standpoint and show how it benefits the company
  4. Engaging the CIO to help implement this change more quickly
  5. Establishing a team/board/committee (cross-functional and regions) to make this effective

From Vision to Reality

Digital transformations are not quick fixes. They are expansive and a cultural shift from what employees consider “business as usual.”

When change management is done well, it’s a beautiful thing. People are engaged with the right messages, in the right ways, at the right times. They understand why the change is being implemented. They buy into what the new system or process means to them, why they should care, and what’s in it for them.

Change management increases the probability of staying on schedule and budget. This results in higher benefit retention and ROI, and ultimately, true digital transformation success.


Do you want to read more about the top ten digital transformation trends?

Click here to read about trend four: Mastering the Art of Data and Analytics to Transform Customer Experience

]]>
https://blogs.perficient.com/2018/05/02/the-secret-ingredient-to-digitally-transforming-your-business/feed/ 0 206283
AWS OpsWorks for Chef Automate https://blogs.perficient.com/2018/02/23/aws-opsworks-for-chef-automate/ https://blogs.perficient.com/2018/02/23/aws-opsworks-for-chef-automate/#respond Fri, 23 Feb 2018 18:49:55 +0000 https://blogs.perficient.com/integrate/?p=5638

CIOs expect to shift 21% of their company’s applications to a public cloud this year, and 46% by 2020, according to a report by Morgan Stanley.

Intro

Recently, I attended a webinar on “Cloud Migration,” a joint presentation by folks from AWS and Chef. It touched on two key areas – “migration to cloud” and “developing DevOps simultaneously.” They demonstrated how Chef can be used to migrate, monitor, secure, and automate application development in a hybrid environment. Why hybrid? Because that is what many smart companies do: maintain a hybrid environment to minimize availability risk.

Large organizations are slowly but steadily evaluating cloud adoption. Architecture teams are gradually modifying the organization’s Enterprise Reference Architecture, reflecting their willingness to investigate cloud technologies. The choices for migration are either infrastructure-centric or application-centric. Economic benefits drive infrastructure migration, whereas cloud-native architectures drive application migration. This post covers a brief look at Chef Automate for infrastructure-centric migration. I use the words AWS and cloud interchangeably in my posts, primarily because of my experience in the AWS space.

Infrastructure-Centric Migration

As a Solutions Architect, the foremost question I encounter when planning for cloud migration is: how do I adopt a public cloud with minimal or no impact on my existing, and rather healthy, application development? Chef Automate in AWS OpsWorks appears to be a good answer.

Some organizations already have an on-premises Chef installation. The easiest way for them to start with infrastructure-centric cloud migration is to spin up an EC2 instance in the cloud (security and networking setup implied), bootstrap the new EC2 instance to the in-house Chef server, and attach the existing run-list of required recipes to the instance. That is it! Your native Chef server will now treat this new node like any other instance in your organization’s network. It will push cookbooks and recipes to this new node as it has been doing to the existing ones. What did we achieve with this simple spinning up and bootstrapping of an EC2 instance? Our first step onto the cloud without any impact on the existing DevOps process. Once the EC2 node is tested for stability and performance, more EC2 instances can then replace the in-house instances. Hence comes a gradual migration to the cloud through DevOps.
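
A minimal sketch of that bootstrap step, assuming a Linux EC2 instance and a workstation already configured with knife credentials (the IP address, node name, and run-list below are hypothetical, and the command is echoed as a dry run rather than executed):

```shell
# Hypothetical values -- replace with your instance's details.
NODE_IP="10.0.1.25"
NODE_NAME="ec2-app-node-01"
RUN_LIST="recipe[base],recipe[my_app]"

# Register the new EC2 instance with the existing in-house Chef server
# and hand it the same run-list the on-premises nodes use.
# Drop the echo to execute for real.
echo knife bootstrap "$NODE_IP" \
  --ssh-user ec2-user \
  --sudo \
  --node-name "$NODE_NAME" \
  --run-list "$RUN_LIST"
```

After the first chef-client converge, the node shows up in the Chef server’s node list like any other.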

For organizations that do not have an on-premises Chef installation (setting one up requires a specialized skill set), a simpler way is to proceed with AWS OpsWorks for Chef Automate. It is a fully managed Chef server that has all the goodies of a rich Chef installation, including but not limited to workflow automation, compliance checking, and monitoring. It takes between 5 and 10 minutes to set up the server, and you get a choice of server instance size based on the number of projected nodes. Default security credentials to log onto the Chef Automate server and a sample test Chef repository are made available through the console. The test repository has the required directory structure built into it, which spares some time for more meaningful work.

Chef Automate is fully compatible with the Chef Supermarket, where most commonly used cookbooks can be found. You can download and modify them for your application’s deployment needs, or generate a new one and code it yourself; that does, however, require some knowledge of Ruby and JSON. Once the server is up and running, you can bootstrap both the on-premises and EC2 instances to this server. Now this is a more confident and bigger step towards infrastructure-centric cloud migration. After your hybrid Chef configuration is in place, you can set up a DevOps workflow to automate your application deployment.
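
Standing up such a server can itself be scripted; a hedged sketch using the AWS CLI (the server name, instance type, and ARNs are placeholders, and the command is echoed as a dry run rather than executed):

```shell
# Placeholder identifiers -- substitute your own account's values.
SERVER_NAME="my-chef-automate"
INSTANCE_PROFILE_ARN="arn:aws:iam::123456789012:instance-profile/aws-opsworks-cm-ec2-role"
SERVICE_ROLE_ARN="arn:aws:iam::123456789012:role/aws-opsworks-cm-service-role"

# Create a managed Chef Automate server; pick the instance type
# based on the number of nodes you project it will manage.
echo aws opsworks-cm create-server \
  --engine ChefAutomate \
  --server-name "$SERVER_NAME" \
  --instance-type m5.large \
  --instance-profile-arn "$INSTANCE_PROFILE_ARN" \
  --service-role-arn "$SERVICE_ROLE_ARN"
```

The console wizard drives the same API underneath, so either path lands you on the same managed server.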

Compliance is another good feature that comes out of the box with Chef Automate. CIS benchmarks can be downloaded and configured with the Chef server to help evaluate each node’s security profile. The ultimate result: “instance hardening.” Who loves to be hacked anyway!
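
As an illustration of what such a compliance scan looks like from the command line with InSpec, the tool behind Chef Automate’s compliance reports (the target host, key path, and choice of the dev-sec baseline profile are assumptions, and the command is echoed as a dry run rather than executed):

```shell
# Hypothetical target and profile -- substitute your own.
TARGET="ssh://ec2-user@10.0.1.25"
KEY_FILE="$HOME/.ssh/mykey.pem"
PROFILE="https://github.com/dev-sec/linux-baseline/archive/master.tar.gz"

# Scan the node against a hardening baseline over SSH.
# Drop the echo to run the scan for real.
echo inspec exec "$PROFILE" -t "$TARGET" -i "$KEY_FILE"
```

Each failed control in the report points at a specific hardening gap on that node.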

Summary

In short, migration to the cloud is a first step in a totally new direction. With it comes anxiety, and no matter how adept the teams are, a little professional help to mitigate risks is always welcome. At Perficient we continue to monitor cloud technologies and trends. We understand the challenges in embracing cloud technologies and hence have come up with proven cloud-based solutions, platforms, architectures, and methodologies to aid a smoother migration. If you’re interested in learning more, please reach out to one of our specialists at sales@perficient.com and download our Amazon Web Services guide for additional information.

]]>
https://blogs.perficient.com/2018/02/23/aws-opsworks-for-chef-automate/feed/ 0 196524
Reach the Peak of Innovation with DevOps https://blogs.perficient.com/2017/11/09/reach-the-peak-of-innovation-with-devops/ https://blogs.perficient.com/2017/11/09/reach-the-peak-of-innovation-with-devops/#respond Thu, 09 Nov 2017 17:00:55 +0000 https://blogs.perficient.com/integrate/?p=4875

According to research firm Gartner, IT spending will total $86.4 billion by the end of this year, with much of the investment going into initiatives including DevOps. Organizations that leverage tools within the DevOps toolchain (including Sonatype, CloudBees, Chef, Ansible, Amazon Web Services, and Pivotal) experience positive business outcomes, from money saved to easier collaboration to accelerated time to market.

We have seen success with DevOps across our clients, which include leading institutions in healthcare, media, automotive, and retail. To share the impact, we created a video that clearly explains the benefits of DevOps, which you can view below.

If you’re interested in learning about how DevOps can positively impact your organization, speak to one of our specialists at sales@perficient.com and download our guide below for additional insights and best practices.

]]>
https://blogs.perficient.com/2017/11/09/reach-the-peak-of-innovation-with-devops/feed/ 0 196468