DevOps Articles / Blogs / Perficient (https://blogs.perficient.com/tag/devops/)

Automate the Deployment of a Static Website to an S3 Bucket Using GitHub Actions
https://blogs.perficient.com/2025/03/05/automate-the-deployment-of-a-static-website-to-an-s3-bucket-using-github-actions/ (Wed, 05 Mar 2025 06:43:31 +0000)

Automating deployments is crucial for efficiency and reliability in today’s fast-paced development environment. GitHub Actions provides a seamless way to implement CI/CD pipelines, allowing developers to automate the deployment of static websites without manual intervention.

In this blog, we will explore how to deploy a static website to an AWS S3 bucket using GitHub Actions. We’ll cover setting up an S3 bucket, configuring IAM roles for secure authentication, and leveraging GitHub Actions workflows to streamline deployment. By the end, you’ll have a fully automated pipeline that ensures quick and secure deployments with minimal effort.

Prerequisites

  1. Amazon S3 Bucket: Create an S3 bucket and enable static website hosting.
  2. IAM User & Permissions: Create an IAM user with access to S3 and store credentials securely.
  3. GitHub Repository: Your static website code should be in a GitHub repository.
  4. GitHub Secrets: Store AWS credentials in GitHub Actions Secrets.
  5. Amazon EC2: Launch an EC2 instance to host a self-hosted GitHub Actions runner.

Deploy a Static Website to an S3 Bucket

Step 1

First, create a GitHub repository. I have already created one with the same name, which is why GitHub shows that it already exists.

[Screenshot: creating the GitHub repository]

 

 

Step 2

You can clone the repository from the URL below to your local system. I have already added the website code to my GitHub repository, so you just need to clone it: https://github.com/Kunal2795/Static-Website.git.

 

Step 3

Update the code with your changes, such as the bucket name and AWS region, and push it to your repository to host the static website. I already have the code locally, so it just needs to be pushed using the Git commands below:

[Screenshot: Git commands used to push the code]
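For reference, a typical sequence for this step looks like the commands below; the remote name origin and the branch name main are assumptions based on a default GitHub setup, so adjust them to match your repository:

git add .
git commit -m "Update bucket name and AWS region"
git push origin main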

Step 4

Once the changes are pushed to your GitHub repository, ensure the main.yaml file is in the .github/workflows directory.

[Screenshot: main.yaml inside the .github/workflows directory]

If the main.yaml file is not present in the .github/workflows/ directory, create it and add a job to run the static website pipeline in GitHub Actions. The main.yaml file is the primary configuration file that GitHub Actions uses to run the entire pipeline.

Add the following job code to the main.yaml file in the .github/workflows/ directory:

name: Portfolio Deployment2

on:
  push:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: [self-hosted, silver]
    steps:
      - name: Checkout
        uses: actions/checkout@v1

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2

      - name: Deploy static site to S3 bucket
        run: aws s3 sync . s3://kc-devops --delete

 

You need to make a few modifications to the above job, such as:

  • runs-on – Specify either a self-hosted runner or a GitHub-hosted runner (I have used a self-hosted runner).
  • aws-access-key-id – Reference the secret that stores your Access Key ID (adding the value to GitHub Actions secrets is shown below).
  • aws-secret-access-key – Reference the secret that stores your Secret Access Key (adding the value is shown below).
  • aws-region – Set the AWS region of your S3 bucket.
  • run – Update the S3 bucket path to the bucket where you want to store your static website code.

How to Create a Self-hosted Runner

Launch an EC2 instance with Ubuntu OS using a simple configuration.

[Screenshot: launching the EC2 instance]

After that, configure a self-hosted runner using the commands GitHub provides. To get these commands, go to your repository's Settings in GitHub, navigate to Actions > Runners, and then select New self-hosted runner.

Select Linux as the runner image.

[Screenshots: self-hosted runner setup commands shown in the GitHub UI]
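The commands GitHub generates for a Linux runner follow the general pattern below. The runner version and the registration token shown in the GitHub UI change over time, so treat the values here as placeholders and copy the exact commands from the Runners page:

mkdir actions-runner && cd actions-runner
curl -o actions-runner-linux-x64-<version>.tar.gz -L https://github.com/actions/runner/releases/download/v<version>/actions-runner-linux-x64-<version>.tar.gz
tar xzf ./actions-runner-linux-x64-<version>.tar.gz
./config.sh --url https://github.com/<owner>/<repo> --token <registration-token>
./run.sh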

Run the above commands step by step on your EC2 server to download and configure the self-hosted runner.

[Screenshots: downloading and configuring the runner on the EC2 server]

Once the runner is downloaded and configured, check its status in GitHub; it should appear as Idle. If it shows Offline, start the GitHub runner service on your EC2 server.

Also, ensure that AWS CLI is installed on your server.

[Screenshot: runner status and AWS CLI check]
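If you need to run the runner as a background service and confirm the AWS CLI, the commands below cover both; the service scripts ship with the runner package, and the AWS CLI v2 installer is the standard one from AWS (paths assume a typical Ubuntu setup):

# Run the runner as a service so it keeps working after you disconnect
sudo ./svc.sh install
sudo ./svc.sh start
sudo ./svc.sh status

# Install and verify AWS CLI v2
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version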

IAM User

Create an IAM user and grant it full access to EC2 and S3 services.

[Screenshot: IAM user with EC2 and S3 access]

Then, go to Security credentials, create an access key, and securely copy and store both the Access Key ID and Secret Access Key in a safe place.

[Screenshot: creating the access key]

 

Next, navigate to GitHub Actions → Secrets & Variables → Actions, then add your AWS Access Key ID and Secret Access Key securely.

[Screenshot: AWS credentials stored as GitHub Actions secrets]

After adding the Access Key ID and Secret Access Key, proceed to the next section: S3.

Create an S3 bucket—I have created one with the name kc-devops.

[Screenshot: the kc-devops S3 bucket]

Add the policy below to your S3 bucket and update the bucket name with your own bucket name.

[Screenshot: the S3 bucket policy]
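The exact policy used in this walkthrough is only visible in the screenshot, so here is a commonly used bucket policy for public static website hosting as a representative example. It assumes the kc-devops bucket name from this post; replace it with your own bucket name:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::kc-devops/*"
    }
  ]
}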

After setting up everything, go to your repository, open the main.yaml file, update the bucket name, and commit the changes.

Then, click the Actions tab to see all your triggered workflows and their status.

[Screenshot: triggered workflows in the Actions tab]

We can see that all the steps for the build and deploy jobs have been successfully completed.

[Screenshot: completed build and deploy steps]

Lastly, sign in to the AWS Management Console and open the Amazon S3 console. Confirm that all the website files are stored in your bucket.

[Screenshot: website files in the S3 bucket]

Then, go to the Properties tab. Under Static website hosting, find and click the endpoint URL (the bucket website endpoint).

This Endpoint URL is the Amazon S3 website endpoint for your bucket.

[Screenshot: static website hosting endpoint in the Properties tab]

Output

Finally, we have successfully deployed and hosted a static website on the Amazon S3 bucket using an automated pipeline.

[Screenshot: the deployed static website]

Conclusion

With this setup, whenever you push changes to your GitHub repository, GitHub Actions automatically triggers the deployment process. This ensures that your static website is seamlessly updated and deployed to your AWS S3 bucket without any manual intervention. This automation streamlines the deployment workflow, making it more efficient and less error-prone.

 

Install Sitecore Hotfixes on Azure PaaS with Azure DevOps Pipeline
https://blogs.perficient.com/2025/02/17/install-sitecore-hotfixes-on-azure-paas-with-azure-devops-pipeline/ (Mon, 17 Feb 2025 21:47:29 +0000)

Why Automate Sitecore Hotfix Deployment to Azure PaaS?

Sitecore frequently releases hotfixes to address reported issues, including critical security vulnerabilities or urgent problems. Having a quick, automated process to apply these updates is crucial. By automating the deployment of Sitecore hotfixes with an Azure DevOps pipeline, you can ensure faster, more reliable updates while reducing human error and minimizing downtime. This approach allows you to apply hotfixes quickly and consistently to your Azure PaaS environment, ensuring your Sitecore instance remains secure and up to date without manual intervention. In this post, we’ll walk you through how to automate this process using Azure DevOps.

Prerequisites for Automating Sitecore Hotfix Deployment

Before diving into the pipeline setup, make sure you have the following prerequisites in place:

  1. Azure DevOps Account: Ensure you have access to Azure DevOps to create and manage pipelines.
  2. Azure Storage Account: You’ll need an Azure Storage Account to store your Sitecore WDP hotfix files.
  3. Azure Subscription: Your Azure PaaS environment should be up and running, with a subscription linked to Azure DevOps.
  4. Sitecore Hotfix WDP: Download the Cloud Cumulative package for your version and topology. Be sure to check the release notes for additional instructions.

Steps to Automate Sitecore Hotfix Deployment

  1. Upload Your Sitecore Hotfix to Azure Storage
    • Create a storage container in Azure to store your WDP files.
    • Upload the hotfix using Azure Portal, Storage Explorer, or CLI.
  2. Create a New Pipeline in Azure DevOps
    • Navigate to Pipelines and create a new pipeline.
    • Select the repository containing your Sitecore solution.
    • Configure the pipeline using YAML for flexibility and automation.
  3. Define the Pipeline to Automate Hotfix Deployment (a YAML sketch follows this list)
    • Retrieve the Azure Storage connection string securely via Azure Key Vault.
    • Download the Sitecore hotfix from Azure Storage.
    • Deploy the hotfix package to the Azure Web App production slot.
  4. Set Up Pipeline Variables
    • Store critical values like storage connection strings and hotfix file names securely.
    • Ensure the web application name is correctly configured in the pipeline.
  5. Trigger and Verify the Deployment
    • Run the pipeline manually or set up an automatic trigger on commit.
    • Verify the applied hotfix by checking the Sitecore instance and confirming issue resolution.
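To make step 3 above more concrete, here is a minimal sketch of what such a pipeline could look like in YAML. The service connection, Key Vault name, secret name, container name, and web app name are illustrative assumptions rather than values from this post, and the exact deployment task settings (for example, slot configuration) will depend on your Sitecore topology:

trigger: none   # run manually, or add a branch trigger if preferred

pool:
  vmImage: 'windows-latest'

variables:
  hotfixFileName: 'Sitecore-hotfix.scwdp.zip'   # assumed name of the uploaded WDP

steps:
  # Pull the storage connection string from Key Vault (assumed vault and secret names)
  - task: AzureKeyVault@2
    inputs:
      azureSubscription: 'my-azure-service-connection'
      KeyVaultName: 'my-keyvault'
      SecretsFilter: 'StorageConnectionString'

  # Download the hotfix WDP from the storage container (assumed container name)
  - script: >
      az storage blob download
      --connection-string "$(StorageConnectionString)"
      --container-name hotfixes
      --name "$(hotfixFileName)"
      --file "$(Pipeline.Workspace)/$(hotfixFileName)"
    displayName: 'Download Sitecore hotfix from Azure Storage'

  # Deploy the package to the web app; add slot settings to target a staging slot first
  - task: AzureRmWebAppDeployment@4
    inputs:
      ConnectionType: 'AzureRM'
      azureSubscription: 'my-azure-service-connection'
      appType: 'webApp'
      WebAppName: 'my-sitecore-cm-app'
      packageForLinux: '$(Pipeline.Workspace)/$(hotfixFileName)'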

Enhancing Security in the Deployment Process

  • Use Azure Key Vault: Securely store sensitive credentials and access keys, preventing unauthorized access.
  • Restrict Access to Storage Accounts: Implement role-based access control (RBAC) to limit who can modify or retrieve the hotfix files.
  • Enable Logging and Monitoring: Utilize Azure Monitor and Application Insights to track deployment performance and detect potential failures.

Handling Rollbacks and Errors

  • Implement Deployment Slots: Test hotfix deployments in a staging slot before swapping them into production.
  • Set Up Automated Rollbacks: Configure rollback procedures to revert to a previous stable version if an issue is detected.
  • Enable Notifications: Use Azure DevOps notifications to alert teams about deployment success or failure.

Scaling the Approach for Large Deployments

  • Automate Across Multiple Environments: Extend the pipeline to deploy hotfixes across development, QA, and production environments.
  • Use Infrastructure as Code (IaC): Leverage tools like Terraform or ARM templates to ensure a consistent infrastructure setup.
  • Integrate Automated Testing: Implement testing frameworks such as Selenium or JMeter to verify hotfix functionality before deployment.

Why Streamlining Sitecore Hotfix Deployments with Azure DevOps Is Important

Automating the deployment of Sitecore hotfixes to Azure PaaS with an Azure DevOps pipeline saves time and ensures consistency and accuracy across environments. By storing the hotfix WDP in an Azure Storage Account, you create a centralized, secure location for all your hotfixes. The Azure DevOps pipeline then handles the rest—keeping your Sitecore environment up to date.

This process makes applying Sitecore hotfixes faster, more reliable, and less prone to error, which is exactly what you need in a production environment.

Extending the Capabilities of Your Development Team with Visual Studio Code Extensions
https://blogs.perficient.com/2025/02/11/extending-the-capabilities-of-your-development-team-with-visual-studio-code-extensions/ (Tue, 11 Feb 2025 20:53:23 +0000)

Introduction

Visual Studio Code (VS Code) has become a ubiquitous tool in the software development world, prized for its speed, versatility, and extensive customization options. At its heart, VS Code is a lightweight, open-source code editor that supports a vast ecosystem of extensions. These extensions are the key to unlocking the true potential of VS Code, transforming it from a simple editor into a powerful, tailored IDE (Integrated Development Environment).

This blog post will explore the world of VS Code extensions, focusing on how they can enhance your development team’s productivity, code quality, and overall efficiency. We’ll cover everything from selecting the right extensions to managing them effectively and even creating your own custom extensions to meet specific needs.

What are Visual Studio Code Extensions?

Extensions are essentially plugins that add new features and capabilities to VS Code. They can range from simple syntax highlighting and code completion tools to more complex features like debuggers, linters, and integration with external services. The Visual Studio Code Marketplace hosts thousands of extensions, catering to virtually every programming language, framework, and development workflow imaginable.

Popular examples include Prettier for automatic code formatting, ESLint for identifying and fixing code errors, and Live Share for real-time collaborative coding.

Why Use Visual Studio Code Extensions?

The benefits of using VS Code extensions are numerous and can significantly impact your development team’s performance.

  1. Improve Code Quality: Extensions like ESLint and JSHint help enforce coding standards and identify potential errors early in the development process. This leads to more robust, maintainable, and bug-free code.
  2. Boost Productivity: Extensions like Auto Close Tag and IntelliCode automate repetitive tasks, provide intelligent code completion, and streamline your workflow. This allows developers to focus on solving complex problems rather than getting bogged down in tedious tasks.
  3. Enhance Collaboration: Extensions like Live Share enable real-time collaboration, making it easier for team members to review code, pair program, and troubleshoot issues together, regardless of their physical location.
  4. Customize Your Workflow: VS Code’s flexibility allows you to tailor your development environment to your specific needs and preferences. Extensions like Bracket Pair Colorizer and custom themes can enhance readability and create a more comfortable and efficient working environment.
  5. Stay Current: Extensions provide support for the latest technologies and frameworks, ensuring that your team can quickly adapt to new developments in the industry and leverage the best tools for the job.
  6. Save Time: By automating common tasks and providing intelligent assistance, extensions like Path Intellisense can significantly reduce the amount of time spent on mundane tasks, freeing up more time for creative problem-solving and innovation.
  7. Ensure Consistency: Extensions like EditorConfig help enforce coding standards and best practices across your team, ensuring that everyone is following the same guidelines and producing consistent, maintainable code.
  8. Enhance Debugging: Powerful debugging extensions like Debugger for Java provide advanced debugging capabilities, making it easier to identify and resolve issues quickly and efficiently.

Managing IDE Tools for Mature Software Development Teams

As software development teams grow and projects become more complex, managing IDE tools effectively becomes crucial. A well-managed IDE environment can significantly impact a team’s ability to deliver high-quality software on time and within budget.

  1. Standardization: Ensuring that all team members use the same tools and configurations reduces discrepancies, improves collaboration, and simplifies onboarding for new team members. Standardized extensions help maintain code quality and consistency, especially in larger teams where diverse setups can lead to confusion and inefficiencies.
  2. Efficiency: Streamlining the setup process for new team members allows them to get up to speed quickly. Automated setup scripts can install all necessary extensions and configurations in one go, saving time and reducing the risk of errors.
  3. Quality Control: Enforcing coding standards and best practices across the team is essential for maintaining code quality. Extensions like SonarLint can continuously analyze code quality, catching issues early and preventing bugs from making their way into production.
  4. Scalability: As your team evolves and adopts new technologies, managing IDE tools effectively facilitates the integration of new languages, frameworks, and tools. This ensures that your team can quickly adapt to new developments and leverage the best tools for the job.
  5. Security: Keeping all tools and extensions up-to-date and secure is paramount, especially for teams working on sensitive or high-stakes projects. Regularly updating extensions prevents security issues and ensures access to the latest features and security patches.

Best Practices for Managing VS Code Extensions in a Team

Effectively managing VS Code extensions within a team requires a strategic approach. Here are some best practices to consider:

  1. Establish an Approved Extension List: Create and maintain a list of extensions that are approved for use by the team. This ensures that everyone is using the same core tools and configurations, reducing inconsistencies and improving collaboration. Consider using a shared document or a dedicated tool to manage this list.
  2. Automate Installation and Configuration: Use tools like Visual Studio Code Settings Sync or custom scripts to automate the installation and configuration of extensions and settings for all team members. This ensures that everyone has the same setup without manual intervention, saving time and reducing the risk of errors. (A sample installation script follows this list.)
  3. Implement Regular Audits and Updates: Regularly review and update the list of approved extensions to add new tools, remove outdated ones, and ensure that all extensions are up-to-date with the latest security patches. This helps keep your team current with the latest developments and minimizes security risks.
  4. Provide Training and Documentation: Offer training and documentation on the approved extensions and best practices for using them. This helps ensure that all team members are proficient in using the tools and can leverage them effectively.
  5. Encourage Feedback and Collaboration: Encourage team members to provide feedback on the approved extensions and suggest new tools that could benefit the team. This fosters a culture of continuous improvement and ensures that the team is always using the best tools for the job.
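As a small illustration of the automation mentioned in point 2 above, the script below uses the VS Code CLI to install every extension listed in a plain text file; the file name extensions.txt and the sample extension IDs are assumptions for the example:

#!/usr/bin/env bash
# Install every extension listed in extensions.txt (one extension ID per line),
# e.g. dbaeumer.vscode-eslint, esbenp.prettier-vscode, editorconfig.editorconfig
while IFS= read -r extension; do
  [ -z "$extension" ] && continue        # skip blank lines
  code --install-extension "$extension" --force
done < extensions.txt

A reference machine can generate the list in the first place with code --list-extensions > extensions.txt, which makes it easy to keep the approved list in version control.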

Security Considerations for VS Code Extensions

While VS Code extensions offer numerous benefits, they can also introduce security risks if not managed properly. It’s crucial to be aware of these risks and take steps to mitigate them.

  1. Verify the Source: Only install extensions from trusted sources, such as the Visual Studio Code Marketplace. Avoid downloading extensions from unknown or unverified sources, as they may contain malware or other malicious code.
  2. Review Permissions: Carefully review the permissions requested by extensions before installing them. Be cautious of extensions that request excessive permissions or access to sensitive data, as they may be attempting to compromise your security.
  3. Keep Extensions Updated: Regularly update your extensions to ensure that you have the latest security patches and bug fixes. Outdated extensions can be vulnerable to security exploits, so it’s important to keep them up-to-date.
  4. Use Security Scanning Tools: Consider using security scanning tools to automatically identify and assess potential security vulnerabilities in your VS Code extensions. These tools can help you proactively identify and address security risks before they can be exploited.

Creating Custom Visual Studio Code Extensions

In some cases, existing extensions may not fully meet your team’s specific needs. Creating custom VS Code extensions can be a powerful way to add proprietary capabilities to your IDE and tailor it to your unique workflow. One exciting area is integrating AI Chatbots directly into VS Code for code generation, documentation, and more.

  1. Identify the Need: Start by identifying the specific functionality that your team requires. This could be anything from custom code snippets and templates to integrations with internal tools and services. For this example, we’ll create an extension that allows you to highlight code, right-click, and generate documentation using a custom prompt sent to an AI Chatbot.

  2. Learn the Basics: Familiarize yourself with the Visual Studio Code Extension API and the tools required to develop extensions. The API documentation provides comprehensive guides and examples to help you get started.

  3. Set Up Your Development Environment: Install the necessary tools, such as Node.js and Yeoman, to create and test your extensions. The Yeoman generator for Visual Studio Code extensions can help you quickly scaffold a new project.

  4. Develop Your Extension: Write the code for your extension, leveraging the Visual Studio Code Extension API to add the desired functionality. Be sure to follow best practices for coding and testing to ensure that your extension is reliable, maintainable, and secure.

  5. Test Thoroughly: Test your extension in various scenarios to ensure that it works as expected and doesn’t introduce any new issues. This includes testing with different configurations, environments, and user roles.

  6. Distribute Your Extension: Once your extension is ready, you can distribute it to your team. You can either publish it to the Visual Studio Code Marketplace or share it privately within your organization. Consider using a private extension registry to manage and distribute your custom extensions securely.

Best Practices for Extension Development

Developing robust and efficient VS Code extensions requires careful attention to best practices. Here are some key considerations:

  • Resource Management:

    • Dispose of Resources: Properly dispose of any resources your extension creates, such as disposables, subscriptions, and timers. Use the context.subscriptions.push() method to register disposables, which will be automatically disposed of when the extension is deactivated.
    • Avoid Memory Leaks: Be mindful of memory usage, especially when dealing with large files or data sets. Use techniques like streaming and pagination to process data in smaller chunks.
    • Clean Up on Deactivation: Implement the deactivate() function to clean up any resources that need to be explicitly released when the extension is deactivated.
  • Asynchronous Operations:

    • Use Async/Await: Use async/await to handle asynchronous operations in a clean and readable way. This makes your code easier to understand and maintain.
    • Handle Errors: Properly handle errors in asynchronous operations using try/catch blocks. Log errors and provide informative messages to the user.
    • Avoid Blocking the UI: Ensure that long-running operations are performed in the background to avoid blocking the VS Code UI. Use vscode.window.withProgress to provide feedback to the user during long operations.
  • Security:

    • Validate User Input: Sanitize and validate any user input to prevent security vulnerabilities like code injection and cross-site scripting (XSS).
    • Secure API Keys: Store API keys and other sensitive information securely. Use VS Code’s secret storage API to encrypt and protect sensitive data.
    • Limit Permissions: Request only the necessary permissions for your extension. Avoid requesting excessive permissions that could compromise user security.
  • Performance:

    • Optimize Code: Optimize your code for performance. Use efficient algorithms and data structures to minimize execution time.
    • Lazy Load Resources: Load resources only when they are needed. This can improve the startup time of your extension.
    • Cache Data: Cache frequently accessed data to reduce the number of API calls and improve performance.
  • Code Quality:

    • Follow Coding Standards: Adhere to established coding standards and best practices. This makes your code more readable, maintainable, and less prone to errors.
    • Write Unit Tests: Write unit tests to ensure that your code is working correctly. This helps you catch bugs early and prevent regressions.
    • Use a Linter: Use a linter to automatically identify and fix code style issues. This helps you maintain a consistent code style across your project.
  • User Experience:

    • Provide Clear Feedback: Provide clear and informative feedback to the user. Use status bar messages, progress bars, and error messages to keep the user informed about what’s happening.
    • Respect User Settings: Respect user settings and preferences. Allow users to customize the behavior of your extension to suit their needs.
    • Keep it Simple: Keep your extension simple and easy to use. Avoid adding unnecessary features that could clutter the UI and confuse the user.

By following these best practices, you can develop robust, efficient, and user-friendly VS Code extensions that enhance the development experience for yourself and others.

Example: Creating an AI Chatbot Integration for Documentation Generation

Let’s walk through creating a custom VS Code extension that integrates with an AI Chatbot to generate documentation for selected code. This example assumes you have access to an AI Chatbot API (like OpenAI’s GPT models). You’ll need an API key. Remember to handle your API key securely and do not commit it to your repository.

1. Scaffold the Extension:

First, use the Yeoman generator to create a new extension project:

yo code

2. Modify the Extension Code:

Open the generated src/extension.ts file and add the following code to create a command that sends selected code to the AI Chatbot and displays the generated documentation:

import * as vscode from 'vscode';
import axios from 'axios';

export function activate(context: vscode.ExtensionContext) {
 let disposable = vscode.commands.registerCommand('extension.generateDocs', async () => {
  const editor = vscode.window.activeTextEditor;
  if (editor) {
   const selection = editor.selection;
   const selectedText = editor.document.getText(selection);

   const apiKey = 'YOUR_API_KEY'; // Replace with your actual API key
   const apiUrl = 'https://api.openai.com/v1/engines/davinci-codex/completions';

   try {
    const response = await axios.post(
     apiUrl,
     {
      prompt: `Generate documentation for the following code:\n\n${selectedText}`,
      max_tokens: 150,
      n: 1,
      stop: null,
      temperature: 0.5,
     },
     {
      headers: {
       'Content-Type': 'application/json',
       Authorization: `Bearer ${apiKey}`,
      },
     }
    );

    const generatedDocs = response.data.choices[0].text;
    vscode.window.showInformationMessage('Generated Documentation:\n' + generatedDocs);
   } catch (error) {
    vscode.window.showErrorMessage('Error generating documentation: ' + error.message);
   }
  }
 });

 context.subscriptions.push(disposable);
}

export function deactivate() {}

3. Update package.json:

Add the following command configuration to the contributes section of your package.json file:

"contributes": {
    "commands": [
        {
            "command": "extension.generateDocs",
            "title": "Generate Documentation"
        }
    ]
}

4. Run and Test the Extension:

Press F5 to open a new VS Code window with your extension loaded. Highlight some code, right-click, and select “Generate Documentation” to see the AI-generated documentation.

Packaging and Distributing Your Custom Extension

Once you’ve developed and tested your custom VS Code extension, you’ll likely want to share it with your team or the wider community. Here’s how to package and distribute your extension, including options for local and private distribution:

1. Package the Extension:

VS Code uses the vsce (Visual Studio Code Extensions) tool to package extensions. If you don’t have it installed globally, install it using npm:

npm install -g vsce

Navigate to your extension’s root directory and run the following command to package your extension:

vsce package

This will create a .vsix file, which is the packaged extension.

2. Publish to the Visual Studio Code Marketplace:

To publish your extension to the Visual Studio Code Marketplace, you’ll need to create a publisher account and obtain a Personal Access Token (PAT). Follow the instructions on the Visual Studio Code Marketplace to set up your publisher account and generate a PAT.

Once you have your PAT, run the following command to publish your extension:

vsce publish

You’ll be prompted to enter your publisher name and PAT. After successful authentication, your extension will be published to the marketplace.

3. Share Privately:

If you prefer to share your extension privately within your organization, you can distribute the .vsix file directly to your team members. They can install the extension by running the following command in VS Code:

code --install-extension your-extension.vsix

Alternatively, you can set up a private extension registry using tools like Azure DevOps Artifacts or npm Enterprise to manage and distribute your custom extensions securely.

Conclusion

Visual Studio Code extensions are a powerful tool for enhancing the capabilities of your development environment and improving your team’s productivity, code quality, and overall efficiency. By carefully selecting, managing, and securing your extensions, you can create a tailored IDE that meets your specific needs and helps your team deliver high-quality software on time and within budget. Whether you’re using existing extensions from the marketplace or creating your own custom solutions, the possibilities are endless. Embrace the power of VS Code extensions and unlock the full potential of your development team.

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

Migration of DNS Hosted Zones in AWS
https://blogs.perficient.com/2024/12/31/migration-of-dns-hosted-zones-in-aws/ (Tue, 31 Dec 2024 08:00:47 +0000)

Transferring Route 53 hosted zone records between AWS accounts using the CLI involves exporting the records from one account and then importing them to another. Here’s a step-by-step guide:

Migration of DNS Hosted Zones in AWS

The primary objective of this process is to migrate Route 53 hosted zone records seamlessly between AWS accounts while ensuring minimal disruption to DNS functionality. This involves securely transferring DNS records, preserving their integrity, maintaining availability, and ensuring linked AWS resources remain accessible. Additionally, cross-account DNS access may be implemented as needed to meet business requirements.

[Diagram: migrating a Route 53 hosted zone between AWS accounts]

Objectives:

  • Migration Process Overview
  • Prerequisites
  • Configuration Overview
  • Best Practices
  • Conclusion

 

Prerequisites:

  • Account Permissions: Ensure you have AmazonRoute53FullAccess permissions in both source and destination accounts. For domain transfers, additional permissions (TransferDomains, DisableDomainTransferLock, etc.) are required.
  • Export Tooling: Use the AWS CLI or SDK for listing and exporting DNS records, as Route 53 does not have a built-in export feature.
  • Destination Hosted Zone: Create a hosted zone in the destination account with the same domain name as the original. Note the new hosted zone ID for use in subsequent steps.
  • AWS Resource Dependencies: Identify resources tied to DNS records (such as EC2 instances or ELBs) and ensure these are accessible or re-created in the destination account if needed.

 

Configuration Overview:

1. Create an EC2 Instance and Download cli53 Using the Commands Below:

  • Use cli53 to list and export the DNS records in the source account. First, download the cli53 binary on the EC2 instance:

wget https://github.com/barnybug/cli53/releases/download/0.8.16/cli53-linux-amd64

Note: A local Linux machine can also be used instead of an EC2 instance, but it requires the cli53 binary and AWS credentials to be configured.

 

  • Move the cli53 binary to the bin folder (e.g., /usr/local/bin) and make it executable:

[Screenshot: moving cli53 to the bin folder and updating permissions]
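The equivalent commands for this step on a typical Ubuntu instance are shown below; the /usr/local/bin destination is an assumption, and any directory on your PATH will work:

sudo mv cli53-linux-amd64 /usr/local/bin/cli53
sudo chmod +x /usr/local/bin/cli53
cli53 --help   # should print usage information if the install worked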

2. Create Hosted Zone in Destination Account:

  • In the destination account, create a new hosted zone with the same domain name using cli or GUI:
    • Take note of the new hosted zone ID.

3. Export DNS Records from Existing Hosted Zone:

  • Export the records using cli53 on the EC2 instance with the command below, and remove the NS and SOA records from the exported file, since the new hosted zone generates its own by default.

[Screenshot: cli53 export command and output]

Note: microsoft.com is used here only as a dummy hosted zone for demonstration.
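The export command itself appears only in the screenshot; with cli53 it generally takes the form below, using the dummy zone from this example and writing the output to the domain.com.txt file referenced in the next step:

cli53 export --full microsoft.com > domain.com.txt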

4. Import DNS Records to Destination Hosted Zone:

  • Use the exported file to import the records into the new hosted zone. To do that, copy all the records from the domain.com.txt file.

[Screenshot: records copied from the domain.com.txt file]

  • Now log in to the destination account's Route 53 console and import the records copied from the exported file (see the screenshot below).
  • Save the imported records and verify them.

[Screenshot: importing and verifying records in the destination account]
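If you prefer to stay on the command line instead of pasting records into the console, cli53 can also import the edited zone file directly into the destination account. This is an alternative to the console import shown above and assumes an AWS credentials profile named destination that points at the target account:

cli53 import --file domain.com.txt --profile destination microsoft.com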

5. Test DNS Records:

  • Verify DNS record functionality by querying records in the new hosted zone and ensuring that all services resolve correctly.
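A couple of quick checks with standard DNS tools can confirm the records resolve; the record name and name server below are placeholders:

# Query a record through your default resolver
dig www.example.com A +short
nslookup www.example.com

# Query the destination hosted zone's name servers directly
# (use a value from the new hosted zone's NS record set)
dig @ns-123.awsdns-45.com www.example.com A +short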

 

Best practices:

When migrating Route 53 hosted zones between AWS accounts, applying best practices helps ensure a smooth transition with minimal disruption. Here are key best practices for a successful Route 53 hosted zone migration:

1. Plan and Document the Migration Process

  • Detailed Planning: Outline each step of the migration process, including DNS record export, transfer, and import, as well as any required changes in the destination account.
  • Documentation: Document all DNS records, configurations, and dependencies before starting the migration. This helps in troubleshooting and serves as a backup.

2. Schedule Migration During Low-Traffic Periods

  • Reduce Impact: Perform the migration during off-peak hours to minimize potential disruption, especially if you need to update NS records or other critical DNS configurations.

3. Test in a Staging Environment

  • Dry Run: Before migrating a production hosted zone, perform a test migration in a staging environment. This helps identify potential issues and ensures that your migration plan is sound.
  • Verify Configurations: Ensure that the DNS records resolve correctly and that applications dependent on these records function as expected.

4. Use Route 53 Resolver for Multi-Account Setups

  • Centralized DNS Management: For environments with multiple AWS accounts, consider using Route 53 Resolver endpoints and sharing resolver rules through AWS Resource Access Manager (RAM). This enables efficient cross-account DNS resolution without duplicating hosted zones across accounts.

5. Avoid Overwriting NS and SOA Records

  • Use Default NS and SOA: Route 53 automatically creates NS and SOA records when you create a hosted zone. Retain these default records in the destination account, as they are linked to the new hosted zone’s configuration and AWS infrastructure.

6. Update Resource Permissions and Dependencies

  • Resource Links: DNS records may point to AWS resources like load balancers or S3 buckets. Ensure that these resources are accessible from the new account and adjust permissions if necessary.
  • Cross-Account Access: If resources remain in the source account, establish cross-account permissions to ensure continued access.

7. Validate DNS Records Post-Migration

  • DNS Resolution Testing: Test the new hosted zone’s DNS records using tools like dig or nslookup to confirm they are resolving correctly. Check application connectivity to confirm that all dependent services are operational.
  • TTL Considerations: Set a low TTL (Time to Live) on records before migration. This speeds up DNS propagation once the migration is complete, reducing the time it takes for changes to propagate.

8. Consider Security and Access Control

  • Secure Access: Ensure that only authorized personnel have access to modify hosted zones during the migration.

9. Establish a Rollback Plan

  • Rollback Strategy: Plan for a rollback if any issues arise. Keep the original hosted zone active until the new configuration is fully tested and validated.
  • Backup Data: Maintain a backup of all records and configurations so you can revert to the original settings if needed.

Conclusion

Migrating a Route 53 hosted zone between AWS accounts involves careful planning, especially to ensure DNS records are exported and imported correctly. After migrating, testing is crucial to confirm that DNS resolution works as expected. Cross-account setups may require additional configuration, such as Route 53 Resolver rules, to ensure seamless DNS functionality across environments.

Unit Testing in Android Apps: A Deep Dive into MVVM
https://blogs.perficient.com/2024/11/26/unit-testing-in-android-apps-a-deep-dive-into-mvvm/ (Tue, 26 Nov 2024 19:56:40 +0000)

Understanding Unit Testing

Unit testing is a crucial aspect of software development, especially in complex applications like Android apps. It involves testing individual units of code, such as methods or classes, in isolation. This ensures the correctness of each component, leading to a more robust and reliable application.

Why Unit Testing in MVVM?

The Model-View-ViewModel (MVVM) architectural pattern is widely adopted in Android app development. It separates the application into three distinct layers:

  • Model: Handles data logic and interacts with data sources.
  • View: Responsible for the UI and user interactions.
  • ViewModel: Acts as a bridge between the View and Model, providing data and handling UI logic.

Unit testing each layer in an MVVM architecture offers numerous benefits:

  • Early Bug Detection: Identify and fix issues before they propagate to other parts of the app.
  • Improved Code Quality: Write cleaner, more concise, and maintainable code.
  • Accelerated Development: Refactor code and add new features with confidence.
  • Enhanced Collaboration: Maintain consistent code quality across the team.

Setting Up the Environment

  1. Android Studio: Ensure you have the latest version installed.
  2. Testing Framework: Add the necessary testing framework to your app/build.gradle file:

    testImplementation 'junit:junit:4.13.2'
    androidTestImplementation 'androidx.test.ext:junit:1.1.5'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.5.1'
  3. Testing Library: Consider using a testing library like Mockito or MockK to create mock objects for testing dependencies.

Unit Testing ViewModels

  1. Create a Test Class: Create a separate test class for each ViewModel you want to test.
  2. Set Up Test Dependencies: Use dependency injection frameworks like Dagger Hilt or Koin to inject dependencies into your ViewModel. For testing, use mock objects to simulate the behavior of these dependencies.
  3. Write Test Cases: Write comprehensive test cases covering various scenarios:
  • Input Validation: Test how the ViewModel handles invalid input.
  • Data Transformation: Test how the ViewModel transforms data from the Model.
  • UI Updates: Test how the ViewModel updates the UI through LiveData or StateFlow.
  • Error Handling: Test how the ViewModel handles errors and exceptions.

Example:

@RunWith(AndroidJUnit4::class)
class MyViewModelTest {

    @Test
    fun `should update the UI when data is fetched successfully`() {
        // ... (Arrange)
        val viewModel = MyViewModel(mockRepository)

        // ... (Act)
        viewModel.fetchData()

        // ... (Assert)
        viewModel.uiState.observeForever { uiState ->
            assertThat(uiState.isLoading).isFalse()
            assertThat(uiState.error).isNull()
            assertThat(uiState.data).isEqualTo(expectedData)
        }
    }
}

Unit Testing Repositories

  1. Create Test Classes: Create separate test classes for each Repository class.
  2. Set Up Test Dependencies: Use dependency injection to inject dependencies into your Repository. For testing, use mock objects to simulate the behavior of data sources like databases or network APIs.
  3. Write Test Cases: Write test cases to cover:
  • Data Fetching: Test how the Repository fetches data from remote or local sources.
  • Data Storage: Test how the Repository stores and retrieves data.
  • Data Manipulation: Test how the Repository processes and transforms data.
  • Error Handling: Test how the Repository handles errors and exceptions.

Example:

@RunWith(AndroidJUnit4::class)
class MyRepositoryTest {

    @Test
    fun `should fetch data from remote source successfully`() {
        // ... (Arrange)
        val mockApi = mock(MyApi::class.java)
        val repository = MyRepository(mockApi)

        // ... (Act)
        repository.fetchData()

        // ... (Assert)
        verify(mockApi).fetchData()
    }
}

Implementing SonarQube

SonarQube is a powerful tool for code quality and security analysis. Here’s a detailed guide on how to integrate SonarQube with your Android project:

  1. Set Up SonarQube Server:
  • Install SonarQube Server: Download and install the SonarQube server on your machine or a server.
  • Configure SonarQube: Configure the server with database settings, user authentication, and other necessary parameters.
  • Start SonarQube Server: Start the SonarQube server.
  2. Configure SonarQube Scanner:
  • Install SonarQube Scanner: Download and install the SonarQube Scanner.
  • Configure Scanner Properties: Create a sonar-project.properties file in your project’s root directory and configure the following properties:

    sonar.host.url=http://localhost:9000
    sonar.login=your_sonar_login
    sonar.password=your_sonar_password
    sonar.projectKey=my-android-project
    sonar.projectName=My Android Project
    sonar.sources=src/main/java
    sonar.java.binaries=build/intermediates/javac/release/classes
  3. Integrate SonarQube with Your Build Process:
  • Gradle: Add the SonarQube Gradle plugin to your build.gradle file:

    plugins {
        id 'org.sonarqube' version '3.3'
    }

    Configure the plugin with your SonarQube server URL and authentication token.

  • Maven: Add the SonarQube Maven plugin to your pom.xml file. Configure the plugin with your SonarQube server URL and authentication token.
  4. Run SonarQube Analysis:
  • Execute the SonarQube analysis using the SonarQube Scanner. This can be done manually or integrated into your CI/CD pipeline.
  5. Analyze the Results:
  • Once the analysis is complete, you can view the results on the SonarQube dashboard. The dashboard provides insights into code quality, security vulnerabilities, and potential improvements.

Implementing Test Coverage with Bitrise

Test coverage measures the percentage of your code that is covered by tests. It’s a crucial metric to assess the quality of your test suite. Here’s how to measure test coverage with Bitrise:

  1. Configure Code Coverage Tool: Add a code coverage tool like JaCoCo to your project. Configure it to generate coverage reports in a suitable format (e.g., XML). (A Gradle sketch follows this list.)
  2. Add Code Coverage Step to Bitrise Workflow: Add a step to your Bitrise Workflow to generate the code coverage report. This step should execute your tests and generate the report.
  3. Upload Coverage Report to SonarQube: Add a step to upload the generated code coverage report to SonarQube. This will allow SonarQube to analyze the report and display the coverage metrics.
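As a rough illustration of step 1, a module-level Gradle configuration for JaCoCo often looks something like the snippet below. The class-directory paths, exclusion filters, and report settings vary by Android Gradle Plugin version, so treat this as a starting sketch rather than a drop-in configuration:

apply plugin: 'jacoco'

android {
    buildTypes {
        debug {
            // Instrument the debug build so test runs emit coverage data
            testCoverageEnabled true
        }
    }
}

// Aggregate unit test coverage into XML (for SonarQube/CI) and HTML (for humans)
tasks.register('jacocoTestReport', JacocoReport) {
    dependsOn 'testDebugUnitTest'
    reports {
        xml.required = true
        html.required = true
    }
    classDirectories.setFrom(
        fileTree(dir: "$buildDir/tmp/kotlin-classes/debug",
                 excludes: ['**/R.class', '**/R$*.class', '**/BuildConfig.*'])
    )
    sourceDirectories.setFrom(files("$projectDir/src/main/java"))
    executionData.setFrom(fileTree(dir: buildDir, includes: ['**/*.exec', '**/*.ec']))
}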

Best Practices for Unit Testing

  • Write Clear and Concise Tests: Use descriptive names for test methods and variables.
  • Test Edge Cases: Consider testing scenarios with invalid input, empty data, or network errors.
  • Use a Testing Framework: A testing framework like JUnit provides a structured way to write and run tests.
  • Leverage Mocking: Use mocking frameworks like Mockito or MockK to isolate units of code and control their behavior.
  • Automate Testing: Integrate unit tests into your CI/CD pipeline to ensure code quality.
  • Review and Refactor Tests: Regularly review and refactor your tests to keep them up-to-date and maintainable.

By following these guidelines and incorporating unit testing into your development process, you can significantly improve the quality and reliability of your Android apps.

Fixing an XM Cloud Deployment Failure
https://blogs.perficient.com/2024/06/14/fixing-an-xm-cloud-deployment-failure/ (Fri, 14 Jun 2024 18:49:22 +0000)

Intro 📖

Last week, I noticed that deployments to Sitecore XM Cloud were failing on one of my projects. In this blog post, I’ll review the troubleshooting steps I went through and what the issue turned out to be. To provide a bit more context on the DevOps setup for this particular project, an Azure DevOps pipeline runs a script. That script uses the Sitecore CLI and the Sitecore XM Cloud plugin’s cloud deployment command to deploy to XM Cloud. The last successful deployment was just a few days prior and there hadn’t been many code changes since. Initially, I was pretty stumped but, hey, what can you do except start from the top…

Troubleshooting 👷‍♂️

  1. Anyone that has worked with cloud-based SaaS services knows that transient faults are a thing–and XM Cloud is no exception. The first thing I tried was to simply rerun the failed stage in our pipeline to see if this was “just a hiccup.” Alas, several subsequent deployment attempts failed with the same error. Okay, fine, this wasn’t a transient issue 😞.
  2. Looking at the logs in the XM Cloud Deploy interface, the build stage was consistently failing. Drilling into the logs, there were several compilation errors citing missing Sitecore assemblies. For example: error CS0246: The type or namespace name ‘Item’ could not be found (are you missing a using directive or an assembly reference?). This suggested an issue with either the NuGet restore or with compilation more broadly.
  3. Rerunning failed stages in an Azure DevOps pipeline uses the same commit that was associated with the first run–the latest code from the branch isn’t pulled on each rerun attempt. This meant that the code used for the last successful deployment was the same code used for the subsequent attempts. In other words, this probably wasn’t a code issue (famous last words, right 😅?).
  4. Just to be sure, I diffed several recent commits on our development branch and, yeah, there weren’t any changes that could have broken compilation since the last successful deployment.
  5. To continue the sanity checks, I pulled down the specific commit locally and verified that I could:
    1. Restore NuGet packages, via both the UI and console
    2. Build/rebuild the Visual Studio solution
  6. After revisiting and diffing the XM Cloud Deploy logs, I noticed that the version of msbuild had changed between the last successful deployment and the more recent failed deployments. I downloaded the same, newer version of msbuild and verified, once again, that I could restore NuGet packages and build the solution.
  7. Finally, I confirmed that the validation build configured for the development branch (via branch policies in Azure DevOps) was running as expected and successfully building the solution each time a new pull request was created.

At this point, while I continued to analyze the deployment logs, I opened a Sitecore support ticket to have them weigh in 🙋‍♂️. I provided support with the last known working build logs, the latest failed build logs, and the list of my troubleshooting steps up to that point.

The Fix 🩹

After hearing back from Sitecore support, it turned out that Sitecore had recently made a change to how the buildTargets property in the xmcloud.build.json file was consumed and used as part of deployments. To quote the support engineer:

There were some changes in the build process, and now the build targets are loaded from the “buildTargets ” list. The previous working builds were using the “.sln” file directly.
It looks like that resulted in the build not working properly for some projects.

The suggested fix was to specifically target the Visual Studio solution file to ensure that the XM Cloud Deployment NuGet package restore and compilation worked as expected. My interpretation of the change was “XM Cloud Deploy used to not care about/respect buildTargets…but now it does.”

After creating a pull request to change the buildTargets property from this (targeting the specific, top-level project):

{
  ...
  "buildTargets": [
    "./src/platform/Project/Foo.Project.Platform/Platform.csproj"
  ]
  ...
}

…to this (targeting the solution):

{
  ...
  "buildTargets": [
    "./Foo.sln"
  ]
  ...
}

…the next deployment to XM Cloud (via CI/CD) worked as expected. ✅🎉

After asking the Sitecore support engineer where this change was documented, they graciously escalated internally and posted a new event to the Sitecore Status Page to acknowledge the change/issue: Deployment is failing to build.

If you’re noticing that your XM Cloud deployments are failing on the build step while compiling your Visual Studio solution, make sure you’re targeting the solution file (.sln) and not a specific project file (.csproj) in the buildTargets property in the xmcloud.build.json file…because it matters now, apparently 😉.

Thanks for the read! 🙏

GitHub – On-Prem Server Connectivity Using Self-Hosted Runners
https://blogs.perficient.com/2024/06/05/github-on-prem-server-connectivity-using-self-hosted-runners/ (Wed, 05 Jun 2024 06:09:03 +0000)

Various deployment methods, including cloud-based (e.g., CloudHub) and on-premises, are available to meet diverse infrastructure needs. GitHub, among other tools, supports versioning and code backup, while CI/CD practices automate integration and deployment processes, enhancing code quality and speeding up software delivery.

GitHub Actions, an automation platform by GitHub, streamlines building, testing, and deploying software workflows directly from repositories. Although commonly associated with cloud deployments, GitHub Actions can be adapted for on-premises setups with self-hosted runners. These runners serve as the execution environment, enabling deployment tasks on local infrastructure.

Configuring self-hosted runners allows customization of GitHub Actions workflows for on-premises deployment needs. Workflows can automate tasks like Docker image building, artifact pushing to private registries, and application deployment to local servers.

Leveraging GitHub Actions for on-premises deployment combines the benefits of automation, version control, and collaboration with control over infrastructure and deployment processes.

What is a Runner?

A runner refers to the machine responsible for executing tasks within a GitHub Actions workflow. It performs various tasks defined in the action script, like cloning the code directory, building the code, testing the code, and installing various tools and software required to run the GitHub action workflow.

There are 2 Primary Types of Runners:

  1. GitHub Hosted Runners:

    These are virtual machines provided by GitHub to run workflows. Each machine comes pre-configured with the environment, tools, and settings required for GitHub Actions. GitHub-hosted runners support various operating systems, such as Ubuntu Linux, Windows, and macOS.

  2. Self-Hosted Runners:

    A self-hosted runner is a system deployed and managed by the user to execute GitHub Actions jobs. Compared to GitHub-hosted runners, self-hosted runners offer more flexibility and control over hardware, operating systems, and software tools. Users can customize hardware configurations, install software from their local network, and choose operating systems not provided by GitHub-hosted runners. Self-hosted runners can be physical machines, virtual machines, containers, on-premises servers, or cloud-based instances.

Why Do We Need a Self-hosted Runner?

Self-hosted runners play a crucial role in deploying applications on the on-prem server using GitHub Action Scripts and establishing connectivity with an on-prem server. These runners can be created at different management levels within GitHub: repository, organization, and enterprise.

By leveraging self-hosted runners for deployment, organizations can optimize control, customization, performance, and cost-effectiveness while meeting compliance requirements and integrating seamlessly with existing infrastructure and tools. Here are a few advantages of self-hosted runners.

  1. Control and Security:

    Self-hosted runners allow organizations to maintain control over their infrastructure and deployment environment. This includes implementing specific security measures tailored to the organization’s requirements, such as firewall rules and access controls.

  2. Customization:

    With self-hosted runners, you have the flexibility to customize the hardware and software environment to match your specific needs. This can include installing specific libraries, tools, or dependencies required for your applications or services.

  3. Performance:

    Self-hosted runners can offer improved performance compared to cloud-based alternatives, especially for deployments that require high computational resources or low-latency connections to local resources.

  4. Cost Management:

    While cloud-based solutions often incur ongoing costs based on usage and resource consumption, self-hosted runners can provide cost savings by utilizing existing infrastructure without incurring additional cloud service charges.

  5. Compliance:

    For organizations operating in regulated industries or regions with strict compliance requirements, self-hosted runners offer greater control and visibility over where code is executed and how data is handled, facilitating compliance efforts.

  6. Offline Deployment:

    In environments where internet connectivity is limited or unreliable, self-hosted runners enable deployment workflows to continue functioning without dependency on external cloud services or repositories.

  7. Scalability:

    Self-hosted runners can be scaled up or down according to demand, allowing organizations to adjust resource allocation based on workload fluctuations or project requirements.

  8. Integration with Existing Tools:

    Self-hosted runners seamlessly integrate with existing on-premises tools and infrastructure, facilitating smoother adoption and interoperability within the organization’s ecosystem.

Getting Started With a Self-hosted Runner

Follow the steps below to create and utilize a self-hosted runner.

Repository Level Self-hosted Runner:

  1. Log in to your GitHub account and navigate to the desired repository.
  2. Go to the repository’s settings tab and select the “Runners” menu under the “Actions” menu.
    [Screenshot: Runners under the Actions menu in repository settings]
  3. Click on the “New self-hosted runner” button to initiate the creation process.
    [Screenshot: the New self-hosted runner button]
  4. Based on your system requirements, choose the appropriate runner image. For instance, if your self-hosted runner will run on Windows, select the Windows runner image.
    [Screenshot: selecting the runner image]
  5. Open Windows PowerShell and execute the following command to create the actions-runner folder:
    mkdir actions-runner; cd actions-runner
  6. Download the latest runner package by running the following command:
    Invoke-WebRequest -Uri https://github.com/actions/runner/releases/download/v2.316.0/actions-runner-win-x64-2.316.0.zip -OutFile actions-runner-win-x64-2.316.0.zip
  7. Extract the downloaded package and configure the self-hosted runner according to your deployment needs.
    Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory("$PWD/actions-runner-win-x64-2.316.0.zip", "$PWD")
  8. Configure the runner using the below command. Replace the placeholder with actual values.
    ./config.cmd --url https://github.com/<owner>/<repo_name> --token <token>
  9. To run the runner, use the below command.
    ./run.cmd
  10. Now your self-hosted runner is ready to use in your GitHub Actions script (a complete workflow sketch follows after this list).
    # Use below YAML code snippet in your workflow file for each job
    runs-on: self-hosted

    Image 4
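For reference, here is a minimal workflow sketch that targets the runner registered above. The labels shown (self-hosted, windows, x64) are the defaults GitHub assigns to a Windows x64 runner; adjust them if you added custom labels.

name: Build on self-hosted runner

on:
  push:
    branches:
      - main

jobs:
  build:
    # Default labels for a self-hosted Windows x64 runner
    runs-on: [self-hosted, windows, x64]
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Show runner details
        run: echo "Running on $env:COMPUTERNAME"   # runs in PowerShell on Windows runners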

Organization Level and Enterprise Level Self-hosted Runners:

The process for creating organization-level and enterprise-level self-hosted runners follows similar steps; however, runners created at these levels can serve multiple repositories or organizations within the account. The setup process generally involves administrative permissions and configuration at a broader level.

By following these steps, you can set up self-hosted runners to enable connectivity between your on-prem server and GitHub Action Scripts, facilitating on-prem deployments seamlessly.

Perficient Achieves AWS DevOps Competency https://blogs.perficient.com/2024/06/04/perficient-achieves-aws-devops-competency/ https://blogs.perficient.com/2024/06/04/perficient-achieves-aws-devops-competency/#respond Tue, 04 Jun 2024 18:48:31 +0000 https://blogs.perficient.com/?p=363795

Perficient is excited to announce our achievement in Amazon Web Services (AWS) DevOps Competency for AWS partners. This designation recognizes Perficient as an AWS partner that has demonstrated expertise in delivering DevSecOps solutions. This competency highlights Perficient’s ability to drive innovation, meet business objectives, and get the most out of your AWS services. 

What does this mean for Perficient? 

Achieving the AWS DevOps Competency status differentiates Perficient as an AWS Partner Network (APN) member that provides modern product engineering solutions designed to help enterprises adopt, develop, and deploy complex projects faster on AWS. To receive the designation, APN members must possess deep AWS expertise and deliver solutions seamlessly on AWS. 

This competency empowers our delivery teams to break down traditional silos, shorten feedback loops, and respond more effectively to changes, ultimately increasing speed to market by up to 75%.  

What does this mean for you? 

With our partnership with AWS, we can modernize our clients’ processes to improve product quality, scalability, and performance, and significantly reduce release costs by up to 97%. This achievement ensures that our CI/CD processes and IT governance are sustainable and efficient, benefiting organizations of any size.  

At Perficient, we strive to be the place where great minds and great companies converge to boldly advance business, and this achievement is a testament to that vision!  

Unleash the Power of Your CloudFront Logs: Analytics with AWS Athena https://blogs.perficient.com/2024/05/22/unleash-the-power-of-your-cloudfront-logs-analytics-with-aws-athena/ https://blogs.perficient.com/2024/05/22/unleash-the-power-of-your-cloudfront-logs-analytics-with-aws-athena/#comments Wed, 22 May 2024 06:48:07 +0000 https://blogs.perficient.com/?p=362976

CloudFront, Amazon’s Content Delivery Network (CDN), accelerates website performance by delivering content from geographically distributed edge locations. But how do you understand how users interact with your content and optimize CloudFront’s performance? The answer lies in CloudFront access logs, and a powerful tool called AWS Athena can help you unlock valuable insights from them. In this blog post, we’ll explore how you can leverage Amazon Athena to simplify log analysis for your CloudFront CDN service.

Why Analyze CloudFront Logs?

CloudFront delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. However, managing and analyzing the logs generated by CloudFront can be challenging due to their sheer volume and complexity.

These logs contain valuable information such as request details, response status codes, and latency metrics, which can help you gain insights into your application’s performance, user behavior, and security incidents. Analyzing this data manually or using traditional methods like log parsing scripts can be time-consuming and inefficient.

By analyzing these logs, you gain a deeper understanding of:

  • User behavior and access patterns: Identify popular content, user traffic patterns, and potential areas for improvement.
  • Content popularity and resource usage: See which resources are accessed most frequently and optimize caching strategies.
  • CDN performance metrics: Measure CloudFront’s effectiveness by analyzing hit rates, latency, and potential bottlenecks.
  • Potential issues: Investigate spikes in errors, identify regions with slow response times, and proactively address issues.

Introducing AWS Athena: Your CloudFront Log Analysis Hero

Amazon Athena is a serverless query service that allows you to analyze data stored in Amazon S3 using standard SQL. Here’s why Athena is perfect for CloudFront logs:

  • Cost-Effective: You only pay for the queries you run, making it a budget-friendly solution.
  • Serverless: No infrastructure to manage – Athena takes care of everything.
  • Familiar Interface: Use standard SQL queries, eliminating the need to learn complex new languages.

Architecture:

Architecture diagram

Getting Started with Athena and CloudFront Logs

To begin using Amazon Athena for CloudFront log analysis, follow these steps:

1. Enable Logging in Amazon CloudFront

If you haven’t already done so, enable logging for your CloudFront distribution. This will start capturing detailed access logs for all requests made to your content.

2. Store Logs in Amazon S3

Configure CloudFront to store access logs in a designated Amazon S3 bucket. Ensure that you have the necessary permissions to access this bucket from Amazon Athena.

3. Create an Athena Table

Create an external table in Amazon Athena, specifying the schema that matches the structure of your CloudFront log files.

Below is the sample query we have used to create the table:

CREATE EXTERNAL TABLE IF NOT EXISTS cloudfront_logs (
  date STRING,
  time STRING,
  location STRING,
  bytes BIGINT,
  request_ip STRING,
  method STRING,
  host STRING,
  uri STRING,
  status INT,
  referrer STRING,
  user_agent STRING,
  query_string STRING,
  cookie STRING,
  result_type STRING,
  request_id STRING,
  host_header STRING,
  request_protocol STRING,
  request_bytes BIGINT,
  time_taken FLOAT,
  xforwarded_for STRING,
  ssl_protocol STRING,
  ssl_cipher STRING,
  response_result_type STRING,
  http_version STRING,
  fle_encrypted_fields STRING,
  fle_status STRING,
  unique_id STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' ESCAPED BY '\\' LINES TERMINATED BY '\n'
LOCATION 'paste your s3 URI here';

Click on the run button!

Query
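Once the table is created, a quick sanity check confirms Athena can read the logs (this assumes CloudFront has already delivered log files to the bucket configured above):

SELECT * FROM cloudfront_logs LIMIT 10;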

Extracting Insights with Athena Queries

Now comes the fun part – using Athena to answer your questions about CloudFront performance. Here are some sample queries to get you going:

Total Requests

Find the total number of requests served by CloudFront for a specific date range.

SQL

SELECT COUNT(*) AS total_requests
FROM cloudfront_logs
WHERE date BETWEEN '2023-12-01' AND '2023-12-31';

 

Most Requested Resources

Identify the top 10 most requested URLs from your CloudFront distribution. This query will give you a list of the top 10 most requested URLs along with their corresponding request counts. You can use this information to identify popular content and analyze user behavior on your CloudFront distribution.

SQL

SELECT uri, COUNT(*) AS request_count
FROM cloudfront_logs
GROUP BY uri
ORDER BY request_count DESC
LIMIT 10;

Traffic by Region

Analyze traffic patterns by user location.

This query selects the location field from your CloudFront logs (which typically represents the geographical region of the user) and counts the number of requests for each location. It then groups the results by location and orders them in descending order based on the request count. This query will give you a breakdown of traffic by region, allowing you to analyze which regions generate the most requests to your CloudFront distribution. You can use this information to optimize content delivery, allocate resources, and tailor your services based on geographic demand.

SQL

SELECT location, COUNT(*) AS request_count
FROM cloudfront_logs
GROUP BY location
ORDER BY request_count DESC;

 

Average Response Time

Calculate the average response time for CloudFront requests. Executing this query will give you the average response time for all requests served by your CloudFront distribution. You can use this metric to monitor the performance of your CDN and identify any potential performance bottlenecks.

SQL

SELECT AVG(time_taken) AS average_response_time
FROM cloudfront_logs;

 

Number of Requests According to Status

The below query will provide you with a breakdown of the number of requests for each HTTP status code returned by CloudFront, allowing you to identify any patterns or anomalies in your CDN’s behavior.

SQL

SELECT status, COUNT(*) AS count
FROM cloudfront_logs
GROUP BY status
ORDER BY count DESC;

Athena empowers you to create even more complex queries involving joins, aggregations, and filtering to uncover deeper insights from your CloudFront logs.
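For example, here is a sketch of a slightly more involved query, built only on the cloudfront_logs table defined above, that surfaces the URIs producing the most client and server errors:

SELECT uri, status, COUNT(*) AS error_count
FROM cloudfront_logs
WHERE status >= 400
GROUP BY uri, status
ORDER BY error_count DESC
LIMIT 10;

Results like these make it easy to spot broken links, misconfigured origins, or abusive traffic before users report them.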

Optimizing CloudFront with Log Analysis

By analyzing CloudFront logs, you can identify areas for improvement:

  • Resource Optimization: Resources with consistently high latency or low hit rates might benefit from being cached at more edge locations.
  • Geographic Targeting: Regions with high traffic volume might warrant additional edge locations to enhance user experience.

Conclusion

AWS Athena and CloudFront access logs form a powerful duo for unlocking valuable insights into user behavior and CDN performance. With Athena’s cost-effective and user-friendly approach, you can gain a deeper understanding of your content delivery and make data-driven decisions to optimize your CloudFront deployment.

Ready to Unleash the Power of Your Logs?

Get started with AWS Athena today and unlock the hidden potential within your CloudFront logs. With its intuitive interface and serverless architecture, Athena empowers you to transform data into actionable insights for a faster, more performant CDN experience.

Client Success Story: Ensuring the Safety and Efficacy of Clinical Trials https://blogs.perficient.com/2023/10/04/ensuring-the-safety-and-efficacy-of-clinical-trials/ https://blogs.perficient.com/2023/10/04/ensuring-the-safety-and-efficacy-of-clinical-trials/#comments Wed, 04 Oct 2023 14:30:09 +0000 https://blogs.perficient.com/?p=346230

Client  

Our client is an American multinational corporation that develops medical devices, pharmaceuticals, and consumer packaged goods.

Industry Background

Better understanding and engaging patients and members has never been more critical than it is today. To meet clinical, business, and evolving consumer needs, healthcare, and life sciences organizations are focused on care delivery that enables innovation in patient engagement, data and analytics, and virtual care.

Project Background

Since 2008, Perficient has delivered multiple infrastructure platforming, commerce, and management consulting projects for this multinational corporation. In early 2021, our life sciences experts presented a clinical data review platform (CDRP) demonstrating our strong industry perspective and expertise.

Leveraging AWS, Databricks, and more, our platform enables humans assisted with machine intelligence to enhance the data cleaning, reviewing, and analyzing process and provide an accurate picture of study status and what is needed to achieve a milestone. These services simplify provisioning and managing infrastructure, deploying application code, automating software release processes, and monitoring your application and infrastructure performance.

Scope and Success of the Engagement         

Our custom-built clinical data review and cleaning environment will provide an accurate, real-time view of clinical studies, allowing the enterprise to holistically monitor progress and drive critical decision-making. It will provide:

  • A frictionless, consistent UI design for a seamless user experience across platforms
  • Cloud-based platforming leveraging Databricks on AWS and intelligent integrations to ingest data from key sources
  • A central source of truth for clinical data review and discrepancy management
  • An improved data review and visualization environment merging Databricks and Spotfire into a single custom web UI to equip users with intuitive clinical data review and collaboration capabilities
  • AI and machine learning to detect anomalies in clinical data and, in future iterations, alert data managers and medical reviewers to problematic trends
  • Training plans and materials for pilot and business releases to support users in key persona groups

Perficient + Healthcare

Perficient has been trusted by 14 of the 20 largest pharma/biotech firms and 6 of the top 10 CROs. Our secure, easy-to-use, flexible, and scalable AWS application hosting and architecture solutions will set you up for innovation success. Contact us today for more information.

Deploying Azure Infrastructure With Terraform Using Azure DevOps Pipelines https://blogs.perficient.com/2023/07/13/deploying-azure-infrastructure-with-terraform-using-azure-devops-pipelines/ https://blogs.perficient.com/2023/07/13/deploying-azure-infrastructure-with-terraform-using-azure-devops-pipelines/#comments Thu, 13 Jul 2023 15:16:25 +0000 https://blogs.perficient.com/?p=340087

In this blog post, my objective is to provide a comprehensive walkthrough of the elements required for effectively implementing Azure Infrastructure with Terraform using an Azure DevOps Pipeline.

The main purpose is to assist you in grasping the concept of automating the deployment and maintenance of your cloud infrastructure residing in Azure.

Before delving into the provided examples, taking a step back and comprehending the underlying reasons for the aforementioned concepts would be beneficial.

Undoubtedly, we are incorporating various technologies in this context, each with its own advantages and disadvantages. The purpose of this article, I believe, is to enhance our fundamental understanding of each aspect and strive for a deployment approach that is repeatable, secure, and reliable.

What’s the Rationale Behind Choosing “GitHub”?

GitHub serves as a publicly accessible Source Code control platform. I have established a “public” repository to make my code accessible for this article.

Keep in mind that GitHub is not the only option available, as Azure DevOps Repos offers similar Git functionality. I won’t delve into the reasons for using Git here, as there are numerous articles that explain it much better. However, it is generally agreed upon that having a source code repository for control, auditing, and version management is highly beneficial.

Why Opt for “Azure”?

As a cloud platform, Azure provides businesses with opportunities for growth and scalability while effectively managing costs and capacity. The advantages of cloud computing are vast, and although I won’t delve into the specifics here, let’s assume that Azure is a favorable environment to work in.

Why Choose “Terraform”?

Terraform, in simple terms, allows IT professionals and developers to utilize infrastructure as code (IaC) tools in a single language to effortlessly deploy to various cloud platforms. These platforms, referred to as “Providers” in Terraform, encompass a wide range of options, and Terraform boasts hundreds of providers, including Azure.

Terraform simplifies the deployment, destruction, and redeployment process by utilizing a “tfstate” file, which we will discuss further in this article. This file enables Terraform to keep track of the deployment’s state since the last update and implement only the necessary changes implied by code updates. Additionally, Terraform includes a feature called “PLAN” that provides a report on the anticipated changes before you proceed to “APPLY” them.

Furthermore, Terraform inherently offers benefits such as source control and version control by allowing you to define your infrastructure as code.

Why Opt for “Azure DevOps”?

Azure DevOps is a collection of technologies designed to enhance business productivity, reliability, scalability, and robustness when utilized correctly. DevOps is a complex concept that requires thorough understanding, as evident in my other blog posts. From my perspective, DevOps revolves around three fundamental principles: people, process, and technology. Azure DevOps primarily falls under the “Technology” aspect of this triad.

Azure 1

Azure DevOps provides a range of tools, and for the purpose of this article, we will be utilizing “Pipelines.” This tooling, combined with Azure DevOps, offers features that automate infrastructure deployment with checks based on triggers. Consequently, it ensures that our code undergoes testing and deployment within a designated workflow, if necessary. By doing so, it establishes an auditable, repeatable, and reliable mechanism, mitigating the risk of human errors and other potential issues.

Let’s Outline Our Plan:

By bringing together the four key components discussed in this article (GitHub, Azure, Terraform, and Azure DevOps), we can harness a set of technologies that empower us to design and automate the deployment and management of infrastructure in Azure. As IT professionals, we can all appreciate the value and advantages of streamlining the design, deployment, and automation processes for any company.

Azure 2

We will focus on deploying a foundational landing zone into our Azure Subscription. Here are the essential components required to achieve this:

1. GitHub Repository: We’ll utilize a GitHub repository to store our code and make it accessible.

2. Azure Subscription: We need an Azure Subscription to serve as the environment where we will deploy our infrastructure.

3.1. Terraform Code (local deployment): We’ll use Terraform code, executed from our local machine (surajsingh-app01), to deploy the following Azure infrastructure components:

  • Resource Group
  • Virtual Network
  • Virtual Machine
  • Storage Account

3.2. Terraform Code (shared state deployment): Additionally, we’ll employ Terraform code that deploys Azure infrastructure while utilizing a shared state file.

4. Azure DevOps Organization: We’ll set up an Azure DevOps Organization, which provides a platform for managing our development and deployment processes.

5. Azure DevOps Pipeline: Within our Azure DevOps Organization, we will configure a pipeline to automate the deployment of our infrastructure.

By following this approach, we can establish a solid foundation for our Azure environment, allowing for efficient management and automation of infrastructure deployment.

1.1 – Establish Your GitHub Repository and Duplicate It to Your Local Computer

After logging into github.com, I successfully created a basic repository containing a README.md file. You can access the repository at the following URL: https://github.com/sunsunny-hub/AzureTerraformDeployment.

Azure 3

1.2 – Duplicate the Repository to Your Local Machine for Utilization in VSCode

To facilitate interaction and modification of your Terraform code on your local computer, you can clone the recently created GitHub Repository and employ your local machine to edit files and commit changes back to the repository on GitHub.

  • To begin, launch VSCode
  • Press CTRL + Shift + P to open the Command Palette and run the “Git: Clone” command
  • Enter the URL of your GitHub Repository. (Note: If you do not have the GitHub VSCode extension, you can install it from the extensions tab in VSCode.)

Azure 4

  • Choose a location on your local machine for the cloned repository.

Azure 5

  • Open the cloned repository on your local machine using VSCode.

Azure 6

Azure 7

2 – Azure Subscription

Ensure that you have the necessary permissions and access to an Azure Subscription that can be used for deploying infrastructure. If you do not currently possess a subscription, you have the option to sign up for a complimentary trial by visiting the AZURE FREE TRIAL page.

This free trial gives you the following:

Azure 8

3.1 – Terraform Code for Deploying Azure Infrastructure from Local Machine

Now, let’s proceed with creating our Terraform code. We will develop and test it on our local machine before making any modifications for deployment through Azure DevOps Pipelines. The steps below assume that you have already installed the latest Terraform module on your local machine. If you haven’t done so, you can refer to the provided guide for assistance. Additionally, it is assumed that you have installed the AZCLI (Azure Command-Line Interface).

Azure 9

To begin, open the VSCode Terminal and navigate to the folder of your newly cloned repository on your local machine. Once there, type ‘code .’ (including the period) to open our working folder in VS Code.

Next, enter the command ‘az login’ in the Terminal.

Azure 10

This action will redirect you to an OAUTH webpage, where you can enter your Azure credentials to authenticate your terminal session. It is important to note that, at this stage, we are authenticating our local machine in order to test our Terraform code before deploying it using Azure DevOps Pipelines.

Some accounts have MFA enabled, so you may need to log in with your Tenant ID; use the following command: ‘az login --tenant TENANT_ID’.

Azure 11

After successful authentication, you will receive your subscription details in JSON format. If you have multiple subscriptions, you will need to set the context to the desired subscription. This can be done using either the Subscription ID or the Subscription name.

For example, to set the context for my subscription, I would use the following command: ‘az account set --subscription "<Subscription ID or Subscription name>"’

Now, let’s move on to our Terraform Code. In order to keep this deployment simple, I will store all the configurations in a single file named ‘main.tf’. To create this file, right-click on your open folder and select ‘New File’, then name it ‘main.tf’.

Azure 12

The initial Terraform code I will use to create the infrastructure is as follows:

main.tf

terraform {
  required_providers {
    azurerm = {
      # Specify what version of the provider we are going to utilise
      source  = "hashicorp/azurerm"
      version = ">= 2.4.1"
    }
  }
}

provider "azurerm" {
  features {
    key_vault {
      purge_soft_delete_on_destroy = true
    }
    resource_group {
      prevent_deletion_if_contains_resources = false
    }
  }
}

data "azurerm_client_config" "current" {}

# Create our Resource Group - surajsingh-RG
resource "azurerm_resource_group" "rg" {
  name     = "surajsingh-app01"
  location = "UK South"
}

# Create our Virtual Network - surajsingh-VNET
resource "azurerm_virtual_network" "vnet" {
  name                = "surajsinghvnet"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

# Create our Subnet to hold our VM - Virtual Machines
resource "azurerm_subnet" "sn" {
  name                 = "VM"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.1.0/24"]
}

# Create our Azure Storage Account - surajsinghsa
resource "azurerm_storage_account" "surajsinghsa" {
  name                     = "surajsinghsa"
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags = {
    environment = "surajsinghrox"
  }
}

# Create our vNIC for our VM and assign it to our Virtual Machines Subnet
resource "azurerm_network_interface" "vmnic" {
  name                = "surajsinghvm01nic"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.sn.id
    private_ip_address_allocation = "Dynamic"
  }
}

# Create our Virtual Machine - surajsinghvm01
resource "azurerm_virtual_machine" "surajsinghvm01" {
  name                  = "surajsinghvm01"
  location              = azurerm_resource_group.rg.location
  resource_group_name   = azurerm_resource_group.rg.name
  network_interface_ids = [azurerm_network_interface.vmnic.id]
  vm_size               = "Standard_B2s"

  storage_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2016-Datacenter-Server-Core-smalldisk"
    version   = "latest"
  }

  storage_os_disk {
    name              = "surajsinghvm01os"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "surajsinghvm01"
    admin_username = "surajsingh"
    admin_password = "Password123$"
  }

  os_profile_windows_config {
  }
}

We will begin by executing the ‘Terraform INIT’ command.

Azure 13.1

Next, we will assess the actions that Terraform intends to perform in our Azure environment by running the ‘Terraform PLAN’ command. Although the actual output exceeds the content displayed in this screenshot, the provided snippet represents the initial portion, while the following snippet represents the concluding part.

Azure 13

Upon examining the output, it becomes evident that the ‘PLAN’ command displays on the screen the operations that will be executed in our environment. In my case, it involves adding a total of 6 items.

Azure 14

Now, let’s test the successful deployment from our local machine using the ‘Terraform APPLY’ command. The execution of this command will take a couple of minutes, but upon completion, you should observe that all the expected resources are present within the resource group.

Azure 15

At this point, we have verified the functionality of our Terraform code, which is excellent news. However, it is worth noting that during the execution of the ‘Terraform APPLY’ command, several new files were generated in our local folder.

Azure 16

Of particular importance is the ‘terraform.tfstate’ file, which contains the current configuration that has been deployed to your subscription. This file serves as a reference for comparing any discrepancies between your Terraform code and the ‘main.tf’ file. Therefore, it is crucial to recognize that currently, making any changes to our environment requires the use of the local PC. While this approach suffices for personal or testing purposes in a small-scale environment, it becomes inadequate for collaboration or utilizing services like Azure DevOps Pipelines to execute commands. Consequently, there is a need to store the state file in a centralized location accessible to all stakeholders, ensuring the secure storage of credentials and appropriate updates to the Terraform code.

This is precisely what we will explore in the upcoming section. In preparation, we can leverage the ‘Terraform DESTROY’ command to remove all infrastructure from our subscription, thereby enabling us to focus on relocating our state file to a centralized location.
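For reference, the local workflow used so far boils down to four commands, run from the folder containing main.tf:

terraform init      # download the azurerm provider and initialise the working directory
terraform plan      # preview the changes Terraform intends to make
terraform apply     # deploy the six resources to the subscription
terraform destroy   # tear the infrastructure down again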

3.2 – Terraform Code for Deploying Azure Infrastructure with a Shared State File

The subsequent phase in this process involves segregating the Terraform State file and relocating it to a centralized location, such as an Azure Storage account.

This endeavor entails considering a few essential aspects:

  • The storage account must be established prior to applying the Terraform code. To accomplish this, we will utilize a bash script as a one-time activity.
  • Terraform accesses this Storage account through a Shared Secret key, which necessitates proper protection. It should not be stored within a script or, certainly, within a Git Repository. We will explore alternative options for securely storing this key.

To begin, our initial task is to create the storage account and container that will house our Terraform State File. This can be achieved by executing the following Bash script:

#!/bin/bash

RESOURCE_GROUP_NAME=surajsingh-infra
STORAGE_ACCOUNT_NAME=surajsinghtstate
CONTAINER_NAME=tstate

# Create resource group
az group create --name $RESOURCE_GROUP_NAME --location uksouth

# Create storage account
az storage account create --resource-group $RESOURCE_GROUP_NAME --name $STORAGE_ACCOUNT_NAME --sku Standard_LRS --encryption-services blob

# Get storage account key
ACCOUNT_KEY=$(az storage account keys list --resource-group $RESOURCE_GROUP_NAME --account-name $STORAGE_ACCOUNT_NAME --query '[0].value' -o tsv)

# Create blob container
az storage container create --name $CONTAINER_NAME --account-name $STORAGE_ACCOUNT_NAME --account-key $ACCOUNT_KEY

echo "storage_account_name: $STORAGE_ACCOUNT_NAME"
echo "container_name: $CONTAINER_NAME"
echo "access_key: $ACCOUNT_KEY"

After executing the script, it is crucial to take note of the exported values for future use:

  • storage_account_name: surajsinghtstate
  • container_name: tstate
  • access_key: UeJRrCh0cgcw1H6OMrm8s+B/AGCCZIbER5jaJUAYnE8V2tkzzm5/xSCILXikTOIBD6hrcnYGQXbk+AStxPXv+g==

The access_key is the storage account key Terraform will use to authenticate against the container; the state file itself (the blob named by the backend’s key attribute) is created automatically during the initial run.

Upon checking our Azure subscription, we can confirm the successful creation of the storage account and container, which are now ready to accommodate our Terraform State file.

Azure 17

Azure 18

Configure State Backend in Terraform

Excellent! Our next objective is to modify the main.tf Terraform script to enable Terraform to utilize the shared state location and access it through the Key Vault. This can be achieved by configuring what is commonly referred to as the ‘state backend’. As mentioned earlier, one option would be to directly include the Storage Account access key in our Terraform file. However, this approach is not considered best practice since our main.tf file will be stored in a Git Repository, raising security concerns. Hence, the implementation of a Key Vault.

For now, until we transition to Azure DevOps Pipelines, we will create the backend configuration using the raw Access Key. This step is performed to showcase the process.
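Instead of hardcoding the access key in the backend block below, the more secure pattern is to export it as the ARM_ACCESS_KEY environment variable, which the azurerm backend reads automatically. A small sketch of that pattern is shown here; the vault and secret names are placeholders and are not created anywhere in this article:

# Pull the backend access key out of Key Vault into the current shell session
export ARM_ACCESS_KEY=$(az keyvault secret show --name terraform-backend-key --vault-name <your_key_vault_name> --query value -o tsv)

With that in place, the access_key line can be dropped from the backend configuration entirely.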

To achieve this, we simply add the following backend block inside the existing terraform {} block of our main.tf file. By doing so, Terraform will store the state file in a centralized location, namely our Azure Storage Account.

backend "azurerm" {
  resource_group_name  = "surajsingh-infra"
  storage_account_name = "surajsinghtstate"
  container_name       = "tstate"
  key                  = "terraform.tfstate"
  access_key           = "UeJRrCh0cgcw1H6OMrm8s+B/AGCCZIbER5jaJUAYnE8V2tkzzm5/xSCILXikTOIBD6hrcnYGQXbk+AStxPXv+g=="
}

If we execute Terraform INIT and Terraform PLAN now, we should witness a successful creation of the plan:

Azure 19
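Note that because the backend changed from local to azurerm, Terraform detects the change during init. A small sketch of the re-initialisation is below; which flag you want depends on your Terraform version and whether the old local state should be carried across:

terraform init -migrate-state   # copy any existing local state into the new azurerm backend
# or, to use the remote backend as-is without copying local state:
terraform init -reconfigure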

In fact, our state file no longer exists locally. If we check the container within our Azure storage account, we can confirm its presence!

Azure 20

This is a success!

Ensure that you commit and push your changes to your GitHub Repo. For this particular step, I have included a ‘.gitignore’ file to prevent the upload of certain files, such as the Terraform Provider EXE, into GitHub.

Azure 21

4 – Azure DevOps Organization

Now that we have successfully deployed our infrastructure using a shared location for our Terraform State, our next step is to automate this process based on triggers from the ‘main’ branch of our GitHub repo.

Additionally, we need to remove the Storage Account Access Key as part of the following procedure.

To begin, we must set up an Azure DevOps Organization. Visit the Azure DevOps site (dev.azure.com):

Azure 22

I have set up my Organization as follows:

Azure 23

Surajsingh5233 and azure_autodeploy_terraform are the organization and project I have created for the purposes of this article.

5 – Azure DevOps Pipeline

First, we need to create a service principal (SPN) to allow our Azure DevOps project to deploy to our environment.

Within our Azure DevOps Project, navigate to Project Settings -> Service Connections.

Azure 24

Click on ‘Create Service Connection’ -> ‘Azure Resource Manager’ -> ‘Next’.

Azure 25

Then select ‘Service principal (automatic)’ -> ‘Next’.

Azure 26

These are the scope settings used for my SPN:

Azure 27

You can verify the configuration of your SPN by reviewing the following output:

Azure 28

Here is our Managed Service Principal in Azure:

Azure 29

For the purpose of this article, I will grant this SPN Contributor access to my subscription.

Azure 30
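The role assignment above was done through the portal; an equivalent CLI sketch is below. The application (client) ID of the service principal and the subscription ID are placeholders you would substitute:

az role assignment create --assignee <spn_application_id> --role Contributor --scope /subscriptions/<subscription_id>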

With all these components in place, it is now time to create our pipeline.

Select ‘Pipelines’ -> ‘Create Pipeline’.

Azure 31

For this example, I will use the classic editor as it simplifies the process for those unfamiliar with YAML files.

Azure 32

Select ‘GitHub’ and log in.

Azure 33

Log in to your GitHub Account.

Azure 34

Scroll down to ‘Repository Access’ and select your repo, then click ‘Approve and Install’.

This will authorize Azure DevOps to access your GitHub Repo. Next, we want to select ‘GitHub’.

Azure 35

For the purpose of this article, we will set up a single stage in our pipeline which will run the following tasks:

  1. Install Terraform
  2. Run Terraform INIT
  3. Run Terraform PLAN
  4. Run Terraform VALIDATE
  5. Run Terraform APPLY to deploy our infrastructure to our subscription.

This pipeline run will be triggered by a code commit to the ‘main’ branch of the repo.

To begin creating our pipeline, select ‘Empty Pipeline’.

Azure 36

We are then presented with a pipeline to start building.

Azure 37

Next, we want to select each task and configure them as follows:

Install Terraform

Azure 38

Terraform: INIT

Azure 39

In this task, we can configure the Terraform backend that we have in our main.tf as follows:

Azure 40

Terraform: PLAN

Azure 41

Make sure you provide the proper subscription in the Providers option.

Azure 42

Terraform: VALIDATE

Azure 43

Terraform: APPLY

Azure 44

Once we have completed the configuration, we can save it, and a pipeline ready to run will be displayed.

Azure 45

To manually start the pipeline, we can select:

Azure 46

However, in the spirit of CI/CD, we can modify the CI-enabled flag on our pipeline.

Azure 47

Now, when we modify our code and commit it to our ‘master’ branch in GitHub, this pipeline will automatically run and deploy our environment for us. I commit a change via VS Code and push it to my GitHub Repo.

There we have it! Our pipeline is triggered by the commit and push.

Azure 48

We need to wait for all our tasks to complete and hope that there are no errors.

Azure 49

Our job has been completed successfully.

If we check our Azure Subscription, we can see that our application infrastructure has been deployed as expected.

Azure 50

SUCCESS!

Conclusion

Congratulations on making it to the end of this article, and thank you for following along! I genuinely hope that this guide has been helpful in assisting you with creating your first Azure DevOps Pipeline.

Although we haven’t explored YAML in this article, it is worth mentioning that the pipeline is actually created as a file with a .yaml extension. This opens up even more intriguing concepts, which I won’t delve into here.

The next steps from here would be to explore YAML and the ability to check it into your Git Repo.
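As a rough illustration only, and not the exact YAML the classic editor generates, an equivalent azure-pipelines.yml using plain script steps might look like the sketch below. The service connection name is a placeholder, and it assumes the Terraform CLI is already available on the agent image:

trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: AzureCLI@2
    displayName: Terraform init, validate, plan and apply
    inputs:
      azureSubscription: '<your_service_connection_name>'   # placeholder service connection
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        terraform init
        terraform validate
        terraform plan -out=tfplan
        terraform apply -auto-approve tfplan

Checking a file like this into the repository is exactly the “explore YAML” step mentioned above.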

Additionally, we could delve into the capability of Azure DevOps to apply branch protection. In reality, you wouldn’t directly commit changes to the ‘main’ or ‘master’ branch. Implementing measures such as requiring approvals and using pull requests can help ensure that our main application isn’t accidentally overwritten. Once again, congratulations on reaching the end, and best of luck with your future endeavors!

 

 

Dealing with Wildcard SSL Certificates on Azure and Kubernetes https://blogs.perficient.com/2023/06/28/dealing-with-wildcard-ssl-certificates-on-azure-and-kubernetes/ https://blogs.perficient.com/2023/06/28/dealing-with-wildcard-ssl-certificates-on-azure-and-kubernetes/#respond Wed, 28 Jun 2023 17:34:53 +0000 https://blogs.perficient.com/?p=338708

It is almost certain that every DevOps engineer approaches the challenge of implementing SSL certificates at some point.

Of course, there are free certificates, such as the well-known Let’s Encrypt. As with any free solution, it has a number of limitations; all the restrictions are detailed on the certificate provider’s page. Some of the inconveniences encountered:

  • Certificates have a maximum validity period of 3 months and must be reissued regularly
  • When using Kubernetes, certificates have to be stored in the K8S itself and constantly regenerated
  • There are a number of nuances with using and reissuing wildcard certificates
  • Certain other features of the usage of protocols and encryption algorithms

Facing these issues from time to time, I came up with my own customization of the certificate solution, which I would like to share.

You may have heard about cert-manager; let’s install it with helm (my preferred way):

helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.11.2/cert-manager.crds.yaml
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.11.2

so that you can create a ClusterIssuer as below:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-cluster-issuer
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: martin.miles@perficient.com   #replace with your e-mail
    privateKeySecretRef:
      name: letsencrypt-cluster-issuer
    solvers:
      - http01:
          ingress:
            class: nginx

At this stage, you have two options for issuing certificates:

  • via adding kind: certificate
  • via ingress

1. In the first case, here’s what my yaml looks like:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myservice
  namespace: test
spec:
  duration: 2160h # 90 days
  renewBefore: 72h
  dnsNames:
    - replace_with_your.hostname.com  # replace with yours
  secretName: myservice-tls
  issuerRef:
    name: letsencrypt-cluster-issuer
    kind: ClusterIssuer

In that case, your ingress only references secretName: myservice-tls in the tls section for the desired service. The above file has a couple of helpful parameters:

  • duration – the certificate lifetime, in hours
  • renewBefore – how long before expiration cert-manager should renew the existing certificate

Tip: you can inspect the certificate in more detail by using the below kubectl command:

kubectl describe certificates <certificate name> -n <namespace>

2. Working with Let’s Encrypt certificates using Ingress seems to be more convenient and reliable. In addition to the secretName and hostname in the tls section, you only need to add annotations:

annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-cluster-issuer"
    cert-manager.io/renew-before: 72h

And that’s it! Certificates are reissued automatically (within 3 days of expiration, as configured above); Let’s Encrypt certificates are valid for 90 days by default.

Azure Key Vault

When developing a project for Azure, you’ll likely store your certificates in Azure Key Vault (commonly abbreviated as AKV). Once you purchase a certificate from Azure, you’ll be prompted on how to add it to AKV – there’s nothing specific, just steps to prove and verify your domain ownership. Once you complete all stages and collect all the green ticks, your certificate will show up under Secrets in AKV.

That approach benefits from an auto-update of certificates. A year later an updated certificate appears in AKV and automatically synchronizes with Secret in Kubernetes.

However, for Kubernetes to be able to use this cert we need to grant permissions. First, we need to obtain identityProfile.kubeletidentity.objectId of the cluster:

az aks show -g <ResourceGroup> -n <AKS_cluster_name>
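If you only want that single value, the same command can be filtered with the CLI’s built-in JMESPath query (a small convenience sketch using the property path mentioned above):

az aks show -g <ResourceGroup> -n <AKS_cluster_name> --query identityProfile.kubeletidentity.objectId -o tsv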

the above returns an ID we require to provide in order to grant permission to secrets:

az keyvault set-policy --name <AKV_name> --object-id <identityProfile.kubeletidentity.objectId from the past step> --secret-permissions get

At this stage, we can install akv2k8s – a tool that makes Azure Key Vault secrets, certificates, and keys available in Kubernetes and/or your application in a simple and secure way (here’s the installation guide with helm).

Next, synchronize the certificate from Azure Key Vault to Secret as per the official documentation.

apiVersion: spv.no/v1
kind: AzureKeyVaultSecret
metadata:
  name: wildcard-cert # any name of your preference
  namespace: default
spec:
  vault:
    name: SandboxKeyVault  # you certificate storage name in Azure
    object:
      name: name_object_id #object id from Azure AKV for the certificate
      type: secret
  output:
    secret:
      name: wildcard-cert # any name of secret within your namespace
      type: kubernetes.io/tls
      chainOrder: ensureserverfirst # this line is important - read below!

The last line is extremely important. The original problem was that despite the certificate being passed to Kubernetes correctly, it still did not work, and it turned out to be a non-trivial issue. The reason is that when a PFX certificate is exported from Key Vault, the server certificate appears at the end of the chain rather than at the beginning, where you would expect it. If used together with ingress-nginx, the certificate will not be loaded and nginx will fall back to its default certificate. Specifying chainOrder: ensureserverfirst resolves this issue by placing the server certificate first in the chain, which otherwise has the following order:

  1. Intermediate
  2. Root
  3. Server
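Once synchronized, the resulting Secret can be referenced from an ingress just like any other TLS secret. A minimal sketch is shown below, assuming the wildcard-cert Secret created above and a hypothetical hostname:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myservice
  namespace: default
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - myservice.domain.com    # hypothetical hostname
      secretName: wildcard-cert   # the Secret produced by akv2k8s above
  rules:
    - host: myservice.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myservice
                port:
                  number: 80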

 Wildcard Certificates

It is possible to purchase a certificate at Azure directly (actually served by GoDaddy) with two potential options:

  • for a specific domain
  • wildcard certificates

Note that wildcard certificates only cover one level down, not two or more: *.domain.com does not cover *.*.domain.com. For example, this is inconvenient when you would like to set up lower-level API endpoints under your subdomain-hosted websites. Without purchasing additional nested certificates, the only way to resolve this is by adding SAN (Subject Alternative Name) records to the certificate. Unfortunately, doing that is not easily possible, even through Azure support, which is hard to believe. That contrasts with AWS Certificate Manager, which supports up to 10 SANs with a wildcard (*). Sad but true…

Azure Front Door

Azure Front Door (AFD) is a globally distributed application acceleration service provided by Microsoft. It acts as a cloud-based entry point for applications, allowing you to optimize and secure the delivery of your web applications, APIs, and content to users around the world. Azure Front Door operates at Layer 7 (HTTP/HTTPS) and can handle SSL/TLS encryption/decryption on behalf of your application, offloading the compute overhead from your backend servers. It also supports custom domain and certificate management, and that is what we’re interested in.

When working with HTTPS you can generate the certificate at AFD, upload your own, or sync the one from AKV (however, you still need to grant AFD permission to access the certificate in AKV). The last approach lets you rely on the latest version of the secret – that, in fact, takes away all the pain of certificate renewal: an updated cert takes effect as soon as it is issued.

Tip:

When creating a backend pool and specifying your external AKS cluster IP address, make sure to leave the “Backend host header” field empty. It will fill in automatically with the values from the input box above.

AKS Cluster external IP Address

An alternative option would be to route all HTTPS traffic from AFD to AKS, without SSL offloading at AFD. For AFD to work in this mode, you must specify a DNS name matching your AKS cluster (because of SNI and health probes); otherwise it won’t work.

That introduces additional work. Say you’ve already got AKS clusters without any DNS name, serving traffic directly, which you now want to route through AFD. To make this work you need a separate DNS name for the AKS cluster, set up the DNS records, and create a service with the certificate attached to ingress. Only once that is done will HTTPS traffic redirected to the AKS cluster work properly.

Tip: Another thing you may want to do is increase security for the above case by restricting AKS access to only the AFD IP ranges in your Network Security Group for AKS. In addition, you may instruct ingress to only accept requests carrying your Azure Front Door’s ID in the X-Azure-FDID header (see the sketch below).
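A minimal sketch of that header check with ingress-nginx is shown here; the Front Door ID is a hypothetical placeholder, and the annotation injects a small nginx snippet into the generated location block of your ingress:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      # Reject requests that did not arrive through our Front Door instance
      if ($http_x_azure_fdid != "11111111-2222-3333-4444-555555555555") {
        return 403;
      }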

Lessons Learned

  1. Azure Front Door is a pretty flexible routing service with powerful configuring options
  2. Wildcard certificates only serve subdomains one level down, for other cases use SAN
  3. SAN records are however not supported with Azure-purchased certificates, so use other vendors
  4. Let’s Encrypt certificates are still OK to use with auto-renewal; they’re free and allow wildcards