Automating deployments is crucial for efficiency and reliability in today’s fast-paced development environment. GitHub Actions provides a seamless way to implement CI/CD pipelines, allowing developers to automate the deployment of static websites without manual intervention.
In this blog, we will explore how to deploy a static website to an AWS S3 bucket using GitHub Actions. We’ll cover setting up an S3 bucket, configuring IAM roles for secure authentication, and leveraging GitHub Actions workflows to streamline deployment. By the end, you’ll have a fully automated pipeline that ensures quick and secure deployments with minimal effort.
First, create a GitHub repository. I have already created one with the same name, which is why it already exists in my account.
You can clone the repository from the URL below and put it into your local system. I have added the website-related code to my GitHub repository, so you just need to clone it: https://github.com/Kunal2795/Static-Website.git.
Push the code for this static website along with your changes, such as the updated bucket name and AWS region. I already have it locally, so you just need to push it using the Git commands below:
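The exact commands aren't reproduced in the original post, but a typical sequence looks like this (the commit message and branch name are only examples):

git add .
git commit -m "Update bucket name and AWS region"
git push origin main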
Once the changes are pushed to your GitHub repository, ensure the main.yaml file is in the .github/workflows directory.
If the main.yaml file is not present in the .github/workflows/ directory, create it and add a job to run the static website pipeline in GitHub Actions. The main.yaml file is the primary configuration file in GitHub Actions that runs the entire pipeline.
Add the following job code to the main.yaml file in the .github/workflows/ directory:
name: Portfolio Deployment2
on:
  push:
    branches:
      - main
jobs:
  build-and-deploy:
    runs-on: [self-hosted, silver]
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: Deploy static site to S3 bucket
        run: aws s3 sync . s3://kc-devops --delete
You need to make a few modifications to the above job, such as:
Launch an EC2 instance with Ubuntu OS using a simple configuration.
After that, create a self-hosted runner using specific commands. To get these commands, go to Settings in GitHub, navigate to Actions, click on Runners, and then select Create New Self-Hosted Runner.
Select Linux as the runner image.
Once the runner is downloaded and configured, check its status to see whether it is Idle or Offline. If it is Offline, start the GitHub runner service on your EC2 server.
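For reference, the Linux runner setup generally looks like the commands below; copy the exact download URL, checksum verification, and registration token from the runner creation page in GitHub, since the version and token shown here are only placeholders:

mkdir actions-runner && cd actions-runner
curl -o actions-runner-linux-x64-2.316.0.tar.gz -L https://github.com/actions/runner/releases/download/v2.316.0/actions-runner-linux-x64-2.316.0.tar.gz
tar xzf ./actions-runner-linux-x64-2.316.0.tar.gz
./config.sh --url https://github.com/<owner>/<repo_name> --token <token>
sudo ./svc.sh install && sudo ./svc.sh start   # run the runner as a service so it stays online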
Also, ensure that AWS CLI is installed on your server.
Create an IAM user and grant it full access to EC2 and S3 services.
Then, go to Security Credentials, create an Access Key and Secret Access Key, and securely copy and store both the Access Key and Secret Access Key in a safe place.
Next, navigate to GitHub Actions → Secrets & Variables → Actions, then add your AWS Access Key ID and Secret Access Key securely.
After adding the Access Key ID and Secret Access Key, proceed to the next section: S3.
Create an S3 bucket—I have created one with the name kc-devops.
Add the policy below to your S3 bucket and update the bucket name with your own bucket name.
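The original policy isn't included here, but a typical public-read bucket policy for static website hosting looks like the following; treat it as an example and replace kc-devops with your own bucket name:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::kc-devops/*"
    }
  ]
}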
After setting up everything, go to GitHub Actions, open the main.yaml file, update the bucket name, and commit the changes.
Then, click the Actions tab to see all your triggered workflows and their status.
We can see that all the steps for the build and deploy jobs have been successfully completed.
Lastly, sign in to the AWS Management Console and open the Amazon S3 console. Verify that all the code files are stored in your bucket.
Then, go to the Properties tab. Under Static website hosting, find and click on the endpoint URL (the bucket website endpoint).
This Endpoint URL is the Amazon S3 website endpoint for your bucket.
Finally, we have successfully deployed and hosted a static website using automation to the Amazon S3 bucket.
With this setup, whenever you push changes to your GitHub repository, GitHub Actions automatically triggers the deployment process. This ensures that your static website is seamlessly updated and deployed to your AWS S3 bucket without any manual intervention. This automation streamlines the deployment workflow, making it more efficient and less error-prone.
Sitecore frequently releases hotfixes to address reported issues, including critical security vulnerabilities or urgent problems. Having a quick, automated process to apply these updates is crucial. By automating the deployment of Sitecore hotfixes with an Azure DevOps pipeline, you can ensure faster, more reliable updates while reducing human error and minimizing downtime. This approach allows you to apply hotfixes quickly and consistently to your Azure PaaS environment, ensuring your Sitecore instance remains secure and up to date without manual intervention. In this post, we’ll walk you through how to automate this process using Azure DevOps.
Before diving into the pipeline setup, make sure you have the following prerequisites in place:
Steps to Automate Sitecore Hotfix Deployment
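The detailed steps aren't reproduced in this excerpt, but a minimal sketch of such a pipeline might look like the YAML below. It assumes the hotfix WDP sits in a storage account named mysitecorehotfixes in a container named hotfixes, that a service connection named My-Azure-Subscription exists with rights to read the blob, and that the Sitecore CM role runs as an Azure App Service named my-sitecore-cm; task versions, input names, and resource names should be adjusted to your environment:

trigger: none   # run manually when a hotfix needs to be applied

pool:
  vmImage: 'windows-latest'

steps:
  # Download the hotfix WDP from the central storage account
  # (assumes the service connection identity has Storage Blob Data Reader on the account)
  - task: AzureCLI@2
    inputs:
      azureSubscription: 'My-Azure-Subscription'
      scriptType: 'ps'
      scriptLocation: 'inlineScript'
      inlineScript: >
        az storage blob download
        --auth-mode login
        --account-name mysitecorehotfixes
        --container-name hotfixes
        --name "Sitecore-Hotfix.scwdp.zip"
        --file "$(Pipeline.Workspace)/Sitecore-Hotfix.scwdp.zip"

  # Apply the WDP to the Sitecore CM App Service
  - task: AzureRmWebAppDeployment@4
    inputs:
      azureSubscription: 'My-Azure-Subscription'
      appType: 'webApp'
      WebAppName: 'my-sitecore-cm'
      packageForLinux: '$(Pipeline.Workspace)/Sitecore-Hotfix.scwdp.zip'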
Automating the deployment of Sitecore hotfixes to Azure PaaS with an Azure DevOps pipeline saves time and ensures consistency and accuracy across environments. By storing the hotfix WDP in an Azure Storage Account, you create a centralized, secure location for all your hotfixes. The Azure DevOps pipeline then handles the rest—keeping your Sitecore environment up to date.
This process makes applying Sitecore hotfixes faster, more reliable, and less prone to error, which is exactly what you need in a production environment.
Visual Studio Code (VS Code) has become a ubiquitous tool in the software development world, prized for its speed, versatility, and extensive customization options. At its heart, VS Code is a lightweight, open-source code editor that supports a vast ecosystem of extensions. These extensions are the key to unlocking the true potential of VS Code, transforming it from a simple editor into a powerful, tailored IDE (Integrated Development Environment).
This blog post will explore the world of VS Code extensions, focusing on how they can enhance your development team’s productivity, code quality, and overall efficiency. We’ll cover everything from selecting the right extensions to managing them effectively and even creating your own custom extensions to meet specific needs.
Extensions are essentially plugins that add new features and capabilities to VS Code. They can range from simple syntax highlighting and code completion tools to more complex features like debuggers, linters, and integration with external services. The Visual Studio Code Marketplace hosts thousands of extensions, catering to virtually every programming language, framework, and development workflow imaginable.
Popular examples include Prettier for automatic code formatting, ESLint for identifying and fixing code errors, and Live Share for real-time collaborative coding.
The benefits of using VS Code extensions are numerous and can significantly impact your development team’s performance.
As software development teams grow and projects become more complex, managing IDE tools effectively becomes crucial. A well-managed IDE environment can significantly impact a team’s ability to deliver high-quality software on time and within budget.
Effectively managing VS Code extensions within a team requires a strategic approach. Here are some best practices to consider:
While VS Code extensions offer numerous benefits, they can also introduce security risks if not managed properly. It’s crucial to be aware of these risks and take steps to mitigate them.
In some cases, existing extensions may not fully meet your team’s specific needs. Creating custom VS Code extensions can be a powerful way to add proprietary capabilities to your IDE and tailor it to your unique workflow. One exciting area is integrating AI Chatbots directly into VS Code for code generation, documentation, and more.
Identify the Need: Start by identifying the specific functionality that your team requires. This could be anything from custom code snippets and templates to integrations with internal tools and services. For this example, we’ll create an extension that allows you to highlight code, right-click, and generate documentation using a custom prompt sent to an AI Chatbot.
Learn the Basics: Familiarize yourself with the Visual Studio Code Extension API and the tools required to develop extensions. The API documentation provides comprehensive guides and examples to help you get started.
Set Up Your Development Environment: Install the necessary tools, such as Node.js and Yeoman, to create and test your extensions. The Yeoman generator for Visual Studio Code extensions can help you quickly scaffold a new project.
Develop Your Extension: Write the code for your extension, leveraging the Visual Studio Code Extension API to add the desired functionality. Be sure to follow best practices for coding and testing to ensure that your extension is reliable, maintainable, and secure.
Test Thoroughly: Test your extension in various scenarios to ensure that it works as expected and doesn’t introduce any new issues. This includes testing with different configurations, environments, and user roles.
Distribute Your Extension: Once your extension is ready, you can distribute it to your team. You can either publish it to the Visual Studio Code Marketplace or share it privately within your organization. Consider using a private extension registry to manage and distribute your custom extensions securely.
Developing robust and efficient VS Code extensions requires careful attention to best practices. Here are some key considerations:
Resource Management:
- Use the context.subscriptions.push() method to register disposables, which will be automatically disposed of when the extension is deactivated.
- Use the deactivate() function to clean up any resources that need to be explicitly released when the extension is deactivated.
Asynchronous Operations (illustrated in the sketch after this list):
- Use async/await to handle asynchronous operations in a clean and readable way. This makes your code easier to understand and maintain.
- Handle errors with try/catch blocks. Log errors and provide informative messages to the user.
- Use vscode.window.withProgress to provide feedback to the user during long operations.
Security:
Performance:
Code Quality:
User Experience:
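As a quick illustration of the resource-management and asynchronous-operation points above, here is a minimal activate function; the command name and the doWork helper are made up for the example:

import * as vscode from 'vscode';

// Hypothetical long-running piece of work, used only for illustration.
async function doWork(): Promise<string> {
  return new Promise((resolve) => setTimeout(() => resolve('done'), 2000));
}

export function activate(context: vscode.ExtensionContext) {
  const disposable = vscode.commands.registerCommand('example.runLongTask', async () => {
    try {
      // Report progress so the user knows something is happening.
      const result = await vscode.window.withProgress(
        { location: vscode.ProgressLocation.Notification, title: 'Running long task...' },
        () => doWork()
      );
      vscode.window.showInformationMessage(`Task finished: ${result}`);
    } catch (err) {
      // Surface failures instead of swallowing them.
      vscode.window.showErrorMessage(`Task failed: ${err}`);
    }
  });

  // Register the disposable so VS Code cleans it up on deactivation.
  context.subscriptions.push(disposable);
}

export function deactivate() {
  // Release any explicitly held resources here.
}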
By following these best practices, you can develop robust, efficient, and user-friendly VS Code extensions that enhance the development experience for yourself and others.
Let’s walk through creating a custom VS Code extension that integrates with an AI Chatbot to generate documentation for selected code. This example assumes you have access to an AI Chatbot API (like OpenAI’s GPT models). You’ll need an API key. Remember to handle your API key securely and do not commit it to your repository.
1. Scaffold the Extension:
First, use the Yeoman generator to create a new extension project:
yo code
2. Modify the Extension Code:
Open the generated src/extension.ts file and add the following code to create a command that sends selected code to the AI Chatbot and displays the generated documentation:
import * as vscode from 'vscode';
import axios from 'axios';

export function activate(context: vscode.ExtensionContext) {
  let disposable = vscode.commands.registerCommand('extension.generateDocs', async () => {
    const editor = vscode.window.activeTextEditor;
    if (editor) {
      const selection = editor.selection;
      const selectedText = editor.document.getText(selection);
      const apiKey = 'YOUR_API_KEY'; // Replace with your actual API key
      const apiUrl = 'https://api.openai.com/v1/engines/davinci-codex/completions';
      try {
        const response = await axios.post(
          apiUrl,
          {
            prompt: `Generate documentation for the following code:\n\n${selectedText}`,
            max_tokens: 150,
            n: 1,
            stop: null,
            temperature: 0.5,
          },
          {
            headers: {
              'Content-Type': 'application/json',
              Authorization: `Bearer ${apiKey}`,
            },
          }
        );
        const generatedDocs = response.data.choices[0].text;
        vscode.window.showInformationMessage('Generated Documentation:\n' + generatedDocs);
      } catch (error) {
        vscode.window.showErrorMessage('Error generating documentation: ' + error.message);
      }
    }
  });
  context.subscriptions.push(disposable);
}

export function deactivate() {}
3. Update package.json:
Add the following command configuration to the contributes section of your package.json file:

"contributes": {
  "commands": [
    {
      "command": "extension.generateDocs",
      "title": "Generate Documentation"
    }
  ]
}
4. Run and Test the Extension:
Press F5 to open a new VS Code window with your extension loaded. Highlight some code, right-click, and select "Generate Documentation" to see the AI-generated documentation.
Once you’ve developed and tested your custom VS Code extension, you’ll likely want to share it with your team or the wider community. Here’s how to package and distribute your extension, including options for local and private distribution:
1. Package the Extension:
VS Code uses the vsce (Visual Studio Code Extensions) tool to package extensions. If you don't have it installed globally, install it using npm:
npm install -g vsce
Navigate to your extension’s root directory and run the following command to package your extension:
vsce package
This will create a .vsix file, which is the packaged extension.
2. Publish to the Visual Studio Code Marketplace:
To publish your extension to the Visual Studio Code Marketplace, you’ll need to create a publisher account and obtain a Personal Access Token (PAT). Follow the instructions on the Visual Studio Code Marketplace to set up your publisher account and generate a PAT.
Once you have your PAT, run the following command to publish your extension:
vsce publish
You’ll be prompted to enter your publisher name and PAT. After successful authentication, your extension will be published to the marketplace.
3. Share Privately:
If you prefer to share your extension privately within your organization, you can distribute the .vsix file directly to your team members. They can install the extension by running the following command in VS Code:
code --install-extension your-extension.vsix
Alternatively, you can set up a private extension registry using tools like Azure DevOps Artifacts or npm Enterprise to manage and distribute your custom extensions securely.
Visual Studio Code extensions are a powerful tool for enhancing the capabilities of your development environment and improving your team’s productivity, code quality, and overall efficiency. By carefully selecting, managing, and securing your extensions, you can create a tailored IDE that meets your specific needs and helps your team deliver high-quality software on time and within budget. Whether you’re using existing extensions from the marketplace or creating your own custom solutions, the possibilities are endless. Embrace the power of VS Code extensions and unlock the full potential of your development team.
Transferring Route 53 hosted zone records between AWS accounts using the CLI involves exporting the records from one account and then importing them to another. Here’s a step-by-step guide:
The primary objective of this process is to migrate Route 53 hosted zone records seamlessly between AWS accounts while ensuring minimal disruption to DNS functionality. This involves securely transferring DNS records, preserving their integrity, maintaining availability, and ensuring linked AWS resources remain accessible. Additionally, cross-account DNS access may be implemented as needed to meet business requirements.
wget https://github.com/barnybug/cli53/releases/download/0.8.16/cli53-linux-amd64
Note: A Linux machine can also be used, but it requires the cli53 dependency and AWS credentials to be configured.
Note: I created microsoft.com as a dummy hosted zone for this walkthrough.
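The export and import steps themselves aren't listed above, but with cli53 they generally look like the commands below; the hosted zone and file names are placeholders, and you would run the export with credentials for the source account and the import with credentials for the destination account:

chmod +x cli53-linux-amd64 && sudo mv cli53-linux-amd64 /usr/local/bin/cli53

# In the source account: export the hosted zone records to a file
cli53 export --full microsoft.com > microsoft.com.txt

# In the destination account: create the hosted zone (if needed) and import the records
cli53 create microsoft.com
cli53 import --file microsoft.com.txt microsoft.com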
When migrating Route 53 hosted zones between AWS accounts, applying best practices helps ensure a smooth transition with minimal disruption. Here are key best practices for a successful Route 53 hosted zone migration:
Migrating a Route 53 hosted zone between AWS accounts involves careful planning, especially to ensure DNS records are exported and imported correctly. After migrating, testing is crucial to confirm that DNS resolution works as expected. Cross-account setups may require additional configuration, such as Route 53 Resolver rules, to ensure seamless DNS functionality across environments.
Unit testing is a crucial aspect of software development, especially in complex applications like Android apps. It involves testing individual units of code, such as methods or classes, in isolation. This ensures the correctness of each component, leading to a more robust and reliable application.
The Model-View-ViewModel (MVVM) architectural pattern is widely adopted in Android app development. It separates the application into three distinct layers:
Unit testing each layer in an MVVM architecture offers numerous benefits:
testImplementation 'junit:junit:4.13.2'
androidTestImplementation 'androidx.test.ext:junit:1.1.5'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.5.1'
Example:
@RunWith(AndroidJUnit4::class)
class MyViewModelTest {
    @Test
    fun `should update the UI when data is fetched successfully`() {
        // ... (Arrange)
        val viewModel = MyViewModel(mockRepository)

        // ... (Act)
        viewModel.fetchData()

        // ... (Assert)
        viewModel.uiState.observeForever { uiState ->
            assertThat(uiState.isLoading).isFalse()
            assertThat(uiState.error).isNull()
            assertThat(uiState.data).isEqualTo(expectedData)
        }
    }
}
Example:
@RunWith(AndroidJUnit4::class)
class MyRepositoryTest {
    @Test
    fun `should fetch data from remote source successfully`() {
        // ... (Arrange)
        val mockApi = mock(MyApi::class.java)
        val repository = MyRepository(mockApi)

        // ... (Act)
        repository.fetchData()

        // ... (Assert)
        verify(mockApi).fetchData()
    }
}
SonarQube is a powerful tool for code quality and security analysis. Here’s a detailed guide on how to integrate SonarQube with your Android project:
sonar.host.url=http://localhost:9000
sonar.login=your_sonar_login
sonar.password=your_sonar_password
sonar.projectKey=my-android-project
sonar.projectName=My Android Project
sonar.sources=src/main/java
sonar.java.binaries=build/intermediates/javac/release/classes
plugins {
    id 'org.sonarqube' version '3.3'
}
Configure the plugin with your SonarQube server URL and authentication token.
Test coverage measures the percentage of your code that is covered by tests. It’s a crucial metric to assess the quality of your test suite. Here’s how to measure test coverage with Bitrise:
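The concrete Bitrise steps aren't listed in this post, but one common approach is to generate a JaCoCo coverage report from your unit tests and archive it as a build artifact. The sketch below assumes you have already wired up a jacocoTestReport Gradle task for your app module (that task is not created automatically for Android projects) and uses Bitrise's generic Script and Deploy to Bitrise.io steps; the step versions and report path are assumptions:

workflows:
  coverage:
    steps:
    - script@1:
        title: Run unit tests with coverage
        inputs:
        - content: |
            #!/usr/bin/env bash
            set -e
            # Run unit tests and produce the JaCoCo coverage report
            ./gradlew testDebugUnitTest jacocoTestReport
    - deploy-to-bitrise-io:
        inputs:
        - deploy_path: app/build/reports/jacoco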
By following these guidelines and incorporating unit testing into your development process, you can significantly improve the quality and reliability of your Android apps.
Last week, I noticed that deployments to Sitecore XM Cloud were failing on one of my projects. In this blog post, I’ll review the troubleshooting steps I went through and what the issue turned out to be. To provide a bit more context on the DevOps setup for this particular project, an Azure DevOps pipeline runs a script. That script uses the Sitecore CLI and the Sitecore XM Cloud plugin’s cloud deployment command to deploy to XM Cloud. The last successful deployment was just a few days prior and there hadn’t been many code changes since. Initially, I was pretty stumped but, hey, what can you do except start from the top…
I noticed that the version of msbuild had changed between the last successful deployment and the more recent failed deployments. I downloaded the same, newer version of msbuild and verified, once again, that I could restore NuGet packages and build the solution.
At this point, while I continued to analyze the deployment logs, I opened a Sitecore support ticket to have them weigh in. I provided support with the last known working build logs, the latest failed build logs, and the list of my troubleshooting steps up to that point.
After hearing back from Sitecore support, it turned out that Sitecore had recently made a change to how the buildTargets property in the xmcloud.build.json file was consumed and used as part of deployments. To quote the support engineer:
There were some changes in the build process, and now the build targets are loaded from the “buildTargets ” list. The previous working builds were using the “.sln” file directly.
It looks like that resulted in the build not working properly for some projects.
The suggested fix was to specifically target the Visual Studio solution file to ensure that the XM Cloud Deployment NuGet package restore and compilation worked as expected. My interpretation of the change was “XM Cloud Deploy used to not care about/respect buildTargets…but now it does.”
After creating a pull request to change the buildTargets property from this (targeting the specific, top-level project):
{
  ...
  "buildTargets": [
    "./src/platform/Project/Foo.Project.Platform/Platform.csproj"
  ]
  ...
}
…to this (targeting the solution):
{
  ...
  "buildTargets": [
    "./Foo.sln"
  ]
  ...
}
…the next deployment to XM Cloud (via CI/CD) worked as expected.
After asking the Sitecore support engineer where this change was documented, they graciously escalated internally and posted a new event to the Sitecore Status Page to acknowledge the change/issue: Deployment is failing to build.
If you’re noticing that your XM Cloud deployments are failing on the build step while compiling your Visual Studio solution, make sure you’re targeting the solution file (.sln) and not a specific project file (.csproj) in the buildTargets property in the xmcloud.build.json file…because it matters now, apparently.
Thanks for the read!
Various deployment methods, including cloud-based (e.g., CloudHub) and on-premises, are available to meet diverse infrastructure needs. GitHub, among other tools, supports versioning and code backup, while CI/CD practices automate integration and deployment processes, enhancing code quality and speeding up software delivery.
GitHub Actions, an automation platform by GitHub, streamlines building, testing, and deploying software workflows directly from repositories. Although commonly associated with cloud deployments, GitHub Actions can be adapted for on-premises setups with self-hosted runners. These runners serve as the execution environment, enabling deployment tasks on local infrastructure.
Configuring self-hosted runners allows customization of GitHub Actions workflows for on-premises deployment needs. Workflows can automate tasks like Docker image building, artifact pushing to private registries, and application deployment to local servers.
Leveraging GitHub Actions for on-premises deployment combines the benefits of automation, version control, and collaboration with control over infrastructure and deployment processes.
A runner refers to the machine responsible for executing tasks within a GitHub Actions workflow. It performs various tasks defined in the action script, like cloning the code directory, building the code, testing the code, and installing various tools and software required to run the GitHub action workflow.
These are virtual machines provided by GitHub to run workflows. Each machine comes pre-configured with the environment, tools, and settings required for GitHub Actions. GitHub-hosted runners support various operating systems, such as Ubuntu Linux, Windows, and macOS.
A self-hosted runner is a system deployed and managed by the user to execute GitHub Actions jobs. Compared to GitHub-hosted runners, self-hosted runners offer more flexibility and control over hardware, operating systems, and software tools. Users can customize hardware configurations, install software from their local network, and choose operating systems not provided by GitHub-hosted runners. Self-hosted runners can be physical machines, virtual machines, containers, on-premises servers, or cloud-based instances.
Self-hosted runners play a crucial role in deploying applications to an on-prem server using GitHub Actions scripts, since they establish the connectivity with that server. These runners can be created at different management levels within GitHub: repository, organization, and enterprise.
By leveraging self-hosted runners for deployment, organizations can optimize control, customization, performance, and cost-effectiveness while meeting compliance requirements and integrating seamlessly with existing infrastructure and tools. A few advantages of self-hosted runners are given below.
Self-hosted runners allow organizations to maintain control over their infrastructure and deployment environment. This includes implementing specific security measures tailored to the organization’s requirements, such as firewall rules and access controls.
With self-hosted runners, you have the flexibility to customize the hardware and software environment to match your specific needs. This can include installing specific libraries, tools, or dependencies required for your applications or services.
Self-hosted runners can offer improved performance compared to cloud-based alternatives, especially for deployments that require high computational resources or low-latency connections to local resources.
While cloud-based solutions often incur ongoing costs based on usage and resource consumption, self-hosted runners can provide cost savings by utilizing existing infrastructure without incurring additional cloud service charges.
For organizations operating in regulated industries or regions with strict compliance requirements, self-hosted runners offer greater control and visibility over where code is executed and how data is handled, facilitating compliance efforts.
In environments where internet connectivity is limited or unreliable, self-hosted runners enable deployment workflows to continue functioning without dependency on external cloud services or repositories.
Self-hosted runners can be scaled up or down according to demand, allowing organizations to adjust resource allocation based on workload fluctuations or project requirements.
Self-hosted runners seamlessly integrate with existing on-premises tools and infrastructure, facilitating smoother adoption and interoperability within the organization’s ecosystem.
Follow the steps below to create and utilize a self-hosted runner.
mkdir actions-runner; cd actions-runner
Invoke-WebRequest -Uri https://github.com/actions/runner/releases/download/v2.316.0/actions-runner-win-x64-2.316.0.zip -OutFile actions-runner-win-x64-2.316.0.zip
Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory("$PWD/actions-runner-win-x64-2.316.0.zip", "$PWD")
./config.cmd --url https://github.com/<owner>/<repo_name> --token <token>
./run.cmd
# Use the YAML snippet below in your workflow file for each job that should run on the self-hosted runner
runs-on: self-hosted
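For example, a minimal workflow that runs a deployment job on the self-hosted runner could look like the following; the build and deploy commands are placeholders for whatever your application actually needs:

name: On-Prem Deployment
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: self-hosted
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Build Docker image
        run: docker build -t my-app:${{ github.sha }} .
      - name: Deploy to local server
        run: docker compose up -d   # placeholder deployment command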
The process for creating organization-level and enterprise-level self-hosted runners follows similar steps. Still, the runners created at these levels can serve multiple repositories or organizations within the account. The setup process generally involves administrative permissions and configuration at a broader level.
By following these steps, you can set up self-hosted runners to enable connectivity between your on-prem server and GitHub Action Scripts, facilitating on-prem deployments seamlessly.
Perficient is excited to announce our achievement in Amazon Web Services (AWS) DevOps Competency for AWS partners. This designation recognizes Perficient as an AWS partner that has demonstrated expertise in delivering DevSecOps solutions. This competency highlights Perficient’s ability to drive innovation, meet business objectives, and get the most out of your AWS services.
Achieving the AWS DevOps Competency status differentiates Perficient as an AWS Partner Network (APN) member that provides modern product engineering solutions designed to help enterprises adopt, develop, and deploy complex projects faster on AWS. To receive the designation, APN members must possess deep AWS expertise and deliver solutions seamlessly on AWS.
This competency empowers our delivery teams to break down traditional silos, shorten feedback loops, and respond more effectively to changes, ultimately increasing speed to market by up to 75%.
With our partnership with AWS, we can modernize our clients’ processes to improve product quality, scalability, and performance, and significantly reduce release costs by up to 97%. This achievement ensures that our CI/CD processes and IT governance are sustainable and efficient, benefiting organizations of any size.
At Perficient, we strive to be the place where great minds and great companies converge to boldly advance business, and this achievement is a testament to that vision!
CloudFront, Amazon’s Content Delivery Network (CDN), accelerates website performance by delivering content from geographically distributed edge locations. But how do you understand how users interact with your content and optimize CloudFront’s performance? The answer lies in CloudFront access logs, and a powerful tool called AWS Athena can help you unlock valuable insights from them. In this blog post, we’ll explore how you can leverage Amazon Athena to simplify log analysis for your CloudFront CDN service.
CloudFront delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. However, managing and analyzing the logs generated by CloudFront can be challenging due to their sheer volume and complexity.
These logs contain valuable information such as request details, response status codes, and latency metrics, which can help you gain insights into your application’s performance, user behavior, and security incidents. Analyzing this data manually or using traditional methods like log parsing scripts can be time-consuming and inefficient.
By analyzing these logs, you gain a deeper understanding of:
Amazon Athena is a serverless query service that allows you to analyze data stored in Amazon S3 using standard SQL. Here’s why Athena is perfect for CloudFront logs:
To begin using Amazon Athena for CloudFront log analysis, follow these steps:
If you haven’t already done so, enable logging for your CloudFront distribution. This will start capturing detailed access logs for all requests made to your content.
Configure CloudFront to store access logs in a designated Amazon S3 bucket. Ensure that you have the necessary permissions to access this bucket from Amazon Athena.
Create an external table in Amazon Athena, specifying the schema that matches the structure of your CloudFront log files.
Below is the sample query we have used to create a Table :
CREATE EXTERNAL TABLE IF NOT EXISTS cloudfront_logs (
`date` STRING,
time STRING,
location STRING,
bytes BIGINT,
request_ip STRING,
method STRING,
host STRING,
uri STRING,
status INT,
referrer STRING,
user_agent STRING,
query_string STRING,
cookie STRING,
result_type STRING,
request_id STRING,
host_header STRING,
request_protocol STRING,
request_bytes BIGINT,
time_taken FLOAT,
xforwarded_for STRING,
ssl_protocol STRING,
ssl_cipher STRING,
response_result_type STRING,
http_version STRING,
fle_encrypted_fields STRING,
fle_status STRING,
unique_id STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' ESCAPED BY '\\' LINES TERMINATED BY '\n'
LOCATION '<paste your S3 URI here>';
Click on the run button!
Now comes the fun part – using Athena to answer your questions about CloudFront performance. Here are some sample queries to get you going:
Find the total number of requests served by CloudFront for a specific date range.
SELECT
COUNT(*) AS total_requests
FROM
cloudfront_logs
WHERE
date BETWEEN '2023-12-01' AND '2023-12-31';
Identify the top 10 most requested URLs from your CloudFront distribution. This query will give you a list of the top 10 most requested URLs along with their corresponding request counts. You can use this information to identify popular content and analyze user behavior on your CloudFront distribution.
SELECT
uri,
COUNT(*) AS request_count
FROM
cloudfront_logs
GROUP BY
uri
ORDER BY
request_count DESC
LIMIT 10;
Analyze traffic patterns by user location.
This query selects the location field from your CloudFront logs (which typically represents the geographical region of the user) and counts the number of requests for each location. It then groups the results by location and orders them in descending order based on the request count. This query will give you a breakdown of traffic by region, allowing you to analyze which regions generate the most requests to your CloudFront distribution. You can use this information to optimize content delivery, allocate resources, and tailor your services based on geographic demand.
SELECT
location,
COUNT(*) AS request_count
FROM
cloudfront_logs
GROUP BY
location
ORDER BY
request_count DESC;
Calculate the average response time for CloudFront requests. Executing this query will give you the average response time for all requests served by your CloudFront distribution. You can use this metric to monitor the performance of your CDN and identify any potential performance bottlenecks.
SELECT
AVG(time_taken) AS average_response_time
FROM
cloudfront_logs;
The below query will provide you with a breakdown of the number of requests for each HTTP status code returned by CloudFront, allowing you to identify any patterns or anomalies in your CDN’s behavior.
SELECT status, COUNT(*) as count
FROM cloudfront_logs
GROUP BY status
ORDER BY count DESC;
Athena empowers you to create even more complex queries involving joins, aggregations, and filtering to uncover deeper insights from your CloudFront logs.
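For example, a query like the one below, using the result_type column from the table defined earlier, breaks down requests by cache result and gives you a rough cache hit ratio for your distribution:

SELECT
result_type,
COUNT(*) AS request_count,
ROUND(100.0 * COUNT(*) / SUM(COUNT(*)) OVER (), 2) AS percentage
FROM
cloudfront_logs
GROUP BY
result_type
ORDER BY
request_count DESC;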
By analyzing CloudFront logs, you can identify areas for improvement:
AWS Athena and CloudFront access logs form a powerful duo for unlocking valuable insights into user behavior and CDN performance. With Athena’s cost-effective and user-friendly approach, you can gain a deeper understanding of your content delivery and make data-driven decisions to optimize your CloudFront deployment.
Get started with AWS Athena today and unlock the hidden potential within your CloudFront logs. With its intuitive interface and serverless architecture, Athena empowers you to transform data into actionable insights for a faster, more performant CDN experience.
Our client is an American multinational corporation that develops medical devices, pharmaceuticals, and consumer packaged goods.
Better understanding and engaging patients and members has never been more critical than it is today. To meet clinical, business, and evolving consumer needs, healthcare, and life sciences organizations are focused on care delivery that enables innovation in patient engagement, data and analytics, and virtual care.
Since 2008, Perficient has delivered multiple infrastructure platforming, commerce, and management consulting projects for this multinational corporation. In early 2021, our life sciences experts presented a clinical data review platform (CDRP) demonstrating our strong industry perspective and expertise.
Leveraging AWS, Databricks, and more, our platform enables humans assisted with machine intelligence to enhance the data cleaning, reviewing, and analyzing process and provide an accurate picture of study status and what is needed to achieve a milestone. These services simplify provisioning and managing infrastructure, deploying application code, automating software release processes, and monitoring your application and infrastructure performance.
Our custom-built clinical data review and cleaning environment will provide an accurate, real-time view of clinical studies, allowing the enterprise to holistically monitor progress and drive critical decision-making. It will provide:
Perficient has been trusted by 14 of the 20 largest pharma/biotech firms and 6 of the top 10 CROs. Our secure, easy-to-use, flexible, and scalable AWS application hosting and architecture solutions will set you up for innovation success. Contact us today for more information.
In this blog post, my objective is to provide a comprehensive walkthrough of the elements required for effectively implementing Azure Infrastructure with Terraform using an Azure DevOps Pipeline.
The main purpose is to assist you in grasping the concept of automating the deployment and maintenance of your cloud infrastructure residing in Azure.
Before delving into the provided examples, taking a step back and comprehending the underlying reasons for the aforementioned concepts would be beneficial.
Undoubtedly, we are incorporating various technologies in this context, each with its own advantages and disadvantages. The purpose of this article, I believe, is to enhance our fundamental understanding of each aspect and strive for a deployment approach that is repeatable, secure, and reliable.
GitHub serves as a publicly accessible Source Code control platform. I have established a “public” repository to make my code accessible for this article.
Keep in mind that GitHub is not the only option available, as Azure DevOps Repos offers similar Git functionality. I won’t delve into the reasons for using Git here, as there are numerous articles that explain it much better. However, it is generally agreed upon that having a source code repository for control, auditing, and version management is highly beneficial.
As a cloud platform, Azure provides businesses with opportunities for growth and scalability while effectively managing costs and capacity. The advantages of cloud computing are vast, and although I won’t delve into the specifics here, let’s assume that Azure is a favorable environment to work in.
Terraform, in simple terms, allows IT professionals and developers to utilize infrastructure as code (IaC) tools in a single language to effortlessly deploy to various cloud platforms. These platforms, referred to as “Providers” in Terraform, encompass a wide range of options, and Terraform boasts hundreds of providers, including Azure.
Terraform simplifies the deployment, destruction, and redeployment process by utilizing a “tfstate” file, which we will discuss further in this article. This file enables Terraform to keep track of the deployment’s state since the last update and implement only the necessary changes implied by code updates. Additionally, Terraform includes a feature called “PLAN” that provides a report on the anticipated changes before you proceed to “APPLY” them.
Furthermore, Terraform inherently offers benefits such as source control and version control by allowing you to define your infrastructure as code.
Azure DevOps is a collection of technologies designed to enhance business productivity, reliability, scalability, and robustness when utilized correctly. DevOps is a complex concept that requires thorough understanding, as evident in my other blog posts. From my perspective, DevOps revolves around three fundamental principles: people, process, and technology. Azure DevOps primarily falls under the “Technology” aspect of this triad.
Azure DevOps provides a range of tools, and for the purpose of this article, we will be utilizing “Pipelines.” This tooling, combined with Azure DevOps, offers features that automate infrastructure deployment with checks based on triggers. Consequently, it ensures that our code undergoes testing and deployment within a designated workflow, if necessary. By doing so, it establishes an auditable, repeatable, and reliable mechanism, mitigating the risk of human errors and other potential issues.
By bringing together the four key components discussed in this article (GitHub, Azure, Terraform, and Azure DevOps), we can harness a set of technologies that empower us to design and automate the deployment and management of infrastructure in Azure. As IT professionals, we can all appreciate the value and advantages of streamlining the design, deployment, and automation processes for any company.
We will focus on deploying a foundational landing zone into our Azure Subscription. Here are the essential components required to achieve this:
1. GitHub Repository: We’ll utilize a GitHub repository to store our code and make it accessible.
2. Azure Subscription: We need an Azure Subscription to serve as the environment where we will deploy our infrastructure.
3.1. Terraform Code (local deployment): We’ll use Terraform code, executed from our local machine (surajsingh-app01), to deploy the following Azure infrastructure components:
3.2. Terraform Code (shared state deployment): Additionally, we’ll employ Terraform code that deploys Azure infrastructure while utilizing a shared state file.
Azure DevOps Organization: We’ll set up an Azure DevOps Organization, which provides a platform for managing our development and deployment processes.
Azure DevOps Pipeline: Within our Azure DevOps Organization, we will configure a pipeline to automate the deployment of our infrastructure.
By following this approach, we can establish a solid foundation for our Azure environment, allowing for efficient management and automation of infrastructure deployment.
After logging into github.com, I successfully created a basic repository containing a README.md file. You can access the repository at the following URL: https://github.com/sunsunny-hub/AzureTerraformDeployment.
To facilitate interaction and modification of your Terraform code on your local computer, you can clone the recently created GitHub Repository and employ your local machine to edit files and commit changes back to the repository on GitHub.
Ensure that you have the necessary permissions and access to an Azure Subscription that can be used for deploying infrastructure. If you do not currently possess a subscription, you have the option to sign up for a complimentary trial by visiting the AZURE FREE TRIAL page.
This free trial gives you the following:
Now, let’s proceed with creating our Terraform code. We will develop and test it on our local machine before making any modifications for deployment through Azure DevOps Pipelines. The steps below assume that you have already installed the latest Terraform module on your local machine. If you haven’t done so, you can refer to the provided guide for assistance. Additionally, it is assumed that you have installed the AZCLI (Azure Command-Line Interface).
To begin, open the VSCode Terminal and navigate to the folder of your newly cloned repository on your local machine. Once there, type ‘code .’ (including the period) to open our working folder in VS Code.
Next, enter the command ‘az login’ in the Terminal.
This action will redirect you to an OAUTH webpage, where you can enter your Azure credentials to authenticate your terminal session. It is important to note that, at this stage, we are authenticating our local machine in order to test our Terraform code before deploying it using Azure DevOps Pipelines.
Some accounts have MFA enabled, so you may need to log in with a tenant ID. Use the following command: 'az login --tenant TENANT_ID'.
After successful authentication, you will receive your subscription details in JSON format. If you have multiple subscriptions, you will need to set the context to the desired subscription. This can be done using either the Subscription ID or the Subscription name.
For example, to set the context for my subscription, I would use the following command: 'az account set --subscription "<Subscription ID or Subscription name>"'.
Now, let’s move on to our Terraform Code. In order to keep this deployment simple, I will store all the configurations in a single file named ‘main.tf’. To create this file, right-click on your open folder and select ‘New File’, then name it ‘main.tf’.
The initial Terraform code I will use to create the infrastructure is as follows:
main.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.4.1"
    }
  }
}

provider "azurerm" {
  features {
    key_vault {
      purge_soft_delete_on_destroy = true
    }
    resource_group {
      prevent_deletion_if_contains_resources = false
    }
  }
}

data "azurerm_client_config" "current" {}

resource "azurerm_resource_group" "rg" {
  name     = "surajsingh-app01"
  location = "UK South"
}

resource "azurerm_virtual_network" "vnet" {
  name                = "surajsinghvnet"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_subnet" "sn" {
  name                 = "VM"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.1.0/24"]
}

resource "azurerm_storage_account" "surajsinghsa" {
  name                     = "surajsinghsa"
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags = {
    environment = "surajsinghrox"
  }
}

resource "azurerm_network_interface" "vmnic" {
  name                = "surajsinghvm01nic"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.sn.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_virtual_machine" "surajsinghvm01" {
  name                  = "surajsinghvm01"
  location              = azurerm_resource_group.rg.location
  resource_group_name   = azurerm_resource_group.rg.name
  network_interface_ids = [azurerm_network_interface.vmnic.id]
  vm_size               = "Standard_B2s"

  storage_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2016-Datacenter-Server-Core-smalldisk"
    version   = "latest"
  }

  storage_os_disk {
    name              = "surajsinghvm01os"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "surajsinghvm01"
    admin_username = "surajsingh"
    admin_password = "Password123$"
  }

  os_profile_windows_config {
  }
}
We will begin by executing the ‘Terraform INIT’ command.
Next, we will assess the actions that Terraform intends to perform in our Azure environment by running the ‘Terraform PLAN’ command. Although the actual output exceeds the content displayed in this screenshot, the provided snippet represents the initial portion, while the following snippet represents the concluding part.
Upon examining the output, it becomes evident that the ‘PLAN’ command displays on the screen the operations that will be executed in our environment. In my case, it involves adding a total of 6 items.
Now, let’s test the successful deployment from our local machine using the ‘Terraform APPLY’ command. The execution of this command will take a couple of minutes, but upon completion, you should observe that all the expected resources are present within the resource group.
At this point, we have verified the functionality of our Terraform code, which is excellent news. However, it is worth noting that during the execution of the ‘Terraform APPLY’ command, several new files were generated in our local folder.
Of particular importance is the ‘terraform.tfstate’ file, which contains the current configuration that has been deployed to your subscription. This file serves as a reference for comparing any discrepancies between your Terraform code and the ‘main.tf’ file. Therefore, it is crucial to recognize that currently, making any changes to our environment requires the use of the local PC. While this approach suffices for personal or testing purposes in a small-scale environment, it becomes inadequate for collaboration or utilizing services like Azure DevOps Pipelines to execute commands. Consequently, there is a need to store the state file in a centralized location accessible to all stakeholders, ensuring the secure storage of credentials and appropriate updates to the Terraform code.
This is precisely what we will explore in the upcoming section. In preparation, we can leverage the ‘Terraform DESTROY’ command to remove all infrastructure from our subscription, thereby enabling us to focus on relocating our state file to a centralized location.
The subsequent phase in this process involves segregating the Terraform State file and relocating it to a centralized location, such as an Azure Storage account.
This endeavor entails considering a few essential aspects:
To begin, our initial task is to create the storage account and container that will house our Terraform State File. This can be achieved by executing the following Bash script:
#!/bin/bash
RESOURCE_GROUP_NAME=surajsingh-infra
STORAGE_ACCOUNT_NAME=surajsinghtstate
CONTAINER_NAME=tstate

az group create --name $RESOURCE_GROUP_NAME --location uksouth
az storage account create --resource-group $RESOURCE_GROUP_NAME --name $STORAGE_ACCOUNT_NAME --sku Standard_LRS --encryption-services blob
ACCOUNT_KEY=$(az storage account keys list --resource-group $RESOURCE_GROUP_NAME --account-name $STORAGE_ACCOUNT_NAME --query '[0].value' -o tsv)
az storage container create --name $CONTAINER_NAME --account-name $STORAGE_ACCOUNT_NAME --account-key $ACCOUNT_KEY

echo "storage_account_name: $STORAGE_ACCOUNT_NAME"
echo "container_name: $CONTAINER_NAME"
echo "access_key: $ACCOUNT_KEY"
After executing the script, it is crucial to take note of the exported values for future use:
In the backend configuration, the key attribute is the name of our state file, which is created automatically during the initial run (a forward slash ('/') in the key places the state file under a folder-like path within the container), while the access_key is the storage account key that Terraform uses to authenticate against the container.
Upon checking our Azure subscription, we can confirm the successful creation of the storage account and container, which are now ready to accommodate our Terraform State file.
Excellent! Our next objective is to modify the main.tf Terraform script to enable Terraform to utilize the shared state location and access it through the Key Vault. This can be achieved by configuring what is commonly referred to as the ‘state backend’. As mentioned earlier, one option would be to directly include the Storage Account access key in our Terraform file. However, this approach is not considered best practice since our main.tf file will be stored in a Git Repository, raising security concerns. Hence, the implementation of a Key Vault.
For now, until we transition to Azure DevOps Pipelines, we will create the backend configuration using the raw Access Key. This step is performed to showcase the process.
To achieve this, we simply need to add the following code snippet to our terraform main.tf file. By doing so, Terraform will be able to store the state file in a centralized location, namely our Azure Storage Account.
backend "azurerm" {
  resource_group_name  = "surajsingh-infra"
  storage_account_name = "surajsinghtstate"
  container_name       = "tstate"
  key                  = "terraform.tfstate"
  access_key           = "UeJRrCh0cgcw1H6OMrm8s+B/AGCCZIbER5jaJUAYnE8V2tkzzm5/xSCILXikTOIBD6hrcnYGQXbk+AStxPXv+g=="
}
If we execute Terraform INIT and Terraform PLAN now, we should witness a successful creation of the plan:
In fact, our state file no longer exists locally. If we check the container within our Azure storage account, we can confirm its presence!
This is a success!
Ensure that you commit and push your changes to your GitHub Repo. For this particular step, I have included a ‘.gitignore’ file to prevent the upload of certain files, such as the Terraform Provider EXE, into GitHub.
Now that we have successfully deployed our infrastructure using a shared location for our Terraform State, our next step is to automate this process based on triggers from the ‘main’ branch of our GitHub repo.
Additionally, we need to remove the Storage Account Access Key as part of the following procedure.
To begin, we must set up an Azure DevOps Organization. Visit the following site:
I have set up my Organization as follows:
Surajsingh5233 and azure_autodeploy_terraform are the Organization and Project I have created for the purposes of this article.
Firstly, we need to create a Service Principal Name (SPN) to allow our Azure DevOps Organization project to deploy our environment.
Within our Azure DevOps Project, navigate to Project Settings -> Service Connections.
Click on ‘Create Service Connection’ -> ‘Azure Resource Manager’ -> ‘Next’.
Then select ‘Service principal (automatic)’ -> ‘Next’.
These are the scope settings used for my SPN:
You can verify the configuration of your SPN by reviewing the following output:
Here is our Managed Service Principal in Azure:
For the purpose of this article, I will grant this SPN Contributor access to my subscription.
With all these components in place, it is now time to create our pipeline.
Select ‘Pipelines’ -> ‘Create Pipeline’.
For this example, I will use the classic editor as it simplifies the process for those unfamiliar with YAML files.
Select ‘GitHub’ and log in.
Log in to your GitHub Account.
Scroll down to ‘Repository Access’ and select your repo, then click ‘Approve and Install’.
This will authorize Azure DevOps to access your GitHub Repo. Next, we want to select ‘GitHub’.
For the purpose of this article, we will set up a single stage in our pipeline which will run the following tasks:
This pipeline run will be triggered by a code commit to our ‘main’ branch in the repo.
To begin creating our pipeline, select ‘Empty Pipeline’.
We are then presented with a pipeline to start building.
Next, we want to select each task and configure them as follows:
Install Terraform
Terraform: INIT
In this task, we can configure the Terraform backend that we have in our main.tf as follows:
Terraform: PLAN
Make sure you provide the proper subscription in the Providers option.
Terraform: VALIDATE
Terraform: APPLY
Once we have completed the configuration, we can save it, and a pipeline ready to run will be displayed.
To manually start the pipeline, we can select:
However, in the spirit of CI/CD, we can modify the CI-enabled flag on our pipeline.
Now, when we modify our code and commit it to our ‘master’ branch in GitHub, this pipeline will automatically run and deploy our environment for us. I commit a change via VS Code and push it to my GitHub Repo.
There we have it! Our pipeline is triggered by the commit and push.
We need to wait for all our tasks to complete and hope that there are no errors.
Our job has been completed successfully.
If we check our Azure Subscription, we can see that our application infrastructure has been deployed as expected.
SUCCESS!
Congratulations on making it to the end of this article, and thank you for following along! I genuinely hope that this guide has been helpful in assisting you with creating your first Azure DevOps Pipeline.
Although we haven’t explored YAML in this article, it is worth mentioning that the pipeline is actually created as a file with a .yaml extension. This opens up even more intriguing concepts, which I won’t delve into here.
The next steps from here would be to explore YAML and the ability to check it into your Git Repo.
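As a pointer for that next step, the classic pipeline built above could be expressed in YAML roughly like the sketch below. It assumes you have installed the Microsoft DevLabs Terraform extension (which provides the TerraformInstaller and TerraformTaskV4 tasks) and that your service connection is named 'My-Azure-Service-Connection'; task names, versions, and inputs may differ slightly in your setup:

trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: TerraformInstaller@0
    inputs:
      terraformVersion: 'latest'

  - task: TerraformTaskV4@4
    displayName: 'Terraform: init'
    inputs:
      provider: 'azurerm'
      command: 'init'
      backendServiceArm: 'My-Azure-Service-Connection'
      backendAzureRmResourceGroupName: 'surajsingh-infra'
      backendAzureRmStorageAccountName: 'surajsinghtstate'
      backendAzureRmContainerName: 'tstate'
      backendAzureRmKey: 'terraform.tfstate'

  - task: TerraformTaskV4@4
    displayName: 'Terraform: plan'
    inputs:
      provider: 'azurerm'
      command: 'plan'
      environmentServiceNameAzureRM: 'My-Azure-Service-Connection'

  - task: TerraformTaskV4@4
    displayName: 'Terraform: validate'
    inputs:
      provider: 'azurerm'
      command: 'validate'

  - task: TerraformTaskV4@4
    displayName: 'Terraform: apply'
    inputs:
      provider: 'azurerm'
      command: 'apply'
      environmentServiceNameAzureRM: 'My-Azure-Service-Connection'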
Additionally, we could delve into the capability of Azure DevOps to apply branch protection. In reality, you wouldn’t directly commit changes to the ‘main’ or ‘master’ branch. Implementing measures such as requiring approvals and using pull requests can help ensure that our main application isn’t accidentally overwritten. Once again, congratulations on reaching the end, and best of luck with your future endeavors!
It is almost certain that any DevOps engineer approaches the challenge of implementing SSL certificates at some point.
Of course, there are free certificates, such as the well-known Let’s Encrypt. As with any free solution, it has a number of limitations; all the restrictions are detailed on the certificate provider’s page. Here are some of the inconveniences I have encountered:
Facing these issues from time to time, I came up with my own customization of the certificate solution, which I would like to share.
You may have heard about cert-manager; let’s install it with Helm (my preferred way):
helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.11.2/cert-manager.crds.yaml
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.11.2
so that you can create a ClusterIssuer as below:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-cluster-issuer
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: martin.miles@perficient.com # replace with your e-mail
    privateKeySecretRef:
      name: letsencrypt-cluster-issuer
    solvers:
      - http01:
          ingress:
            class: nginx
At this stage, you have two options for issuing the certificates:
kind: certificate
1. In the first case, here’s what my yaml looks like:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myservice
  namespace: test
spec:
  duration: 2160h # 90 days
  renewBefore: 72h
  dnsNames:
    - replace_with_your.hostname.com # replace with yours
  secretName: myservice-tls
  issuerRef:
    name: letsencrypt-cluster-issuer
    kind: ClusterIssuer
In that case, your ingress only references secretName: myservice-tls in the tls section for the desired service. The above file has a couple of helpful parameters:
duration - the certificate lifetime, in hours
renewBefore - how far before the certificate expiration you can renew an existing certificate
command:
kubectl describe certificates <certificate name> -n <namespace>
2. Working with Let’s Encrypt certificates using Ingress seems to be more convenient and reliable. In addition to secretName and the hostname in the tls section, you will only need to add annotations:
annotations:
  cert-manager.io/cluster-issuer: "letsencrypt-cluster-issuer"
  cert-manager.io/renew-before: 72h
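Put together, a minimal ingress manifest using this approach might look like the following; the hostname and backend service name are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myservice
  namespace: test
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-cluster-issuer"
    cert-manager.io/renew-before: 72h
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - replace_with_your.hostname.com
      secretName: myservice-tls
  rules:
    - host: replace_with_your.hostname.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myservice
                port:
                  number: 80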
And that’s it! Certificates are reissued automatically upon renewal (within 3 days prior to expiration, as configured above); Let’s Encrypt certificates are valid for 90 days by default.
When developing a project for Azure, you’ll likely store your certificates in Azure Key Vault. Once you purchase a certificate from Azure, you’ll be prompted on how to add it to Azure Key Vault (also abbreviated as AKV); there’s nothing specific, just steps to prove and verify your domain ownership. Once you complete all the stages and collect all the green ticks, your certificate will show up under Secrets in AKV.
That approach benefits from an auto-update of certificates. A year later an updated certificate appears in AKV and automatically synchronizes with Secret in Kubernetes.
However, for Kubernetes to be able to use this cert, we need to grant permissions. First, we need to obtain the identityProfile.kubeletidentity.objectId of the cluster:
az aks show -g <ResourceGroup> -n <AKS_cluster_name>
The above returns the ID we need to provide in order to grant permission to access the secrets:
az keyvault set-policy --name <AKV_name> --object-id <identityProfile.kubeletidentity.objectId from the past step> --secret-permissions get
At this stage, we can install akv2k8s – a tool that takes care of Azure Key Vault secrets, certificates, and keys available in Kubernetes and/or your application – in a simple and secure way (here’s the installation guide with helm).
Next, synchronize the certificate from Azure Key Vault to Secret as per the official documentation.
apiVersion: spv.no/v1
kind: AzureKeyVaultSecret
metadata:
  name: wildcard-cert # any name of your preference
  namespace: default
spec:
  vault:
    name: SandboxKeyVault # your certificate storage name in Azure
    object:
      name: name_object_id # object id from Azure AKV for the certificate
      type: secret
  output:
    secret:
      name: wildcard-cert # any name of secret within your namespace
      type: kubernetes.io/tls
      chainOrder: ensureserverfirst # this line is important - read below!
The last line is extremely important. The original problem was that despite the certificate being passed to Kubernetes correctly, it still did not work, and it appeared to be a non-trivial problem. The reason turned out to be that when exporting a PFX certificate from Key Vault, the server certificate appears at the end of the chain, rather than at the beginning where you expect it. If used together with ingress-nginx, the certificate won’t get loaded and nginx will fall back to its default certificate. Specifying chainOrder: ensureserverfirst resolves this issue by placing the server certificate first in the chain, which otherwise has the following order:
It is possible to purchase a certificate at Azure directly (actually served by GoDaddy) with two potential options:
Note that wildcard certificates only cover one level down, not two or more: *.domain.com is not equal to *.*.domain.com. For example, this is not convenient when you would like to set up lower-level API endpoints for your subdomain-occupied websites. Without purchasing additional nested certificates, the only way to resolve this is by adding SAN (Subject Alternative Name) records to the certificate. Unfortunately, doing that is not easily possible, even through Azure support, which is hard to believe. That contrasts with AWS Certificate Manager, which supports up to 10 SANs with a wildcard (*). Sad but true…
Azure Front Door (AFD) is a globally distributed application acceleration service provided by Microsoft. It acts as a cloud-based entry point for applications, allowing you to optimize and secure the delivery of your web applications, APIs, and content to users around the world. Azure Front Door operates at Layer 7 (HTTP/HTTPS) and can handle SSL/TLS encryption/decryption on behalf of your application, offloading the compute overhead from your backend servers. It also supports custom domain and certificate management, and that is what we’re interested in.
When working with HTTPS, you can also generate the certificate at AFD, upload your own, or sync the one from AKV (however, you still need to grant AFD permission to AKV in order to access the certificate). The last approach lets you rely on the latest version of the secret, which takes away all the pain of auto-renewing certificates: once an updated cert is issued, it is picked up automatically.
When creating a backend pool and specifying your external AKS cluster IP address, make sure to leave the “Backend host header” field empty. It will fill in automatically with the values from the input box above.
An alternative option would be to route the whole HTTPS traffic from AFD to AKS, without SSL offloading at AFD. For AFD to work, you must specify a DNS name matching your AKS cluster (because of SNI and health probes); otherwise, it won’t work.
That introduces additional work. Say you’ve already got an AKS cluster without any DNS name, working directly, which you now want to route through AFD. To make this work, you need to end up with a separate DNS name for the AKS cluster, set up DNS, and create a service with a certificate attached to the ingress. Only once that is done will the HTTPS traffic redirect to the AKS cluster work properly.
Tip: Another thing you may want to do is increase security for the above case by restricting AKS access to only AFD IP addresses within your Network Security Group for AKS. In addition, you may instruct the ingress to only accept requests carrying the header that identifies your Azure Front Door instance by ID (X-Azure-FDID).
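A minimal sketch of that second restriction with ingress-nginx could use the configuration-snippet annotation; the Front Door ID below is a placeholder, and you should confirm that custom snippets are allowed by your ingress-nginx installation:

annotations:
  nginx.ingress.kubernetes.io/configuration-snippet: |
    # Reject requests that did not come through our Azure Front Door instance
    if ($http_x_azure_fdid != "<your-front-door-id>") {
      return 403;
    }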