Azure Firewall, a managed, cloud-based network security service, is an essential component of Azure’s security offerings. It comes in three different versions – Basic, Standard, and Premium – each designed to cater to a wide range of customer use cases and preferences. This blog post will provide a comprehensive comparison of these versions, discuss best practices for their use, and delve into their application in hub-spoke and Azure Virtual WAN with Secure Hub architectures.
Azure Firewall is a cloud-native, intelligent network firewall security service designed to protect your Azure cloud workloads. It offers top-tier threat protection and is fully stateful, meaning it can track the state of network connections and make decisions based on the context of the traffic.
In today’s digital landscape, cyber threats are becoming increasingly sophisticated. Organizations need robust security measures to protect their data and applications. Azure Firewall provides enhanced security by inspecting both inbound and outbound traffic, using advanced threat intelligence to block malicious IP addresses and domains. This ensures that your network is protected against a wide range of threats, including malware, phishing, and other cyberattacks.
Managing network security across multiple subscriptions and virtual networks can be a complex and time-consuming process. Azure Firewall simplifies this process by allowing you to centrally create, enforce, and log application and network connectivity policies. This centralized management ensures consistent security policies across your organization, making it easier to maintain and monitor your network security.
Businesses often experience fluctuating traffic volumes, which can strain network resources. Azure Firewall offers unlimited cloud scalability, meaning it can handle varying workloads without compromising performance. This scalability is crucial for businesses that need to accommodate peak traffic periods and ensure continuous protection.
Downtime can be costly for businesses, both in terms of lost revenue and damage to reputation. Azure Firewall’s built-in high availability ensures that your firewall is always operational, minimizing downtime and maintaining continuous protection
Many industries have strict data protection regulations that organizations must comply with. Azure Firewall helps organizations meet these regulatory and compliance requirements by providing detailed logging and monitoring capabilities. This is particularly vital for industries such as finance, healthcare, and government, where data security is of paramount importance.
Deploying multiple firewalls across different networks can be expensive. By deploying Azure Firewall in a central virtual network, organizations can achieve cost savings. This centralized approach reduces the need for multiple firewalls, lowering overall costs while maintaining robust security.
Azure Firewall Basic is recommended for small to medium-sized business (SMB) customers with throughput needs of up to 250 Mbps. It’s a cost-effective solution for businesses that require fundamental network protection.
Azure Firewall Standard is recommended for customers looking for a Layer 3–Layer 7 firewall and need autoscaling to handle peak traffic periods of up to 30 Gbps. It supports enterprise features like threat intelligence, DNS proxy, custom DNS, and web categories.
Azure Firewall Premium is recommended for securing highly sensitive applications, such as those involved in payment processing. It supports advanced threat protection capabilities like malware and TLS inspection. Azure Firewall Premium utilizes advanced hardware and features a higher-performing underlying engine, making it ideal for handling heavier workloads and higher traffic volumes.
Here’s a comparison of the features available in each version of Azure Firewall:
Feature | Basic | Standard | Premium |
---|---|---|---|
Stateful firewall (Layer 3/Layer 4) | Yes | Yes | Yes |
Application FQDN filtering | Yes | Yes | Yes |
Network traffic filtering rules | Yes | Yes | Yes |
Outbound SNAT support | Yes | Yes | Yes |
Threat intelligence-based filtering | No | Yes | Yes |
Web categories | No | Yes | Yes |
Intrusion Detection and Prevention System (IDPS) | No | No | Yes |
TLS Inspection | No | No | Yes |
URL Filtering | No | No | Yes |
Azure Firewall plays a crucial role in the hub-spoke network architecture pattern in Azure. The hub is a virtual network (VNet) in Azure that acts as a central point of connectivity to your on-premises network. The spokes are VNets that peer with the hub and can be used to isolate workloads. Azure Firewall not only secures and inspects network traffic, it also routes traffic between VNets.
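In practice, spoke subnets are pointed at the firewall with a user-defined route (UDR) so that inter-VNet and outbound traffic flows through the hub. Here is a minimal Azure CLI sketch, using placeholder resource names and assuming the firewall's private IP in the AzureFirewallSubnet is 10.0.0.4:

```bash
# Placeholder names; the next-hop IP is the firewall's private IP in AzureFirewallSubnet.
az network route-table create --name rt-spoke --resource-group rg-hub-spoke --location eastus

az network route-table route create --name default-via-firewall \
  --route-table-name rt-spoke --resource-group rg-hub-spoke \
  --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.0.4

# Associate the route table with the spoke's workload subnet.
az network vnet subnet update --vnet-name vnet-spoke1 --name snet-workload \
  --resource-group rg-hub-spoke --route-table rt-spoke
```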
A secured hub is an Azure Virtual WAN Hub with associated security and routing policies configured by Azure Firewall Manager. Use secured virtual hubs to easily create hub-and-spoke and transitive architectures with native security services for traffic governance and protection.
Azure Firewall operates by using rules and rule collections to manage and filter network traffic. Here are some key concepts:

NAT rules: translate and filter inbound traffic that arrives at the firewall's public IP address.

Network rules: filter traffic by source, destination, port, and protocol (Layer 3/Layer 4).

Application rules: filter outbound traffic by fully qualified domain name (FQDN), such as web traffic to specific sites.

Rule collections: rules are grouped into collections that share a single action (Allow or Deny) and are processed in priority order.
Azure Firewall integrates with Azure Monitor for viewing and analyzing logs. Logs can be sent to Log Analytics, Azure Storage, or Event Hubs and analyzed using tools like Log Analytics, Excel, or Power BI.
Create a Resource Group
Sign in to the Azure portal:
Create the Firewall:
Create Application Rules
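The portal walkthrough isn't reproduced here, but the same steps can be scripted with the Azure CLI. The names and address ranges below are placeholders, and the azure-firewall CLI extension is assumed:

```bash
az group create --name rg-fw-demo --location eastus

# The firewall requires a VNet with a dedicated AzureFirewallSubnet and a Standard public IP.
az network vnet create --name vnet-hub --resource-group rg-fw-demo \
  --address-prefix 10.0.0.0/16 --subnet-name AzureFirewallSubnet --subnet-prefix 10.0.0.0/26
az network public-ip create --name pip-fw --resource-group rg-fw-demo --sku Standard

az extension add --name azure-firewall
az network firewall create --name fw-demo --resource-group rg-fw-demo --location eastus
az network firewall ip-config create --firewall-name fw-demo --resource-group rg-fw-demo \
  --name fw-config --public-ip-address pip-fw --vnet-name vnet-hub

# Example application rule allowing outbound HTTPS to a single FQDN.
az network firewall application-rule create --firewall-name fw-demo --resource-group rg-fw-demo \
  --collection-name App-Coll01 --name Allow-GitHub --priority 200 --action Allow \
  --protocols Https=443 --source-addresses 10.0.2.0/24 --target-fqdns github.com
```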
To maximize the performance of your Azure Firewall, it’s important to follow best practices. Here are some recommendations:
Mobile app development is growing rapidly, and so are expectations for robust support. "Mobile first" is the established paradigm for many application development teams. Unlike web deployments, an app release has to go through a review process via App Store Connect and Google Play. Minor and major releases follow the same app review process, which can take 1-4 days. Hot fixes and critical security patches are also bound by the review cycle. This can lead to service disruptions and negative app and customer reviews.
Let's say the latest version of an app is 1.2, but a critical bug was identified in version 1.1. The developers may release version 1.3 with a fix, but it could take a while for the new version to reach users (unless a forced-update mechanism is implemented in the app). Another challenge is that there is no guarantee users have auto-updates turned on.
Luckily, "Over The Air" updates come to the rescue in such situations.

The Over-The-Air (OTA) deployment process for mobile apps allows developers to push updates without going through the traditional review process. The OTA update process enables faster delivery of hot fixes and patches.
While this is very exciting, it does come with a few limitations:
React Native consists of JavaScript and native code. When the app gets compiled, it creates the JS bundles for the Android and iOS apps along with the native builds. OTA relies on those JavaScript bundles, which makes React Native apps great candidates for OTA update technology.
One of our clients' apps has an OTA deployment process implemented using App Center. However, Microsoft has decided to retire App Center as of March 31, 2025, so we started exploring alternatives. One of the alternative solutions on the table was offered by App Center, and the other was to find a similar PaaS solution from another provider. Since the back-end stack was on AWS, we chose to go with EAS Update.
EAS Update is a hosted service that serves updates for projects using the expo-updates library. Once EAS Update is configured correctly, the app listens for updates published for its channel and runtime version on the EAS servers. Expo provides great documentation on setup and configuration.

In a nutshell:
OTA deployment process
Additional details can be found at https://docs.expo.dev/eas-update/how-it-works/.
If you are new to React Native app development, this article may help Ramp Up On React/React Native In Less Than a Month. And if you are transitioning from React to React Native, you may find this React Native – A Web Developer’s Perspective on Pivoting to Mobile useful.
I am using my existing React Native 0.73.7 app. However, you can start a fresh React Native app for your own test.

Project configuration requires us to set up expo-modules. The Expo installation guide provides an installer which handles the configuration. Our project needed the SDK 50 version of the installer.
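A minimal sketch of that setup flow, assuming the EAS CLI is already installed and logged in (package versions are resolved by the installer):

```bash
# Add expo-modules to an existing bare React Native app.
npx install-expo-modules@latest

# Install the updates library and generate the EAS Update configuration (channel, runtime version).
npx expo install expo-updates
eas update:configure
```

After the installer runs, the project picks up a set of expo-related dependencies similar to the lists below.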
"@expo/vector-icons": "^14.0.0", "expo-asset": "~9.0.2", "expo-file-system": "~16.0.9", "expo-font": "~11.10.3", "expo-keep-awake": "~12.8.2", "expo-modules-autolinking": "1.10.3", "expo-modules-core": "1.11.14", "fbemitter": "^3.0.0", "whatwg-url-without-unicode": "8.0.0-3"
"@expo/code-signing-certificates": "0.0.5", "@expo/config": "~8.5.0", "@expo/config-plugins": "~7.9.0", "arg": "4.1.0", "chalk": "^4.1.2", "expo-eas-client": "~0.11.0", "expo-manifests": "~0.13.0", "expo-structured-headers": "~3.7.0", "expo-updates-interface": "~0.15.1", "fbemitter": "^3.0.0", "resolve-from": "^5.0.0"
EAS_CHANNEL=staging RUNTIME_VERSION="7.13" eas update --message "build:[QA] - 7.13.841 - 25.5.9.4 - OTA Test2 commit"
EAS update screen once OTA deployment is successful.
`@rnx-kit/metro-serializer` had to be commented out due to a compatibility issue with the EAS Update bundle process.

If the update request is missing required metadata, you may see an error like the following:

“The headers “expo-runtime-version”, “expo-channel-name”, and “expo-platform” are required. They can also be set with the query parameters “runtime-version”, “channel-name”, and “platform”. Learn more: https://github.com/expo/fyi/blob/main/eas-update-missing-headers.md”

The configuration values for the iOS app are maintained in Supporting/Expo.plist. The above error indicates that the `EXUpdatesRequestHeaders` block in the plist might be missing.
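As a hedged sketch of what that block looks like (the project ID is a placeholder, and the runtime version and channel reflect this post's staging example), Supporting/Expo.plist should contain entries along these lines:

```xml
<key>EXUpdatesURL</key>
<string>https://u.expo.dev/your-eas-project-id</string>
<key>EXUpdatesRuntimeVersion</key>
<string>7.13</string>
<key>EXUpdatesRequestHeaders</key>
<dict>
    <key>expo-channel-name</key>
    <string>staging</string>
</dict>
```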
OTA deployment is very useful when a large number of customers are using the app and an urgent hot fix or patch needs to be released. You can set it up for your lower environments as well as production.

In my experience, it is very reliable, and the Expo team is doing a great job maintaining it.
So take advantage of this amazing service and Happy coding!
For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!
TLS certificate lifetimes are being significantly reduced over the next few years as part of an industry-wide push toward greater security and automation. Here’s the phased timeline currently in place:
Now through March 15, 2026: Maximum lifetime is 398 days
Starting March 15, 2026: Reduced to 200 days
Starting March 15, 2027: Further reduced to 100 days
Starting March 15, 2029: Reduced again to just 47 days
For teams managing Sitecore implementations, this is more than a policy shift—it introduces operational urgency. As certificates begin expiring more frequently, any reliance on manual tracking or last-minute renewals could result in costly downtime or broken integrations.
If your Sitecore environment includes secure endpoints, custom domains, or external integrations, now is the time to assess your certificate strategy and move toward automation.
Sitecore projects often involve:
Multiple environments (development, staging, production) with different certificates
Custom domains or subdomains used for CDNs, APIs, headless apps, or marketing campaigns
Third-party integrations that require secure connections
Marketing and personalization features that rely on seamless uptime
A single expired certificate can lead to downtime, loss of customer trust, or failed integrations—any of which could severely impact your digital experience delivery.
Increased risk of missed renewals if teams rely on manual tracking
Broken environments due to expired certs in Azure, IIS, or Kubernetes configurations
Delayed deployments when certificates must be re-issued last minute
SEO and trust damage if browsers start flagging your site as insecure
To stay ahead of the TLS certificate lifecycle changes, here are concrete steps you should take:
Audit all environments and domains using certificates
Include internal services, custom endpoints, and non-production domains
Use a centralized tracking tool (e.g., Azure Key Vault, HashiCorp Vault, or a certificate management platform)
Wherever possible, switch to automated certificate issuance and renewal
Use services like:
Azure App Service Managed Certificates
Let’s Encrypt with automation scripts
ACME protocol integrations for Kubernetes
For Azure-hosted Sitecore instances, leverage Key Vault and App Gateway integrations
Assign clear ownership of certificate management per environment or domain
Document who is responsible for renewals and updates
Add certificate health checks to your DevOps dashboards
Validate certificate validity before deployments
Fail builds if certificates are nearing expiration (a sample check is sketched after this list)
Include certificate management tasks as part of environment provisioning
Hold knowledge-sharing sessions with developers, infrastructure engineers, and marketers
Make sure everyone understands the impact of expired certificates on the Sitecore experience
Simulate certificate expiry in non-production environments
Monitor behavior in Sitecore XP and XM environments, including CD and CM roles
Validate external systems (e.g., CDNs, integrations, identity providers) against cert failures
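As a concrete example of the "fail builds if certificates are nearing expiration" check above, here is a minimal sketch using openssl; the hostname and threshold are placeholders, and GNU date is assumed:

```bash
#!/usr/bin/env bash
# Pre-deployment gate: fail if the certificate for a host expires within the threshold.
HOST="www.example.com"   # replace with your Sitecore hostname
THRESHOLD_DAYS=30

expiry=$(echo | openssl s_client -servername "$HOST" -connect "$HOST:443" 2>/dev/null \
  | openssl x509 -noout -enddate | cut -d= -f2)
expiry_epoch=$(date -d "$expiry" +%s)   # GNU date assumed
now_epoch=$(date +%s)
days_left=$(( (expiry_epoch - now_epoch) / 86400 ))

echo "Certificate for $HOST expires in $days_left days"
if [ "$days_left" -lt "$THRESHOLD_DAYS" ]; then
  echo "Certificate is nearing expiration; failing the build." >&2
  exit 1
fi
```

A script like this can run as an early step in your pipeline so deployments stop before an expiring certificate causes an outage.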
TLS certificate management is no longer a “set it and forget it” task. With shorter lifetimes becoming the norm, proactive planning is essential to avoid downtime and ensure secure, uninterrupted experiences for your users.
Start by auditing your current certificates and work toward automating renewals. Make certificate monitoring part of your DevOps practice, and ensure your Sitecore teams are aware of the upcoming changes.
Action Items for This Week:
Identify all TLS certificates in your Sitecore environments
Document renewal dates and responsible owners
Begin automating renewals for at least one domain
Review Azure and Sitecore documentation for certificate integration options
Securing your Sitecore XM Cloud environment is critical to protecting your content, your users, and your brand. This post walks through key areas of XM Cloud security, including user management, authentication, secure coding, and best practices you can implement today to reduce your security risks.
We’ll also take a step back to look at the Sitecore Cloud Portal—the central control panel for managing user access across your Sitecore organization. Understanding both the Cloud Portal and XM Cloud’s internal security tools is essential for building a strong foundation of security.
The Sitecore Cloud Portal is the gateway to managing user access across all Sitecore DXP tools, including XM Cloud. Proper setup here ensures that only the right people can view or change your environments and content.
Each user you invite to your Sitecore organization is assigned an Organization Role, which defines their overall access level:
Organization Owner – Full control over the organization, including user and app management.
Organization Admin – Can manage users and assign app access, but cannot assign/remove Owners.
Organization User – Limited access; can only use specific apps they’ve been assigned to.
Tip: Assign the “Owner” role sparingly—only to those who absolutely need full administrative control.
Beyond organization roles, users are granted App Roles for specific products like XM Cloud. These roles determine what actions they can take inside each product:
Admin – Full access to all features of the application.
User – More limited, often focused on content authoring or reviewing.
From the Admin section of the Cloud Portal, Organization Owners or Admins can:
Invite new team members and assign roles.
Grant access to apps like XM Cloud and assign appropriate app-level roles.
Review and update roles as team responsibilities shift.
Remove access when team members leave or change roles.
Security Tips:
Review user access regularly.
Use the least privilege principle—only grant what’s necessary.
Enable Multi-Factor Authentication (MFA) and integrate Single Sign-On (SSO) for extra protection.
Within XM Cloud itself, there’s another layer of user and role management that governs access to content and features.
Users: Individual accounts representing people who work in the XM Cloud instance.
Roles: Collections of users with shared permissions.
Domains: Logical groupings of users and roles, useful for managing access in larger organizations.
Recommendation: Don’t assign permissions directly to users—assign them to roles instead for easier management.
Permissions can be set at the item level for things like reading, writing, deleting, or publishing. Access rights include:
Read
Write
Create
Delete
Administer
Each right can be set to:
Allow
Deny
Inherit
Follow the Role-Based Access Control (RBAC) model.
Create custom roles to reflect your team’s structure and responsibilities.
Audit roles and access regularly to prevent privilege creep.
Avoid modifying default system users—create new accounts instead.
XM Cloud supports robust authentication mechanisms to control access between services, deployments, and repositories.
When integrating external services or deploying via CI/CD, you’ll often need to authenticate through client credentials.
Use the Sitecore Cloud Portal to create and manage client credentials.
Grant only the necessary scopes (permissions) to each credential.
Rotate credentials periodically and revoke unused ones.
Use secure secrets management tools to store client IDs and secrets outside of source code.
For Git and deployment pipelines, connect XM Cloud environments to your repository using secure tokens and limit access to specific environments or branches when possible.
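To illustrate the client-credentials guidance above, here is a hedged sketch of requesting a token for automation. The endpoint and audience values are assumptions, so use the values shown when the credential is created in the Cloud Portal, and keep the secret in a secrets manager rather than source control:

```bash
# Placeholders; copy the real values from the Cloud Portal credential and a secrets store.
export SITECORE_CLIENT_ID="<client-id>"
export SITECORE_CLIENT_SECRET="<client-secret>"

# Standard OAuth2 client-credentials token request (URL and audience are assumptions; verify them).
curl -s -X POST "https://auth.sitecorecloud.io/oauth/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials" \
  -d "client_id=${SITECORE_CLIENT_ID}" \
  -d "client_secret=${SITECORE_CLIENT_SECRET}" \
  -d "audience=https://api.sitecorecloud.io"
```

The returned access token can then be passed to your deployment tooling or CI/CD pipeline for the duration of the job.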
Security isn’t just about who has access—it’s also about how your code and data behave in production.
Sanitize all inputs to prevent injection attacks.
Avoid exposing sensitive information in logs or error messages.
Use HTTPS for all external communications.
Validate data both on the client and server sides.
Keep dependencies up to date and monitor for vulnerabilities.
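As a small illustration of the validation guidance above in a typical headless (Next.js) front end for XM Cloud, the sketch below validates and lightly sanitizes a request body on the server; the route and field names are hypothetical:

```typescript
// pages/api/feedback.ts — hypothetical API route; always validate on the server, not just the client.
import type { NextApiRequest, NextApiResponse } from 'next';

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.method !== 'POST') {
    return res.status(405).json({ error: 'Method not allowed' });
  }

  const { email, comment } = (req.body ?? {}) as { email?: unknown; comment?: unknown };

  // Reject malformed input early; keep error messages generic to avoid leaking details.
  if (typeof email !== 'string' || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    return res.status(400).json({ error: 'Invalid request' });
  }
  if (typeof comment !== 'string' || comment.length === 0 || comment.length > 2000) {
    return res.status(400).json({ error: 'Invalid request' });
  }

  // Naive sanitization step before the value is stored or rendered anywhere.
  const safeComment = comment.replace(/[<>]/g, '');

  return res.status(200).json({ ok: true, comment: safeComment });
}
```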
When using visitor data for personalization, be transparent and follow data privacy best practices:
Explicitly define what data is collected and how it’s used.
Give visitors control over their data preferences.
Avoid storing personally identifiable information (PII) unless absolutely necessary.
Securing your XM Cloud environment is an ongoing process that involves team coordination, regular reviews, and constant vigilance. Here’s how to get started:
Audit your Cloud Portal roles and remove unnecessary access.
Establish a role-based structure in XM Cloud and limit direct user permissions.
Implement secure credential management for deployments and integrations.
Train your developers on secure coding and privacy best practices.
The stronger your security practices, the more confidence you—and your clients—can have in your digital experience platform.
Automating deployments is crucial for efficiency and reliability in today’s fast-paced development environment. GitHub Actions provides a seamless way to implement CI/CD pipelines, allowing developers to automate the deployment of static websites without manual intervention.
In this blog, we will explore how to deploy a static website to an AWS S3 bucket using GitHub Actions. We’ll cover setting up an S3 bucket, configuring IAM roles for secure authentication, and leveraging GitHub Actions workflows to streamline deployment. By the end, you’ll have a fully automated pipeline that ensures quick and secure deployments with minimal effort.
First, create a GitHub repository. I already created one with the same name, which is why it shows as already existing.
You can clone the repository from the URL below and put it into your local system. I have added the website-related code to my GitHub repository, so you just need to clone it: https://github.com/Kunal2795/Static-Website.git.
Push the code to host this static website with your changes, such as updating the bucket name and AWS region. I already have it locally, so you just need to push it using the Git commands below:

Once the changes are pushed to your GitHub repository, ensure the main.yaml file is in the .github/workflows directory.

If the main.yaml file is not present in the .github/workflows/ directory, create it and add a job to run the static website pipeline in GitHub Actions. The main.yaml file is the primary configuration file in GitHub Actions that runs the entire pipeline.
Add the following job code to the main.yaml file in the .github/workflows/ directory:
```yaml
name: Portfolio Deployment2

on:
  push:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: [self-hosted, silver]
    steps:
      - name: Checkout
        uses: actions/checkout@v1

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2

      - name: Deploy static site to S3 bucket
        run: aws s3 sync . s3://kc-devops --delete
```
You need to make some modifications to the above job, such as the S3 bucket name, the AWS region, and the self-hosted runner labels, so they match your setup.
Launch an EC2 instance with Ubuntu OS using a simple configuration.
After that, create a self-hosted runner using specific commands. To get these commands, go to Settings in GitHub, navigate to Actions, click on Runners, and then select Create New Self-Hosted Runner.
Select Linux as the runner image.
Once the runner is downloaded and configured, check its status to ensure it is idle or offline. If it is offline, start the GitHub Runner service on your EC2 server.
Also, ensure that AWS CLI is installed on your server.
Create an IAM user and grant it full access to EC2 and S3 services.
Then, go to Security Credentials, create an Access Key and Secret Access Key, and securely copy and store both the Access Key and Secret Access Key in a safe place.
Next, navigate to GitHub Actions → Secrets & Variables → Actions, then add your AWS Access Key ID and Secret Access Key securely.
After adding the Access Key ID and Secret Access Key, proceed to the next section: S3.
Create an S3 bucket—I have created one with the name kc-devops.
Add the policy below to your S3 bucket and update the bucket name with your own bucket name.
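The policy itself isn't reproduced in this post, but a typical public-read policy for static website hosting on the kc-devops bucket looks like this (replace the bucket name with your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::kc-devops/*"
    }
  ]
}
```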
After setting up everything, go to GitHub Actions, open the main.yaml file, update the bucket name, and commit the changes.
Then, click the Actions tab to see all your triggered workflows and their status.
We can see that all the steps for the build and deploy jobs have been successfully completed.
Lastly, sign in to the AWS Management Console and open the Amazon S3 console. Check that all the code is stored in your bucket.
Then, go to the Properties tab. Under Static website hosting, find and click on the Endpoint URL. (Bucket Website endpoint)
This Endpoint URL is the Amazon S3 website endpoint for your bucket.
Finally, we have successfully deployed and hosted a static website using automation to the Amazon S3 bucket.
With this setup, whenever you push changes to your GitHub repository, GitHub Actions automatically trigger the deployment process. This ensures that your static website is seamlessly updated and deployed to your AWS S3 bucket without any manual intervention. This automation streamlines the deployment workflow, making it more efficient and error-free.
Sitecore frequently releases hotfixes to address reported issues, including critical security vulnerabilities or urgent problems. Having a quick, automated process to apply these updates is crucial. By automating the deployment of Sitecore hotfixes with an Azure DevOps pipeline, you can ensure faster, more reliable updates while reducing human error and minimizing downtime. This approach allows you to apply hotfixes quickly and consistently to your Azure PaaS environment, ensuring your Sitecore instance remains secure and up to date without manual intervention. In this post, we’ll walk you through how to automate this process using Azure DevOps.
Before diving into the pipeline setup, make sure you have the following prerequisites in place:
Steps to Automate Sitecore Hotfix Deployment
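The detailed steps aren't reproduced here, but a minimal sketch of such a pipeline might look like the following. The storage account, container, app name, and service connection are placeholders, and the task names reflect common Azure DevOps tasks; verify them against the task versions available in your organization:

```yaml
trigger: none  # run the hotfix deployment on demand

pool:
  vmImage: 'windows-latest'

variables:
  storageAccount: 'mysitecorehotfixes'        # placeholder
  container: 'hotfixes'
  hotfixBlob: 'Sitecore-Hotfix.scwdp.zip'     # placeholder WDP name

steps:
  - task: AzureCLI@2
    displayName: 'Download hotfix WDP from Azure Storage'
    inputs:
      azureSubscription: 'my-service-connection'   # placeholder
      scriptType: 'ps'
      scriptLocation: 'inlineScript'
      inlineScript: |
        az storage blob download --account-name $(storageAccount) --container-name $(container) `
          --name $(hotfixBlob) --file $(Pipeline.Workspace)/$(hotfixBlob) --auth-mode login

  - task: AzureRmWebAppDeployment@4
    displayName: 'Deploy hotfix WDP to the Sitecore App Service'
    inputs:
      ConnectionType: 'AzureRM'
      azureSubscription: 'my-service-connection'   # placeholder
      appType: 'webApp'
      WebAppName: 'my-sitecore-cm'                 # placeholder
      packageForLinux: '$(Pipeline.Workspace)/$(hotfixBlob)'   # input name per task schema; verify for your version
```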
Automating the deployment of Sitecore hotfixes to Azure PaaS with an Azure DevOps pipeline saves time and ensures consistency and accuracy across environments. By storing the hotfix WDP in an Azure Storage Account, you create a centralized, secure location for all your hotfixes. The Azure DevOps pipeline then handles the rest—keeping your Sitecore environment up to date.
This process makes applying Sitecore hotfixes faster, more reliable, and less prone to error, which is exactly what you need in a production environment.
Visual Studio Code (VS Code) has become a ubiquitous tool in the software development world, prized for its speed, versatility, and extensive customization options. At its heart, VS Code is a lightweight, open-source code editor that supports a vast ecosystem of extensions. These extensions are the key to unlocking the true potential of VS Code, transforming it from a simple editor into a powerful, tailored IDE (Integrated Development Environment).
This blog post will explore the world of VS Code extensions, focusing on how they can enhance your development team’s productivity, code quality, and overall efficiency. We’ll cover everything from selecting the right extensions to managing them effectively and even creating your own custom extensions to meet specific needs.
Extensions are essentially plugins that add new features and capabilities to VS Code. They can range from simple syntax highlighting and code completion tools to more complex features like debuggers, linters, and integration with external services. The Visual Studio Code Marketplace hosts thousands of extensions, catering to virtually every programming language, framework, and development workflow imaginable.
Popular examples include Prettier for automatic code formatting, ESLint for identifying and fixing code errors, and Live Share for real-time collaborative coding.
The benefits of using VS Code extensions are numerous and can significantly impact your development team’s performance.
As software development teams grow and projects become more complex, managing IDE tools effectively becomes crucial. A well-managed IDE environment can significantly impact a team’s ability to deliver high-quality software on time and within budget.
Effectively managing VS Code extensions within a team requires a strategic approach. Here are some best practices to consider:
While VS Code extensions offer numerous benefits, they can also introduce security risks if not managed properly. It’s crucial to be aware of these risks and take steps to mitigate them.
In some cases, existing extensions may not fully meet your team’s specific needs. Creating custom VS Code extensions can be a powerful way to add proprietary capabilities to your IDE and tailor it to your unique workflow. One exciting area is integrating AI Chatbots directly into VS Code for code generation, documentation, and more.
Identify the Need: Start by identifying the specific functionality that your team requires. This could be anything from custom code snippets and templates to integrations with internal tools and services. For this example, we’ll create an extension that allows you to highlight code, right-click, and generate documentation using a custom prompt sent to an AI Chatbot.
Learn the Basics: Familiarize yourself with the Visual Studio Code Extension API and the tools required to develop extensions. The API documentation provides comprehensive guides and examples to help you get started.
Set Up Your Development Environment: Install the necessary tools, such as Node.js and Yeoman, to create and test your extensions. The Yeoman generator for Visual Studio Code extensions can help you quickly scaffold a new project.
Develop Your Extension: Write the code for your extension, leveraging the Visual Studio Code Extension API to add the desired functionality. Be sure to follow best practices for coding and testing to ensure that your extension is reliable, maintainable, and secure.
Test Thoroughly: Test your extension in various scenarios to ensure that it works as expected and doesn’t introduce any new issues. This includes testing with different configurations, environments, and user roles.
Distribute Your Extension: Once your extension is ready, you can distribute it to your team. You can either publish it to the Visual Studio Code Marketplace or share it privately within your organization. Consider using a private extension registry to manage and distribute your custom extensions securely.
Developing robust and efficient VS Code extensions requires careful attention to best practices. Here are some key considerations:
Resource Management:
Use the `context.subscriptions.push()` method to register disposables, which will be automatically disposed of when the extension is deactivated.

Use the `deactivate()` function to clean up any resources that need to be explicitly released when the extension is deactivated.

Asynchronous Operations:

Use `async/await` to handle asynchronous operations in a clean and readable way. This makes your code easier to understand and maintain.

Handle errors with `try/catch` blocks. Log errors and provide informative messages to the user.

Use `vscode.window.withProgress` to provide feedback to the user during long operations.

Security:

Performance:

Code Quality:

User Experience:
By following these best practices, you can develop robust, efficient, and user-friendly VS Code extensions that enhance the development experience for yourself and others.
Let’s walk through creating a custom VS Code extension that integrates with an AI Chatbot to generate documentation for selected code. This example assumes you have access to an AI Chatbot API (like OpenAI’s GPT models). You’ll need an API key. Remember to handle your API key securely and do not commit it to your repository.
1. Scaffold the Extension:
First, use the Yeoman generator to create a new extension project:
yo code
2. Modify the Extension Code:
Open the generated `src/extension.ts` file and add the following code to create a command that sends selected code to the AI Chatbot and displays the generated documentation:

```typescript
import * as vscode from 'vscode';
import axios from 'axios';

export function activate(context: vscode.ExtensionContext) {
  let disposable = vscode.commands.registerCommand('extension.generateDocs', async () => {
    const editor = vscode.window.activeTextEditor;
    if (editor) {
      const selection = editor.selection;
      const selectedText = editor.document.getText(selection);
      const apiKey = 'YOUR_API_KEY'; // Replace with your actual API key
      const apiUrl = 'https://api.openai.com/v1/engines/davinci-codex/completions';

      try {
        const response = await axios.post(
          apiUrl,
          {
            prompt: `Generate documentation for the following code:\n\n${selectedText}`,
            max_tokens: 150,
            n: 1,
            stop: null,
            temperature: 0.5,
          },
          {
            headers: {
              'Content-Type': 'application/json',
              Authorization: `Bearer ${apiKey}`,
            },
          }
        );

        const generatedDocs = response.data.choices[0].text;
        vscode.window.showInformationMessage('Generated Documentation:\n' + generatedDocs);
      } catch (error) {
        vscode.window.showErrorMessage('Error generating documentation: ' + error.message);
      }
    }
  });

  context.subscriptions.push(disposable);
}

export function deactivate() {}
```
3. Update `package.json`:

Add the following command configuration to the `contributes` section of your `package.json` file:

```json
"contributes": {
  "commands": [
    {
      "command": "extension.generateDocs",
      "title": "Generate Documentation"
    }
  ]
}
```
4. Run and Test the Extension:
Press `F5` to open a new VS Code window with your extension loaded. Highlight some code, right-click, and select “Generate Documentation” to see the AI-generated documentation.
Once you’ve developed and tested your custom VS Code extension, you’ll likely want to share it with your team or the wider community. Here’s how to package and distribute your extension, including options for local and private distribution:
1. Package the Extension:
VS Code uses the `vsce` (Visual Studio Code Extensions) tool to package extensions. If you don’t have it installed globally, install it using npm:
npm install -g vsce
Navigate to your extension’s root directory and run the following command to package your extension:
vsce package
This will create a `.vsix` file, which is the packaged extension.
2. Publish to the Visual Studio Code Marketplace:
To publish your extension to the Visual Studio Code Marketplace, you’ll need to create a publisher account and obtain a Personal Access Token (PAT). Follow the instructions on the Visual Studio Code Marketplace to set up your publisher account and generate a PAT.
Once you have your PAT, run the following command to publish your extension:
vsce publish
You’ll be prompted to enter your publisher name and PAT. After successful authentication, your extension will be published to the marketplace.
3. Share Privately:
If you prefer to share your extension privately within your organization, you can distribute the `.vsix` file directly to your team members. They can install the extension by running the following command in VS Code:
code --install-extension your-extension.vsix
Alternatively, you can set up a private extension registry using tools like Azure DevOps Artifacts or npm Enterprise to manage and distribute your custom extensions securely.
Visual Studio Code extensions are a powerful tool for enhancing the capabilities of your development environment and improving your team’s productivity, code quality, and overall efficiency. By carefully selecting, managing, and securing your extensions, you can create a tailored IDE that meets your specific needs and helps your team deliver high-quality software on time and within budget. Whether you’re using existing extensions from the marketplace or creating your own custom solutions, the possibilities are endless. Embrace the power of VS Code extensions and unlock the full potential of your development team.
Transferring Route 53 hosted zone records between AWS accounts using the CLI involves exporting the records from one account and then importing them to another. Here’s a step-by-step guide:
The primary objective of this process is to migrate Route 53 hosted zone records seamlessly between AWS accounts while ensuring minimal disruption to DNS functionality. This involves securely transferring DNS records, preserving their integrity, maintaining availability, and ensuring linked AWS resources remain accessible. Additionally, cross-account DNS access may be implemented as needed to meet business requirements.
wget https://github.com/barnybug/cli53/releases/download/0.8.16/cli53-linux-amd64

Note: A Linux machine can also be used, but it requires the cli53 dependency and AWS credentials to be configured.

Note: microsoft.com was created as a dummy hosted zone for this walkthrough.
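The individual export/import commands aren't shown above, but assuming cli53 is installed and AWS credential profiles exist for both accounts, the flow looks roughly like this (profile names are placeholders):

```bash
# In the source account: export the zone to a standard BIND-style zone file.
cli53 export --profile source-account --full microsoft.com > microsoft.com.zone

# In the destination account: create the hosted zone, then import the records.
cli53 create --profile dest-account microsoft.com
cli53 import --profile dest-account --file microsoft.com.zone microsoft.com
```

After the import, update the domain's registrar (or parent zone) to use the new hosted zone's name servers, and verify resolution before decommissioning the old zone.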
When migrating Route 53 hosted zones between AWS accounts, applying best practices helps ensure a smooth transition with minimal disruption. Here are key best practices for a successful Route 53 hosted zone migration:
Migrating a Route 53 hosted zone between AWS accounts involves careful planning, especially to ensure DNS records are exported and imported correctly. After migrating, testing is crucial to confirm that DNS resolution works as expected. Cross-account setups may require additional configuration, such as Route 53 Resolver rules, to ensure seamless DNS functionality across environments.
Unit testing is a crucial aspect of software development, especially in complex applications like Android apps. It involves testing individual units of code, such as methods or classes, in isolation. This ensures the correctness of each component, leading to a more robust and reliable application.
The Model-View-ViewModel (MVVM) architectural pattern is widely adopted in Android app development. It separates the application into three distinct layers:
Unit testing each layer in an MVVM architecture offers numerous benefits:
```groovy
testImplementation 'junit:junit:4.13.2'
androidTestImplementation 'androidx.test.ext:junit:1.1.5'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.5.1'
```
Example:
```kotlin
@RunWith(AndroidJUnit4::class)
class MyViewModelTest {

    @Test
    fun `should update the UI when data is fetched successfully`() {
        // ... (Arrange)
        val viewModel = MyViewModel(mockRepository)

        // ... (Act)
        viewModel.fetchData()

        // ... (Assert)
        viewModel.uiState.observeForever { uiState ->
            assertThat(uiState.isLoading).isFalse()
            assertThat(uiState.error).isNull()
            assertThat(uiState.data).isEqualTo(expectedData)
        }
    }
}
```
Example:
```kotlin
@RunWith(AndroidJUnit4::class)
class MyRepositoryTest {

    @Test
    fun `should fetch data from remote source successfully`() {
        // ... (Arrange)
        val mockApi = mock(MyApi::class.java)
        val repository = MyRepository(mockApi)

        // ... (Act)
        repository.fetchData()

        // ... (Assert)
        verify(mockApi).fetchData()
    }
}
```
SonarQube is a powerful tool for code quality and security analysis. Here’s a detailed guide on how to integrate SonarQube with your Android project:
```properties
sonar.host.url=http://localhost:9000
sonar.login=your_sonar_login
sonar.password=your_sonar_password
sonar.projectKey=my-android-project
sonar.projectName=My Android Project
sonar.sources=src/main/java
sonar.java.binaries=build/intermediates/javac/release/classes
```
```groovy
plugins {
    id 'org.sonarqube' version '3.3'
}
```
Configure the plugin with your SonarQube server URL and authentication token.
Test coverage measures the percentage of your code that is covered by tests. It’s a crucial metric to assess the quality of your test suite. Here’s how to measure test coverage with Bitrise:
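The Bitrise steps aren't listed here, but coverage is typically produced by JaCoCo and then collected by the CI. A minimal sketch of a JaCoCo report task in the app module's build.gradle follows; the paths, variant, and version are assumptions for a typical debug build:

```groovy
apply plugin: 'jacoco'

jacoco {
    toolVersion = "0.8.8"
}

// Generates XML/HTML coverage reports after local unit tests for the debug variant.
task jacocoTestReport(type: JacocoReport, dependsOn: 'testDebugUnitTest') {
    reports {
        xml.required = true
        html.required = true
    }
    def fileFilter = ['**/R.class', '**/R$*.class', '**/BuildConfig.*', '**/Manifest*.*']
    classDirectories.setFrom(fileTree(dir: "$buildDir/intermediates/javac/debug/classes", excludes: fileFilter))
    sourceDirectories.setFrom(files("$projectDir/src/main/java"))
    executionData.setFrom(fileTree(dir: buildDir, includes: ['jacoco/testDebugUnitTest.exec']))
}
```

On Bitrise, a Gradle Runner or Script step can then run `./gradlew jacocoTestReport` and export the generated report as a build artifact, or feed it to SonarQube for coverage analysis.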
By following these guidelines and incorporating unit testing into your development process, you can significantly improve the quality and reliability of your Android apps.
Last week, I noticed that deployments to Sitecore XM Cloud were failing on one of my projects. In this blog post, I’ll review the troubleshooting steps I went through and what the issue turned out to be. To provide a bit more context on the DevOps setup for this particular project, an Azure DevOps pipeline runs a script. That script uses the Sitecore CLI and the Sitecore XM Cloud plugin’s cloud deployment command to deploy to XM Cloud. The last successful deployment was just a few days prior and there hadn’t been many code changes since. Initially, I was pretty stumped but, hey, what can you do except start from the top…
I noticed that the version of `msbuild` had changed between the last successful deployment and the more recent failed deployments. I downloaded the same, newer version of `msbuild` and verified, once again, that I could restore NuGet packages and build the solution.

At this point, while I continued to analyze the deployment logs, I opened a Sitecore support ticket to have them weigh in. I provided support with the last known working build logs, the latest failed build logs, and the list of my troubleshooting steps up to that point.
After hearing back from Sitecore support, it turned out that Sitecore had recently made a change to how the `buildTargets` property in the `xmcloud.build.json` file was consumed and used as part of deployments. To quote the support engineer:
There were some changes in the build process, and now the build targets are loaded from the “buildTargets ” list. The previous working builds were using the “.sln” file directly.
It looks like that resulted in the build not working properly for some projects.
The suggested fix was to specifically target the Visual Studio solution file to ensure that the XM Cloud Deployment NuGet package restore and compilation worked as expected. My interpretation of the change was “XM Cloud Deploy used to not care about/respect `buildTargets`…but now it does.”
After creating a pull request to change the `buildTargets` property from this (targeting the specific, top-level project):
{ ... "buildTargets": [ "./src/platform/Project/Foo.Project.Platform/Platform.csproj" ] ... }
…to this (targeting the solution):
{ ... "buildTargets": [ "./Foo.sln" ] ... }
…the next deployment to XM Cloud (via CI/CD) worked as expected.
After asking the Sitecore support engineer where this change was documented, they graciously escalated internally and posted a new event to the Sitecore Status Page to acknowledge the change/issue: Deployment is failing to build.
If you’re noticing that your XM Cloud deployments are failing on the build step while compiling your Visual Studio solution, make sure you’re targeting the solution file (`.sln`) and not a specific project file (`.csproj`) in the `buildTargets` property in the `xmcloud.build.json` file…because it matters now, apparently.
Thanks for the read!
Various deployment methods, including cloud-based (e.g., CloudHub) and on-premises, are available to meet diverse infrastructure needs. GitHub, among other tools, supports versioning and code backup, while CI/CD practices automate integration and deployment processes, enhancing code quality and speeding up software delivery.
GitHub Actions, an automation platform by GitHub, streamlines building, testing, and deploying software workflows directly from repositories. Although commonly associated with cloud deployments, GitHub Actions can be adapted for on-premises setups with self-hosted runners. These runners serve as the execution environment, enabling deployment tasks on local infrastructure.
Configuring self-hosted runners allows customization of GitHub Actions workflows for on-premises deployment needs. Workflows can automate tasks like Docker image building, artifact pushing to private registries, and application deployment to local servers.
Leveraging GitHub Actions for on-premises deployment combines the benefits of automation, version control, and collaboration with control over infrastructure and deployment processes.
A runner refers to the machine responsible for executing tasks within a GitHub Actions workflow. It performs various tasks defined in the action script, like cloning the code directory, building the code, testing the code, and installing various tools and software required to run the GitHub action workflow.
These are virtual machines provided by GitHub to run workflows. Each machine comes pre-configured with the environment, tools, and settings required for GitHub Actions. GitHub-hosted runners support various operating systems, such as Ubuntu Linux, Windows, and macOS.
A self-hosted runner is a system deployed and managed by the user to execute GitHub Actions jobs. Compared to GitHub-hosted runners, self-hosted runners offer more flexibility and control over hardware, operating systems, and software tools. Users can customize hardware configurations, install software from their local network, and choose operating systems not provided by GitHub-hosted runners. Self-hosted runners can be physical machines, virtual machines, containers, on-premises servers, or cloud-based instances.
Self-hosted runners play a crucial role in deploying applications on the on-prem server using GitHub Action Scripts and establishing connectivity with an on-prem server. These runners can be created at different management levels within GitHub: repository, organization, and enterprise.
By leveraging self-hosted runners for deployment, organizations can optimize control, customization, performance, and cost-effectiveness while meeting compliance requirements and integrating seamlessly with existing infrastructure and tools. A few advantages of self-hosted runners are given below.
Self-hosted runners allow organizations to maintain control over their infrastructure and deployment environment. This includes implementing specific security measures tailored to the organization’s requirements, such as firewall rules and access controls.
With self-hosted runners, you have the flexibility to customize the hardware and software environment to match your specific needs. This can include installing specific libraries, tools, or dependencies required for your applications or services.
Self-hosted runners can offer improved performance compared to cloud-based alternatives, especially for deployments that require high computational resources or low-latency connections to local resources.
While cloud-based solutions often incur ongoing costs based on usage and resource consumption, self-hosted runners can provide cost savings by utilizing existing infrastructure without incurring additional cloud service charges.
For organizations operating in regulated industries or regions with strict compliance requirements, self-hosted runners offer greater control and visibility over where code is executed and how data is handled, facilitating compliance efforts.
In environments where internet connectivity is limited or unreliable, self-hosted runners enable deployment workflows to continue functioning without dependency on external cloud services or repositories.
Self-hosted runners can be scaled up or down according to demand, allowing organizations to adjust resource allocation based on workload fluctuations or project requirements.
Self-hosted runners seamlessly integrate with existing on-premises tools and infrastructure, facilitating smoother adoption and interoperability within the organization’s ecosystem.
Follow the steps below to create and utilize a self-hosted runner.
mkdir actions-runner; cd actions-runner
Invoke-WebRequest -Uri https://github.com/actions/runner/releases/download/v2.316.0/actions-runner-win-x64-2.316.0.zip -OutFile actions-runner-win-x64-2.316.0.zip
Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory("$PWD/actions-runner-win-x64-2.316.0.zip", "$PWD")
./config.cmd --url https://github.com/<owner>/<repo_name> --token <token>
./run.cmd
```yaml
# Use the YAML snippet below in your workflow file for each job
runs-on: self-hosted
```
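Putting it together, a minimal workflow job pinned to a self-hosted runner might look like the following; the labels, paths, and deployment command are assumptions for a Windows on-prem server:

```yaml
name: On-Prem Deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    # Labels must match the ones assigned when the runner was registered.
    runs-on: [self-hosted, windows, on-prem]
    steps:
      - uses: actions/checkout@v4

      - name: Deploy build output to local IIS site
        shell: powershell
        run: Copy-Item -Path .\dist\* -Destination 'C:\inetpub\wwwroot\myapp' -Recurse -Force
```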
The process for creating organization-level and enterprise-level self-hosted runners follows similar steps. Still, the runners created at these levels can serve multiple repositories or organizations within the account. The setup process generally involves administrative permissions and configuration at a broader level.
By following these steps, you can set up self-hosted runners to enable connectivity between your on-prem server and GitHub Action Scripts, facilitating on-prem deployments seamlessly.
Perficient is excited to announce our achievement of the Amazon Web Services (AWS) DevOps Competency for AWS partners. This designation recognizes Perficient as an AWS partner that has demonstrated expertise in delivering DevSecOps solutions. This competency highlights Perficient’s ability to drive innovation, meet business objectives, and get the most out of your AWS services.
Achieving the AWS DevOps Competency status differentiates Perficient as an AWS Partner Network (APN) member that provides modern product engineering solutions designed to help enterprises adopt, develop, and deploy complex projects faster on AWS. To receive the designation, APN members must possess deep AWS expertise and deliver solutions seamlessly on AWS.
This competency empowers our delivery teams to break down traditional silos, shorten feedback loops, and respond more effectively to changes, ultimately increasing speed to market by up to 75%.
With our partnership with AWS, we can modernize our clients’ processes to improve product quality, scalability, and performance, and significantly reduce release costs by up to 97%. This achievement ensures that our CI/CD processes and IT governance are sustainable and efficient, benefiting organizations of any size.
At Perficient, we strive to be the place where great minds and great companies converge to boldly advance business, and this achievement is a testament to that vision!