DevOps Articles / Blogs / Perficient
https://blogs.perficient.com/tag/devops/

A Comprehensive Guide to Azure Firewall
https://blogs.perficient.com/2025/06/03/a-comprehensive-guide-to-azure-firewall/
Tue, 03 Jun 2025

Azure Firewall, a managed, cloud-based network security service, is an essential component of Azure’s security offerings. It comes in three different versions – Basic, Standard, and Premium – each designed to cater to a wide range of customer use cases and preferences. This blog post will provide a comprehensive comparison of these versions, discuss best practices for their use, and delve into their application in hub-spoke and Azure Virtual WAN with Secure Hub architectures.

What is Azure Firewall?

Azure Firewall is a cloud-native, intelligent network firewall security service designed to protect your Azure cloud workloads. It offers top-tier threat protection and is fully stateful, meaning it can track the state of network connections and make decisions based on the context of the traffic.

Key Features of Azure Firewall

  • High Availability: Built-in high availability ensures that your firewall remains operational at all times.
  • Scalability: Unlimited cloud scalability to handle varying workloads.
  • Traffic Inspection: Inspects both east-west (within the same network) and north-south (between different networks) traffic.
  • Threat Intelligence: Uses advanced threat intelligence to block malicious IP addresses and domains.
  • Centralized Management: Allows you to centrally create, enforce, and log application and network connectivity policies across multiple subscriptions and virtual networks.
  • Compliance: Helps organizations meet regulatory and compliance requirements by providing detailed logging and monitoring capabilities.
  • Cost Efficiency: By deploying Azure Firewall in a central virtual network, you can achieve cost savings by avoiding the need to deploy multiple firewalls across different networks.

    Firewall Architecture

Why Azure Firewall is Essential

Enhanced Security

In today’s digital landscape, cyber threats are becoming increasingly sophisticated. Organizations need robust security measures to protect their data and applications. Azure Firewall provides enhanced security by inspecting both inbound and outbound traffic, using advanced threat intelligence to block malicious IP addresses and domains. This ensures that your network is protected against a wide range of threats, including malware, phishing, and other cyberattacks.

Centralized Management

Managing network security across multiple subscriptions and virtual networks can be a complex and time-consuming process. Azure Firewall simplifies this process by allowing you to centrally create, enforce, and log application and network connectivity policies. This centralized management ensures consistent security policies across your organization, making it easier to maintain and monitor your network security.

Scalability

Businesses often experience fluctuating traffic volumes, which can strain network resources. Azure Firewall offers unlimited cloud scalability, meaning it can handle varying workloads without compromising performance. This scalability is crucial for businesses that need to accommodate peak traffic periods and ensure continuous protection.

High Availability

Downtime can be costly for businesses, both in terms of lost revenue and damage to reputation. Azure Firewall’s built-in high availability ensures that your firewall is always operational, minimizing downtime and maintaining continuous protection.

Compliance

Many industries have strict data protection regulations that organizations must comply with. Azure Firewall helps organizations meet these regulatory and compliance requirements by providing detailed logging and monitoring capabilities. This is particularly vital for industries such as finance, healthcare, and government, where data security is of paramount importance.

Cost Efficiency

Deploying multiple firewalls across different networks can be expensive. By deploying Azure Firewall in a central virtual network, organizations can achieve cost savings. This centralized approach reduces the need for multiple firewalls, lowering overall costs while maintaining robust security.

Azure Firewall Versions: Basic, Standard, and Premium

Azure Firewall Basic

Azure Firewall Basic is recommended for small to medium-sized business (SMB) customers with throughput needs of up to 250 Mbps. It’s a cost-effective solution for businesses that require fundamental network protection.

Azure Firewall Standard

Azure Firewall Standard is recommended for customers who are looking for a Layer 3–Layer 7 firewall and who need autoscaling to handle peak traffic periods of up to 30 Gbps. It supports enterprise features like threat intelligence, DNS proxy, custom DNS, and web categories.

Azure Firewall Premium

Azure Firewall Premium is recommended for securing highly sensitive applications, such as those involved in payment processing. It supports advanced threat protection capabilities like malware and TLS inspection. Azure Firewall Premium utilizes advanced hardware and features a higher-performing underlying engine, making it ideal for handling heavier workloads and higher traffic volumes.

Azure Firewall Features Comparison

Here’s a comparison of the features available in each version of Azure Firewall:

Feature                                          | Basic | Standard | Premium
Stateful firewall (Layer 3/Layer 4)              | Yes   | Yes      | Yes
Application FQDN filtering                       | Yes   | Yes      | Yes
Network traffic filtering rules                  | Yes   | Yes      | Yes
Outbound SNAT support                            | Yes   | Yes      | Yes
Threat intelligence-based filtering              | No    | Yes      | Yes
Web categories                                   | No    | Yes      | Yes
Intrusion Detection and Prevention System (IDPS) | No    | No       | Yes
TLS Inspection                                   | No    | No       | Yes
URL Filtering                                    | No    | No       | Yes

Azure Firewall Architecture

Azure Firewall plays a crucial role in the hub-spoke network architecture pattern in Azure. The hub is a virtual network (VNet) in Azure that acts as a central point of connectivity to your on-premises network. The spokes are VNets that peer with the hub and can be used to isolate workloads. Azure Firewall not only secures and inspects network traffic, it also routes traffic between VNets.

A secured hub is an Azure Virtual WAN Hub with associated security and routing policies configured by Azure Firewall Manager. Use secured virtual hubs to easily create hub-and-spoke and transitive architectures with native security services for traffic governance and protection.

Spoke-to-spoke routing

How Azure Firewall Works

Azure Firewall operates by using rules and rule collections to manage and filter network traffic. Here are some key concepts:

  • Rule Collections: A set of rules with the same order and priority. Rule collections are executed in priority order.
  • Application Rules: Configure fully qualified domain names (FQDNs) that can be accessed from a virtual network.
  • Network Rules: Configure rules with source addresses, protocols, destination ports, and destination addresses.
  • NAT Rules: Configure DNAT rules to allow incoming Internet or intranet connections.

Azure Firewall integrates with Azure Monitor for viewing and analyzing logs. Logs can be sent to Log Analytics, Azure Storage, or Event Hubs and analyzed using tools like Log Analytics, Excel, or Power BI.

Steps to Deploy and Configure Azure Firewall

Step 1: Set Up the Network

Create a Resource Group
Sign in to the Azure portal:

  • Navigate to Azure Portal.
    • Use your credentials to sign in.
  • Create a Resource Group:
    • On the Azure portal menu, select Resource groups or search for and select Resource groups from any page.
    • Click Create.
    • Enter the following values:
      • Subscription: Select your Azure subscription.
      • Resource group: Enter Test-FW-RG.
      • Region: Select a region (ensure all resources you create are in the same region).
    • Click Review + create and then Create.
  • Create a Virtual Network (VNet)
    • On the Azure portal menu or from the Home page, select Create a resource.
    • Select Networking and search for Virtual network, then click Create.
    • Enter the following values:
      • Subscription: Select your Azure subscription.
      • Resource group: Select Test-FW-RG.
      • Name: Enter Test-FW-VN.
      • Region: Select the same region as the resource group.
  • Click Next: IP Addresses.
    • Configure IP Addresses:
    • Set the Address space to 10.0.0.0/16.
      • Create two subnets:
      • AzureFirewallSubnet: Enter 10.0.1.0/26.
      • Workload-SN: Enter 10.0.2.0/24.
  • Click Next: Security.
    • Configure Security Settings:
    • Leave the default settings for Security.
  • Click Next: Tags.
    • Add Tags (Optional):
    • Tags are useful for organizing resources. Add any tags if needed.
  • Click Next: Review + create.
    • Review and Create:
    • Review the settings and click Create.
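
If you prefer to script this step, the same network can be created with the Azure CLI. This is a minimal sketch, assuming the Azure CLI is installed and you are signed in; the location value is a placeholder you should replace with your own region.

# Resource group (replace the location with your preferred region)
az group create --name Test-FW-RG --location eastus

# Virtual network with the firewall subnet
az network vnet create \
  --resource-group Test-FW-RG \
  --name Test-FW-VN \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name AzureFirewallSubnet \
  --subnet-prefixes 10.0.1.0/26

# Workload subnet
az network vnet subnet create \
  --resource-group Test-FW-RG \
  --vnet-name Test-FW-VN \
  --name Workload-SN \
  --address-prefixes 10.0.2.0/24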

 


Step 2: Deploy the Firewall

Create the Firewall:

  • On the Azure portal menu, select Create a resource.
    • Search for Firewall and select Create.
    • Enter the following values:
      • Subscription: Select your Azure subscription.
      • Resource group: Select Test-FW-RG.
      • Name: Enter Test-FW.
      • Region: Select the same region as the resource group.
      • Virtual network: Select Test-FW-VN.
      • Subnet: Select AzureFirewallSubnet.
  • Click Next: IP Addresses.
    • Configure IP Addresses:
    • Assign a Public IP Address:
      • Click Add new.
      • Enter a name for the public IP address, e.g., Test-FW-PIP.
      • Click OK.
  • Click Next: Tags.
    • Add Tags (Optional):
    • Add any tags if needed.
  • Click Next: Review + create.
    • Review and Create:
    • Review the settings and click Create.
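
The firewall deployment can also be scripted. The sketch below assumes the azure-firewall CLI extension is installed (az extension add --name azure-firewall) and reuses the resource names from the steps above.

# Public IP for the firewall
az network public-ip create \
  --resource-group Test-FW-RG \
  --name Test-FW-PIP \
  --sku Standard \
  --allocation-method Static

# The firewall itself
az network firewall create \
  --resource-group Test-FW-RG \
  --name Test-FW

# Attach the firewall to AzureFirewallSubnet and the public IP
az network firewall ip-config create \
  --resource-group Test-FW-RG \
  --firewall-name Test-FW \
  --name FW-Config \
  --public-ip-address Test-FW-PIP \
  --vnet-name Test-FW-VN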


Step 3: Configure Firewall Rules

Create Application Rules

  • Navigate to the Firewall:
    • Go to the Resource groups and select Test-FW-RG.
    • Click on Test-FW.
  • Configure Application Rules:
    • Select Rules from the left-hand menu.
    • Click Add application rule collection.
    • Enter the following values:
      • Name: Enter AppRuleCollection.
      • Priority: Enter 100.
      • Action: Select Allow.
      • Rules: Click Add rule.
      • Name: Enter AllowGoogle.
      • Source IP addresses: Enter *.
      • Protocol: Select http, https.
      • Target FQDNs: Enter www.google.com.
    • Click Add.
  • Create Network Rules
  • Configure Network Rules:
    • Select Rules from the left-hand menu.
    • Click Add network rule collection.
    • Enter the following values:
      • Name: Enter NetRuleCollection.
      • Priority: Enter 200.
      • Action: Select Allow.
      • Rules: Click Add rule.
      • Name: Enter AllowDNS.
      • Source IP addresses: Enter *.
      • Protocol: Select UDP.
      • Destination IP addresses: Enter 8.8.8.8, 8.8.4.4.
      • Destination ports: Enter 53.
    • Click Add.
  • Create NAT Rules
    • Configure NAT Rules:
      • Select Rules from the left-hand menu.
      • Click Add NAT rule collection.
      • Enter the following values:
        • Name: Enter NATRuleCollection.
        • Priority: Enter 300.
        • Action: Select DNAT.
        • Rules: Click Add rule.
        • Name: Enter AllowRDP.
        • Source IP addresses: Enter *.
        • Protocol: Select TCP.
        • Destination IP addresses: Enter the public IP address of the firewall.
        • Destination ports: Enter 3389.
        • Translated address: Enter the private IP address of the workload server.
        • Translated port: Enter 3389.
      • Click Add.
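
For repeatable environments, the same classic rule collections can be created from the command line. The sketch below covers the application and network rules from the steps above, again assuming the azure-firewall CLI extension is installed.

# Application rule: allow HTTP/HTTPS to www.google.com
az network firewall application-rule create \
  --resource-group Test-FW-RG \
  --firewall-name Test-FW \
  --collection-name AppRuleCollection \
  --name AllowGoogle \
  --priority 100 \
  --action Allow \
  --source-addresses "*" \
  --protocols Http=80 Https=443 \
  --target-fqdns www.google.com

# Network rule: allow DNS to Google's public resolvers
az network firewall network-rule create \
  --resource-group Test-FW-RG \
  --firewall-name Test-FW \
  --collection-name NetRuleCollection \
  --name AllowDNS \
  --priority 200 \
  --action Allow \
  --source-addresses "*" \
  --protocols UDP \
  --destination-addresses 8.8.8.8 8.8.4.4 \
  --destination-ports 53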


Step 4: Test the Firewall

  • Deploy a Test VM:
    • Create a virtual machine in the Workload-SN subnet.
    • Ensure it has a private IP address within the 10.0.2.0/24 range.
  • Test Connectivity:
    • Attempt to access www.google.com from the test VM to verify the application rule.
    • Attempt to resolve DNS queries to 8.8.8.8 and 8.8.4.4 to verify the network rule.
    • Attempt to connect via RDP to the test VM using the public IP address of the firewall to verify the NAT rule.
  • Monitoring and Managing Azure Firewall
    • Integrate with Azure Monitor:
      • Navigate to the firewall resource.
        • Select Logs from the left-hand menu.
        • Configure diagnostic settings to send logs to Azure Monitor, Log Analytics, or Event Hubs.
  • Analyze Logs:
    • Use Azure Monitor to view and analyze firewall logs.
    • Create alerts and dashboards to monitor firewall activity and performance.


Best Practices for Azure Firewall

To maximize the performance of your Azure Firewall, it’s important to follow best practices. Here are some recommendations:

  • Optimize Rule Configuration and Processing: Organize rules using firewall policy into Rule Collection Groups and Rule Collections, prioritizing them based on their frequency of use.
  • Use or Migrate to Azure Firewall Premium: Azure Firewall Premium offers a higher-performing underlying engine and includes built-in accelerated networking software.
  • Add Multiple Public IP Addresses to the Firewall: Consider adding multiple public IP addresses (PIPs) to your firewall to prevent SNAT port exhaustion.
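
As a rough sketch, an additional public IP can be attached to an existing firewall like this (resource names reused from the walkthrough above; each extra IP adds more available SNAT ports):

az network public-ip create \
  --resource-group Test-FW-RG \
  --name Test-FW-PIP-2 \
  --sku Standard \
  --allocation-method Static

az network firewall ip-config create \
  --resource-group Test-FW-RG \
  --firewall-name Test-FW \
  --name FW-Config-2 \
  --public-ip-address Test-FW-PIP-2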

Over The Air Updates for React Native Apps
https://blogs.perficient.com/2025/06/02/over-the-air-ota-deployment-process-for-mobile-app/
Mon, 02 Jun 2025

Mobile app development is growing rapidly, and so are expectations of robust support. “Mobile first” is the set paradigm for many application development teams. Unlike web deployment, an app release has to go through the review process via App Store Connect and Google Play. Minor and major releases follow the same app review process, which can take 1-4 days. Hot fixes and critical security patches are also bound by the review cycle restrictions. This can lead to service disruptions and negative app and customer reviews.

Let’s say that the latest version of an app is version 1.2. However, a critical bug was identified in version 1.1. The app developers may release version 1.3, but the challenge would be that it may take a while to release the new version (unless a forced update mechanism is implemented for the app). Another potential challenge would be the fact that there is no guarantee that the user would have auto updates on.

Luckily, “Over The Air” updates come to the rescue in such situations.

The Over The Air (OTA) deployment process for mobile apps allows developers to push updates without going through the traditional review process. The OTA update process enables faster delivery of any hot fix or patch.

While this is very exciting, it does come with a few limitations:

  • This feature is not intended for major updates or large feature launches.
  • OTA primarily works with the JavaScript bundle, so native code changes cannot be deployed via OTA deployment.

Mobile OTA Deployment

React Native consists of JavaScript and native code. When the app gets compiled, it creates the JS bundles for the Android and iOS apps along with the native builds. OTA relies on these JavaScript bundles, which makes React Native apps great candidates for taking advantage of OTA update technology.

One of our client apps had an OTA deployment process implemented using App Center. However, Microsoft has decided to retire App Center as of March 31, 2025, so we started exploring alternatives. One of the alternate solutions on the table was provided by App Center, and the other was to find a similar PaaS solution from another provider. Since the back-end stack was AWS, we chose to go with EAS Update.

EAS Update

EAS Update is a hosted service that serves updates for projects using the expo-updates library. Once EAS Update is configured correctly, the app will check the EAS cloud server for any update targeting its version of the app. Expo provides great documentation on setup and configuration.

How Does It Work?

In a nutshell;

  1. Integrate “EAS Updates” in the app project.
  2. The user has the app installed on their device.
  3. The development team makes a bug fix/patch, generates the JS bundle for the targeted app version, and uploads it to the Expo.dev cloud server.
  4. Next time the user opens the app (the check frequency is configurable; it can be set to run on app resume/start), the app will check whether any bundle is available to be installed. If there is an update available, the newer version of the app from Expo will be installed on the user’s device.

OTA deployment process flow

Additional details can be found at https://docs.expo.dev/eas-update/how-it-works/.

Implementation Details:

If you are new to React Native app development, this article may help Ramp Up On React/React Native In Less Than a Month. And if you are transitioning from React to React Native, you may find this React Native – A Web Developer’s Perspective on Pivoting to Mobile useful.

I am using my existing React-Native 0.73.7 app. However, one can start a fresh React Native App for your test.

Project configuration requires us to set up expo-modules. The Expo installation guide provides an installer which handles configuration. Our project needed the SDK 50 version of the installer.

  • Using npx install-expo-modules@0.8.1, I installed Expo SDK 50, in alignment with our current React Native version 0.73.7, which added the following dependencies.
"@expo/vector-icons": "^14.0.0",
"expo-asset": "~9.0.2",
"expo-file-system": "~16.0.9",
"expo-font": "~11.10.3",
"expo-keep-awake": "~12.8.2",
"expo-modules-autolinking": "1.10.3",
"expo-modules-core": "1.11.14",
"fbemitter": "^3.0.0",
"whatwg-url-without-unicode": "8.0.0-3"
  • Installed the expo-updates v0.24.14 package, which added the following dependencies.
"@expo/code-signing-certificates": "0.0.5",
"@expo/config": "~8.5.0",
"@expo/config-plugins": "~7.9.0",
"arg": "4.1.0",
"chalk": "^4.1.2",
"expo-eas-client": "~0.11.0",
"expo-manifests": "~0.13.0",
"expo-structured-headers": "~3.7.0",
"expo-updates-interface": "~0.15.1",
"fbemitter": "^3.0.0",
"resolve-from": "^5.0.0"
  • Created expo account at https://expo.dev/signup
  • To set up the account, execute eas configure.
  • This generated the project id and other account details.
  • Following channels were created: staging, uat, and production.
  • Added relevant project values to app.json, added Expo.plist, and updated same in AndroidManifest.xml.
  • Scripts block of package.json has been updated to use npx expo to launch the app.
  • AppDelegate.swift was refactored as part of the change.
  • App Center and CodePush assets and references were removed.
  • Created custom component to display a modal prompt when new update is found.
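
For reference, the channels themselves can be created with eas-cli. A small sketch, assuming eas-cli is installed globally and you are logged in to your Expo account:

npm install -g eas-cli
eas login

# One channel per environment
eas channel:create staging
eas channel:create uat
eas channel:create production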

OTA Deployment:

  • Execute the command via terminal:
EAS_CHANNEL=staging RUNTIME_VERSION="7.13" eas update --message "build:[QA] - 7.13.841 - 25.5.9.4 - OTA Test2 commit"
  • Once the package is published, I can see my update available in expo.dev as shown in the image below.
EAS Update screen once the OTA deployment is successful.

Test:

  1. Unlike App Center, Expo provides the same package for iOS and Android targets.
  2. The targeted version package is available on the expo server.
  3. An app restart or resume will display the popup (custom implementation) informing the user that “A new update is available.”
  4. When the user hits the “OK” button in the popup, the update will be installed and the content within the app will restart.
  5. If the app successfully restarts, the update is successfully installed.

Considerations:

  • In metro.config.js, the @rnx-kit/metro-serializer had to be commented out due to a compatibility issue with the EAS Update bundle process.
  • The @expo/vector-icons package causes the Android release build to crash on app startup. The package can be removed, but if package-lock.json is removed, the package will reinstall as an Expo dependency and again cause the app to crash. The issue is described in the comments here: https://github.com/expo/expo/issues/26521. There is no solution available at the moment. The Expo vector icons package isn’t being handled correctly during the build process, and the problem is triggered by the react-native-elements package. When that package is removed, the font files are no longer added to app.manifest and the app builds and runs as expected.
  • Somehow the font require statements in node_modules/react-native-elements/dist/helpers/getIconType.js are being picked up during the expo-updates generation of app.manifest even though the files are not used in our app. The current solution is to go ahead and include the fonts in the package, but this is not optimal. A better solution would be to filter those fonts out of the expo-updates process.

Deployment Troubleshooting:

  • Error fetching latest Expo update: Error: “channel-name” is not allowed to be empty.

The headers “expo-runtime-version”, “expo-channel-name”, and “expo-platform” are required. They can also be set with the query parameters “runtime-version”, “channel-name”, and “platform”. Learn more: https://github.com/expo/fyi/blob/main/eas-update-missing-headers.md

The configuration values for the iOS app are maintained in Supporting/Expo.plist. The above error indicates that the EXUpdatesRequestHeaders block in the plist might be missing.

OTA deployment is very useful when a large number of customers are using the app and an urgent hot fix or patch needs to be released. You can set this up for your lower environments as well as production.

In my experience, it is very reliable, and the Expo team is doing a great job of maintaining it.

So take advantage of this amazing service and Happy coding!

 

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!


How the Change to TLS Certificate Lifetimes Will Affect Sitecore Projects (and How to Prepare)
https://blogs.perficient.com/2025/04/18/how-the-change-to-tls-certificate-lifetimes-will-affect-sitecore-projects-and-how-to-prepare/
Fri, 18 Apr 2025

TLS certificate lifetimes are being significantly reduced over the next few years as part of an industry-wide push toward greater security and automation. Here’s the phased timeline currently in place:

  • Now through March 15, 2026: Maximum lifetime is 398 days

  • Starting March 15, 2026: Reduced to 200 days

  • Starting March 15, 2027: Further reduced to 100 days

  • Starting March 15, 2029: Reduced again to just 47 days

For teams managing Sitecore implementations, this is more than a policy shift—it introduces operational urgency. As certificates begin expiring more frequently, any reliance on manual tracking or last-minute renewals could result in costly downtime or broken integrations.

If your Sitecore environment includes secure endpoints, custom domains, or external integrations, now is the time to assess your certificate strategy and move toward automation.

Why This Matters for Sitecore

Sitecore projects often involve:

  • Multiple environments (development, staging, production) with different certificates

  • Custom domains or subdomains used for CDNs, APIs, headless apps, or marketing campaigns

  • Third-party integrations that require secure connections

  • Marketing and personalization features that rely on seamless uptime

A single expired certificate can lead to downtime, loss of customer trust, or failed integrations—any of which could severely impact your digital experience delivery.

Key Risks of Shorter TLS Lifetimes

  • Increased risk of missed renewals if teams rely on manual tracking

  • Broken environments due to expired certs in Azure, IIS, or Kubernetes configurations

  • Delayed deployments when certificates must be re-issued last minute

  • SEO and trust damage if browsers start flagging your site as insecure

How to Prepare Your Sitecore Project Teams

To stay ahead of the TLS certificate lifecycle changes, here are concrete steps you should take:

1. Inventory All TLS Certificates

  • Audit all environments and domains using certificates

  • Include internal services, custom endpoints, and non-production domains

  • Use a centralized tracking tool (e.g., Azure Key Vault, HashiCorp Vault, or a certificate management platform)

2. Automate Certificate Renewals

  • Wherever possible, switch to automated certificate issuance and renewal

  • Use services like:

    • Azure App Service Managed Certificates

    • Let’s Encrypt with automation scripts

    • ACME protocol integrations for Kubernetes

  • For Azure-hosted Sitecore instances, leverage Key Vault and App Gateway integrations
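
As one example of what automation outside the Azure-managed options can look like, a Let's Encrypt certificate can be issued with certbot and renewed on a schedule. The domain, webroot path, and schedule below are placeholders for a Linux host:

# Issue a certificate using the webroot challenge
sudo certbot certonly --webroot -w /var/www/site -d www.example.com

# Crontab entry: check for renewals twice a day (certbot only renews when expiry is near)
0 */12 * * * certbot renew --quiet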

3. Establish Certificate Ownership

  • Assign clear ownership of certificate management per environment or domain

  • Document who is responsible for renewals and updates

  • Add certificate health checks to your DevOps dashboards

4. Integrate Certificate Checks into CI/CD Pipelines

  • Validate certificate validity before deployments

  • Fail builds if certificates are nearing expiration

  • Include certificate management tasks as part of environment provisioning
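
A simple way to wire this into a pipeline is a shell step that inspects the live certificate with openssl and fails the build when expiry is close. This is a sketch for a Linux build agent; the host name and threshold are placeholders:

HOST="www.example.com"
THRESHOLD_DAYS=30

# Grab the certificate's notAfter date from the live endpoint
expiry=$(echo | openssl s_client -servername "$HOST" -connect "$HOST:443" 2>/dev/null \
  | openssl x509 -noout -enddate | cut -d= -f2)

expiry_epoch=$(date -d "$expiry" +%s)
now_epoch=$(date +%s)
days_left=$(( (expiry_epoch - now_epoch) / 86400 ))

echo "$HOST certificate expires in $days_left days"
if [ "$days_left" -lt "$THRESHOLD_DAYS" ]; then
  echo "Certificate is inside the renewal window - failing the build."
  exit 1
fi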

5. Educate Your Team

  • Hold knowledge-sharing sessions with developers, infrastructure engineers, and marketers

  • Make sure everyone understands the impact of expired certificates on the Sitecore experience

6. Test Expiry Scenarios

  • Simulate certificate expiry in non-production environments

  • Monitor behavior in Sitecore XP and XM environments, including CD and CM roles

  • Validate external systems (e.g., CDNs, integrations, identity providers) against cert failures

Final Thoughts

TLS certificate management is no longer a “set it and forget it” task. With shorter lifetimes becoming the norm, proactive planning is essential to avoid downtime and ensure secure, uninterrupted experiences for your users.

Start by auditing your current certificates and work toward automating renewals. Make certificate monitoring part of your DevOps practice, and ensure your Sitecore teams are aware of the upcoming changes.

Action Items for This Week:

  • Identify all TLS certificates in your Sitecore environments

  • Document renewal dates and responsible owners

  • Begin automating renewals for at least one domain

  • Review Azure and Sitecore documentation for certificate integration options


Security Best Practices in Sitecore XM Cloud
https://blogs.perficient.com/2025/04/16/security-best-practices-in-sitecore-xm-cloud/
Wed, 16 Apr 2025

Securing your Sitecore XM Cloud environment is critical to protecting your content, your users, and your brand. This post walks through key areas of XM Cloud security, including user management, authentication, secure coding, and best practices you can implement today to reduce your security risks.

We’ll also take a step back to look at the Sitecore Cloud Portal—the central control panel for managing user access across your Sitecore organization. Understanding both the Cloud Portal and XM Cloud’s internal security tools is essential for building a strong foundation of security.


Sitecore Cloud Portal User Management: Centralized Access Control

The Sitecore Cloud Portal is the gateway to managing user access across all Sitecore DXP tools, including XM Cloud. Proper setup here ensures that only the right people can view or change your environments and content.

Organization Roles

Each user you invite to your Sitecore organization is assigned an Organization Role, which defines their overall access level:

  • Organization Owner – Full control over the organization, including user and app management.

  • Organization Admin – Can manage users and assign app access, but cannot assign/remove Owners.

  • Organization User – Limited access; can only use specific apps they’ve been assigned to.

Tip: Assign the “Owner” role sparingly—only to those who absolutely need full administrative control.

App Roles

Beyond organization roles, users are granted App Roles for specific products like XM Cloud. These roles determine what actions they can take inside each product:

  • Admin – Full access to all features of the application.

  • User – More limited, often focused on content authoring or reviewing.

Managing Access

From the Admin section of the Cloud Portal, Organization Owners or Admins can:

  • Invite new team members and assign roles.

  • Grant access to apps like XM Cloud and assign appropriate app-level roles.

  • Review and update roles as team responsibilities shift.

  • Remove access when team members leave or change roles.

Security Tips:

  • Review user access regularly.

  • Use the least privilege principle—only grant what’s necessary.

  • Enable Multi-Factor Authentication (MFA) and integrate Single Sign-On (SSO) for extra protection.


XM Cloud User Management and Access Rights

Within XM Cloud itself, there’s another layer of user and role management that governs access to content and features.

Key Concepts

  • Users: Individual accounts representing people who work in the XM Cloud instance.

  • Roles: Collections of users with shared permissions.

  • Domains: Logical groupings of users and roles, useful for managing access in larger organizations.

Recommendation: Don’t assign permissions directly to users—assign them to roles instead for easier management.

Access Rights

Permissions can be set at the item level for things like reading, writing, deleting, or publishing. Access rights include:

  • Read

  • Write

  • Create

  • Delete

  • Administer

Each right can be set to:

  • Allow

  • Deny

  • Inherit

Best Practices

  • Follow the Role-Based Access Control (RBAC) model.

  • Create custom roles to reflect your team’s structure and responsibilities.

  • Audit roles and access regularly to prevent privilege creep.

  • Avoid modifying default system users—create new accounts instead.


Authentication and Client Credentials

XM Cloud supports robust authentication mechanisms to control access between services, deployments, and repositories.

Managing Client Credentials

When integrating external services or deploying via CI/CD, you’ll often need to authenticate through client credentials.

  • Use the Sitecore Cloud Portal to create and manage client credentials.

  • Grant only the necessary scopes (permissions) to each credential.

  • Rotate credentials periodically and revoke unused ones.

  • Use secure secrets management tools to store client IDs and secrets outside of source code.

For Git and deployment pipelines, connect XM Cloud environments to your repository using secure tokens and limit access to specific environments or branches when possible.
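
In a pipeline, the client-credentials exchange typically looks like the sketch below. The token endpoint is a placeholder for whichever endpoint your credentials were issued against, the ID and secret are assumed to come from your pipeline's secret store rather than source code, and curl and jq are assumed to be available on the agent.

TOKEN_ENDPOINT="https://auth.example.com/oauth/token"   # placeholder endpoint
CLIENT_ID="$XMCLOUD_CLIENT_ID"                          # injected from the secret store
CLIENT_SECRET="$XMCLOUD_CLIENT_SECRET"

# Exchange the client credentials for a short-lived access token
ACCESS_TOKEN=$(curl -s -X POST "$TOKEN_ENDPOINT" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "grant_type=client_credentials" \
  --data-urlencode "client_id=$CLIENT_ID" \
  --data-urlencode "client_secret=$CLIENT_SECRET" | jq -r '.access_token')

# Use $ACCESS_TOKEN in subsequent API or deployment calls; never persist it to disk or logs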


Secure Coding and Data Handling

Security isn’t just about who has access—it’s also about how your code and data behave in production.

Secure Coding Practices

  • Sanitize all inputs to prevent injection attacks.

  • Avoid exposing sensitive information in logs or error messages.

  • Use HTTPS for all external communications.

  • Validate data both on the client and server sides.

  • Keep dependencies up to date and monitor for vulnerabilities.

Data Privacy and Visitor Personalization

When using visitor data for personalization, be transparent and follow data privacy best practices:

  • Explicitly define what data is collected and how it’s used.

  • Give visitors control over their data preferences.

  • Avoid storing personally identifiable information (PII) unless absolutely necessary.


Where to Go from Here

Securing your XM Cloud environment is an ongoing process that involves team coordination, regular reviews, and constant vigilance. Here’s how to get started:

  • Audit your Cloud Portal roles and remove unnecessary access.

  • Establish a role-based structure in XM Cloud and limit direct user permissions.

  • Implement secure credential management for deployments and integrations.

  • Train your developers on secure coding and privacy best practices.

The stronger your security practices, the more confidence you—and your clients—can have in your digital experience platform.


Automate the Deployment of a Static Website to an S3 Bucket Using GitHub Actions
https://blogs.perficient.com/2025/03/05/automate-the-deployment-of-a-static-website-to-an-s3-bucket-using-github-actions/
Wed, 05 Mar 2025

Automating deployments is crucial for efficiency and reliability in today’s fast-paced development environment. GitHub Actions provides a seamless way to implement CI/CD pipelines, allowing developers to automate the deployment of static websites without manual intervention.

In this blog, we will explore how to deploy a static website to an AWS S3 bucket using GitHub Actions. We’ll cover setting up an S3 bucket, configuring IAM roles for secure authentication, and leveraging GitHub Actions workflows to streamline deployment. By the end, you’ll have a fully automated pipeline that ensures quick and secure deployments with minimal effort.

Prerequisites

  1. Amazon S3 Bucket: Create an S3 bucket and enable static website hosting.
  2. IAM User & Permissions: Create an IAM user with access to S3 and store credentials securely.
  3. GitHub Repository: Your static website code should be in a GitHub repository.
  4. GitHub Secrets: Store AWS credentials in GitHub Actions Secrets.
  5. Amazon EC2 – to create a self-hosted runner.

Deploy a Static Website to an S3 Bucket

Step 1

First, create a GitHub repository. I have already created one with the same name, which is why GitHub reports that it already exists.


 

 

Step 2

You can clone the repository from the URL below and put it into your local system. I have added the website-related code to my GitHub repository, so you just need to clone it: https://github.com/Kunal2795/Static-Website.git.

 

Step 3

Push the code for this static website with your changes, such as updating the bucket name and AWS region. I already have it locally, so you just need to push it with the standard git add, git commit, and git push commands.


Step 4

Once the changes are pushed to your GitHub repository, ensure the main.yaml file is in the .github/workflows directory.


If the main.yaml file is not present in the .github/workflows/ directory, create it and add a job to run the static website pipeline in GitHub Actions. The main.yaml file is the primary configuration file in GitHub Actions that runs the entire pipeline.

Add the following job code to the main.yaml file in the .github/workflows/ directory:

name: Portfolio Deployment2

on:
  push:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: [self-hosted, silver]
    steps:
      - name: Checkout
        uses: actions/checkout@v1

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2

      - name: Deploy static site to S3 bucket
        run: aws s3 sync . s3://kc-devops --delete

 

You need to make some modifications in the above jobs, such as:

  • runs-on – Add either a self-hosted runner or a default runner (I have added a self-hosted runner).
  • aws-access-key-id – You need to add the Access Key ID variable name (store the value as a GitHub Actions secret, which I will show you below).
  • aws-secret-access-key – You need to add the Secret Access Key variable name (store its value as a secret as well, shown below).
  • aws-region – Add the region of the S3 bucket.
  • run – In that section, you need to add the path of your bucket where you want to store your static website code.

How to Create a Self-hosted Runner

Launch an EC2 instance with Ubuntu OS using a simple configuration.


After that, create a self-hosted runner using specific commands. To get these commands, go to Settings in GitHub, navigate to Actions, click on Runners, and then select Create New Self-Hosted Runner.

Select Linux as the runner image.


Run the download and configure commands provided by GitHub step by step on your EC2 server to download and configure the self-hosted runner.


Once the runner is downloaded and configured, check its status; it should show as Idle. If it shows as Offline, start the GitHub runner service on your EC2 server.

Also, ensure that AWS CLI is installed on your server.


IAM User

Create an IAM user and grant it full access to EC2 and S3 services.


Then, go to Security Credentials, create an Access Key and Secret Access Key, and securely copy and store both the Access Key and Secret Access Key in a safe place.


 

Next, navigate to GitHub Actions → Secrets & Variables → Actions, then add your AWS Access Key ID and Secret Access Key securely.


After adding the Access Key ID and Secret Access Key, proceed to the next section: S3.

Create an S3 bucket—I have created one with the name kc-devops.


Add the policy below to your S3 bucket and update the bucket name with your own bucket name.

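A typical public-read bucket policy for S3 static website hosting looks like the sketch below; substitute your own bucket name for kc-devops, and note that Block Public Access must allow bucket policies for this to take effect. It can be applied with the AWS CLI:

cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::kc-devops/*"
    }
  ]
}
EOF

aws s3api put-bucket-policy --bucket kc-devops --policy file://policy.json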

After setting up everything, go to GitHub Actions, open the main.yaml file, update the bucket name, and commit the changes.

Then, click the Actions tab to see all your triggered workflows and their status.


We can see that all the steps for the build and deploy jobs have been successfully completed.


Lastly, sign in to the AWS Management Console and open the Amazon S3 console. Check that all the code is stored in your bucket.


Then, go to the Properties tab. Under Static website hosting, find and click on the Endpoint URL. (Bucket Website endpoint)

This Endpoint URL is the Amazon S3 website endpoint for your bucket.


Output

Finally, we have successfully deployed and hosted a static website using automation to the Amazon S3 bucket.


Conclusion

With this setup, whenever you push changes to your GitHub repository, GitHub Actions automatically trigger the deployment process. This ensures that your static website is seamlessly updated and deployed to your AWS S3 bucket without any manual intervention. This automation streamlines the deployment workflow, making it more efficient and error-free.

 


Install Sitecore Hotfixes on Azure PaaS with Azure DevOps Pipeline
https://blogs.perficient.com/2025/02/17/install-sitecore-hotfixes-on-azure-paas-with-azure-devops-pipeline/
Mon, 17 Feb 2025

Why Automate Sitecore Hotfix Deployment to Azure PaaS?

Sitecore frequently releases hotfixes to address reported issues, including critical security vulnerabilities or urgent problems. Having a quick, automated process to apply these updates is crucial. By automating the deployment of Sitecore hotfixes with an Azure DevOps pipeline, you can ensure faster, more reliable updates while reducing human error and minimizing downtime. This approach allows you to apply hotfixes quickly and consistently to your Azure PaaS environment, ensuring your Sitecore instance remains secure and up to date without manual intervention. In this post, we’ll walk you through how to automate this process using Azure DevOps.

Prerequisites for Automating Sitecore Hotfix Deployment

Before diving into the pipeline setup, make sure you have the following prerequisites in place:

  1. Azure DevOps Account: Ensure you have access to Azure DevOps to create and manage pipelines.
  2. Azure Storage Account: You’ll need an Azure Storage Account to store your Sitecore WDP hotfix files.
  3. Azure Subscription: Your Azure PaaS environment should be up and running, with a subscription linked to Azure DevOps.
  4. Sitecore Hotfix WDP: Download the Cloud Cumulative package for your version and topology. Be sure to check the release notes for additional instructions.

Steps to Automate Sitecore Hotfix Deployment

  1. Upload Your Sitecore Hotfix to Azure Storage
    • Create a storage container in Azure to store your WDP files.
    • Upload the hotfix using Azure Portal, Storage Explorer, or CLI.
  2. Create a New Pipeline in Azure DevOps
    • Navigate to Pipelines and create a new pipeline.
    • Select the repository containing your Sitecore solution.
    • Configure the pipeline using YAML for flexibility and automation.
  3. Define the Pipeline to Automate Hotfix Deployment
    • Retrieve the Azure Storage connection string securely via Azure Key Vault.
    • Download the Sitecore hotfix from Azure Storage.
    • Deploy the hotfix package to the Azure Web App production slot.
  4. Set Up Pipeline Variables
    • Store critical values like storage connection strings and hotfix file names securely.
    • Ensure the web application name is correctly configured in the pipeline.
  5. Trigger and Verify the Deployment
    • Run the pipeline manually or set up an automatic trigger on commit.
    • Verify the applied hotfix by checking the Sitecore instance and confirming issue resolution.
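
Under the hood, the deployment stage boils down to a few CLI calls. The sketch below is illustrative only: all resource and file names are placeholders, zip deploy is shown as a simplifying assumption (Sitecore WDP hotfixes are often applied with the Web Deploy task instead), and the slot swap follows the deployment-slot recommendation covered later in this post.

STORAGE_ACCOUNT="sitecorehotfixes"      # placeholder names throughout
CONTAINER="wdp-hotfixes"
HOTFIX="sitecore-hotfix.scwdp.zip"
RESOURCE_GROUP="sc-prod-rg"
WEBAPP="sc-prod-cm"

# Pull the hotfix package from the central storage container
az storage blob download \
  --account-name "$STORAGE_ACCOUNT" \
  --container-name "$CONTAINER" \
  --name "$HOTFIX" \
  --file "$HOTFIX" \
  --auth-mode login

# Deploy to a staging slot first
az webapp deploy \
  --resource-group "$RESOURCE_GROUP" \
  --name "$WEBAPP" \
  --slot staging \
  --src-path "$HOTFIX" \
  --type zip

# Swap into production once the hotfix has been verified
az webapp deployment slot swap \
  --resource-group "$RESOURCE_GROUP" \
  --name "$WEBAPP" \
  --slot staging \
  --target-slot production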

Enhancing Security in the Deployment Process

  • Use Azure Key Vault: Securely store sensitive credentials and access keys, preventing unauthorized access.
  • Restrict Access to Storage Accounts: Implement role-based access control (RBAC) to limit who can modify or retrieve the hotfix files.
  • Enable Logging and Monitoring: Utilize Azure Monitor and Application Insights to track deployment performance and detect potential failures.

Handling Rollbacks and Errors

  • Implement Deployment Slots: Test hotfix deployments in a staging slot before swapping them into production.
  • Set Up Automated Rollbacks: Configure rollback procedures to revert to a previous stable version if an issue is detected.
  • Enable Notifications: Use Azure DevOps notifications to alert teams about deployment success or failure.

Scaling the Approach for Large Deployments

  • Automate Across Multiple Environments: Extend the pipeline to deploy hotfixes across development, QA, and production environments.
  • Use Infrastructure as Code (IaC): Leverage tools like Terraform or ARM templates to ensure a consistent infrastructure setup.
  • Integrate Automated Testing: Implement testing frameworks such as Selenium or JMeter to verify hotfix functionality before deployment.

Why Streamline Sitecore Hotfix Deployments with Azure DevOps is Important

Automating the deployment of Sitecore hotfixes to Azure PaaS with an Azure DevOps pipeline saves time and ensures consistency and accuracy across environments. By storing the hotfix WDP in an Azure Storage Account, you create a centralized, secure location for all your hotfixes. The Azure DevOps pipeline then handles the rest—keeping your Sitecore environment up to date.

This process makes applying Sitecore hotfixes faster, more reliable, and less prone to error, which is exactly what you need in a production environment.


Extending the Capabilities of Your Development Team with Visual Studio Code Extensions
https://blogs.perficient.com/2025/02/11/extending-the-capabilities-of-your-development-team-with-visual-studio-code-extensions/
Tue, 11 Feb 2025

Introduction

Visual Studio Code (VS Code) has become a ubiquitous tool in the software development world, prized for its speed, versatility, and extensive customization options. At its heart, VS Code is a lightweight, open-source code editor that supports a vast ecosystem of extensions. These extensions are the key to unlocking the true potential of VS Code, transforming it from a simple editor into a powerful, tailored IDE (Integrated Development Environment).

This blog post will explore the world of VS Code extensions, focusing on how they can enhance your development team’s productivity, code quality, and overall efficiency. We’ll cover everything from selecting the right extensions to managing them effectively and even creating your own custom extensions to meet specific needs.

What are Visual Studio Code Extensions?

Extensions are essentially plugins that add new features and capabilities to VS Code. They can range from simple syntax highlighting and code completion tools to more complex features like debuggers, linters, and integration with external services. The Visual Studio Code Marketplace hosts thousands of extensions, catering to virtually every programming language, framework, and development workflow imaginable.

Popular examples include Prettier for automatic code formatting, ESLint for identifying and fixing code errors, and Live Share for real-time collaborative coding.

Why Use Visual Studio Code Extensions?

The benefits of using VS Code extensions are numerous and can significantly impact your development team’s performance.

  1. Improve Code Quality: Extensions like ESLint and JSHint help enforce coding standards and identify potential errors early in the development process. This leads to more robust, maintainable, and bug-free code.
  2. Boost Productivity: Extensions like Auto Close Tag and IntelliCode automate repetitive tasks, provide intelligent code completion, and streamline your workflow. This allows developers to focus on solving complex problems rather than getting bogged down in tedious tasks.
  3. Enhance Collaboration: Extensions like Live Share enable real-time collaboration, making it easier for team members to review code, pair program, and troubleshoot issues together, regardless of their physical location.
  4. Customize Your Workflow: VS Code’s flexibility allows you to tailor your development environment to your specific needs and preferences. Extensions like Bracket Pair Colorizer and custom themes can enhance readability and create a more comfortable and efficient working environment.
  5. Stay Current: Extensions provide support for the latest technologies and frameworks, ensuring that your team can quickly adapt to new developments in the industry and leverage the best tools for the job.
  6. Save Time: By automating common tasks and providing intelligent assistance, extensions like Path Intellisense can significantly reduce the amount of time spent on mundane tasks, freeing up more time for creative problem-solving and innovation.
  7. Ensure Consistency: Extensions like EditorConfig help enforce coding standards and best practices across your team, ensuring that everyone is following the same guidelines and producing consistent, maintainable code.
  8. Enhance Debugging: Powerful debugging extensions like Debugger for Java provide advanced debugging capabilities, making it easier to identify and resolve issues quickly and efficiently.

Managing IDE Tools for Mature Software Development Teams

As software development teams grow and projects become more complex, managing IDE tools effectively becomes crucial. A well-managed IDE environment can significantly impact a team’s ability to deliver high-quality software on time and within budget.

  1. Standardization: Ensuring that all team members use the same tools and configurations reduces discrepancies, improves collaboration, and simplifies onboarding for new team members. Standardized extensions help maintain code quality and consistency, especially in larger teams where diverse setups can lead to confusion and inefficiencies.
  2. Efficiency: Streamlining the setup process for new team members allows them to get up to speed quickly. Automated setup scripts can install all necessary extensions and configurations in one go, saving time and reducing the risk of errors.
  3. Quality Control: Enforcing coding standards and best practices across the team is essential for maintaining code quality. Extensions like SonarLint can continuously analyze code quality, catching issues early and preventing bugs from making their way into production.
  4. Scalability: As your team evolves and adopts new technologies, managing IDE tools effectively facilitates the integration of new languages, frameworks, and tools. This ensures that your team can quickly adapt to new developments and leverage the best tools for the job.
  5. Security: Keeping all tools and extensions up-to-date and secure is paramount, especially for teams working on sensitive or high-stakes projects. Regularly updating extensions prevents security issues and ensures access to the latest features and security patches.

Best Practices for Managing VS Code Extensions in a Team

Effectively managing VS Code extensions within a team requires a strategic approach. Here are some best practices to consider:

  1. Establish an Approved Extension List: Create and maintain a list of extensions that are approved for use by the team. This ensures that everyone is using the same core tools and configurations, reducing inconsistencies and improving collaboration. Consider using a shared document or a dedicated tool to manage this list.
  2. Automate Installation and Configuration: Use tools like Visual Studio Code Settings Sync or custom scripts to automate the installation and configuration of extensions and settings for all team members. This ensures that everyone has the same setup without manual intervention, saving time and reducing the risk of errors.
  3. Implement Regular Audits and Updates: Regularly review and update the list of approved extensions to add new tools, remove outdated ones, and ensure that all extensions are up-to-date with the latest security patches. This helps keep your team current with the latest developments and minimizes security risks.
  4. Provide Training and Documentation: Offer training and documentation on the approved extensions and best practices for using them. This helps ensure that all team members are proficient in using the tools and can leverage them effectively.
  5. Encourage Feedback and Collaboration: Encourage team members to provide feedback on the approved extensions and suggest new tools that could benefit the team. This fosters a culture of continuous improvement and ensures that the team is always using the best tools for the job.
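
A lightweight way to implement the first two practices is the VS Code CLI itself. This is a sketch, assuming an approved-extensions.txt file checked into a shared repository with one extension ID per line (for example, dbaeumer.vscode-eslint or esbenp.prettier-vscode):

# Capture the current machine's extensions when building the initial approved list
code --list-extensions > approved-extensions.txt

# Install every approved extension on a new team member's machine
while read -r extension; do
  code --install-extension "$extension" --force
done < approved-extensions.txt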

Security Considerations for VS Code Extensions

While VS Code extensions offer numerous benefits, they can also introduce security risks if not managed properly. It’s crucial to be aware of these risks and take steps to mitigate them.

  1. Verify the Source: Only install extensions from trusted sources, such as the Visual Studio Code Marketplace. Avoid downloading extensions from unknown or unverified sources, as they may contain malware or other malicious code.
  2. Review Permissions: Carefully review the permissions requested by extensions before installing them. Be cautious of extensions that request excessive permissions or access to sensitive data, as they may be attempting to compromise your security.
  3. Keep Extensions Updated: Regularly update your extensions to ensure that you have the latest security patches and bug fixes. Outdated extensions can be vulnerable to security exploits, so it’s important to keep them up-to-date.
  4. Use Security Scanning Tools: Consider using security scanning tools to automatically identify and assess potential security vulnerabilities in your VS Code extensions. These tools can help you proactively identify and address security risks before they can be exploited.

Creating Custom Visual Studio Code Extensions

In some cases, existing extensions may not fully meet your team’s specific needs. Creating custom VS Code extensions can be a powerful way to add proprietary capabilities to your IDE and tailor it to your unique workflow. One exciting area is integrating AI Chatbots directly into VS Code for code generation, documentation, and more.

  1. Identify the Need: Start by identifying the specific functionality that your team requires. This could be anything from custom code snippets and templates to integrations with internal tools and services. For this example, we’ll create an extension that allows you to highlight code, right-click, and generate documentation using a custom prompt sent to an AI Chatbot.

  2. Learn the Basics: Familiarize yourself with the Visual Studio Code Extension API and the tools required to develop extensions. The API documentation provides comprehensive guides and examples to help you get started.

  3. Set Up Your Development Environment: Install the necessary tools, such as Node.js and Yeoman, to create and test your extensions. The Yeoman generator for Visual Studio Code extensions can help you quickly scaffold a new project.

  4. Develop Your Extension: Write the code for your extension, leveraging the Visual Studio Code Extension API to add the desired functionality. Be sure to follow best practices for coding and testing to ensure that your extension is reliable, maintainable, and secure.

  5. Test Thoroughly: Test your extension in various scenarios to ensure that it works as expected and doesn’t introduce any new issues. This includes testing with different configurations, environments, and user roles.

  6. Distribute Your Extension: Once your extension is ready, you can distribute it to your team. You can either publish it to the Visual Studio Code Marketplace or share it privately within your organization. Consider using a private extension registry to manage and distribute your custom extensions securely.

Best Practices for Extension Development

Developing robust and efficient VS Code extensions requires careful attention to best practices. Here are some key considerations:

  • Resource Management:

    • Dispose of Resources: Properly dispose of any resources your extension creates, such as disposables, subscriptions, and timers. Use the context.subscriptions.push() method to register disposables, which will be automatically disposed of when the extension is deactivated.
    • Avoid Memory Leaks: Be mindful of memory usage, especially when dealing with large files or data sets. Use techniques like streaming and pagination to process data in smaller chunks.
    • Clean Up on Deactivation: Implement the deactivate() function to clean up any resources that need to be explicitly released when the extension is deactivated.
  • Asynchronous Operations:

    • Use Async/Await: Use async/await to handle asynchronous operations in a clean and readable way. This makes your code easier to understand and maintain.
    • Handle Errors: Properly handle errors in asynchronous operations using try/catch blocks. Log errors and provide informative messages to the user.
    • Avoid Blocking the UI: Ensure that long-running operations are performed in the background to avoid blocking the VS Code UI. Use vscode.window.withProgress to provide feedback to the user during long operations.
  • Security:

    • Validate User Input: Sanitize and validate any user input to prevent security vulnerabilities like code injection and cross-site scripting (XSS).
    • Secure API Keys: Store API keys and other sensitive information securely. Use VS Code’s secret storage API to encrypt and protect sensitive data.
    • Limit Permissions: Request only the necessary permissions for your extension. Avoid requesting excessive permissions that could compromise user security.
  • Performance:

    • Optimize Code: Optimize your code for performance. Use efficient algorithms and data structures to minimize execution time.
    • Lazy Load Resources: Load resources only when they are needed. This can improve the startup time of your extension.
    • Cache Data: Cache frequently accessed data to reduce the number of API calls and improve performance.
  • Code Quality:

    • Follow Coding Standards: Adhere to established coding standards and best practices. This makes your code more readable, maintainable, and less prone to errors.
    • Write Unit Tests: Write unit tests to ensure that your code is working correctly. This helps you catch bugs early and prevent regressions.
    • Use a Linter: Use a linter to automatically identify and fix code style issues. This helps you maintain a consistent code style across your project.
  • User Experience:

    • Provide Clear Feedback: Provide clear and informative feedback to the user. Use status bar messages, progress bars, and error messages to keep the user informed about what’s happening.
    • Respect User Settings: Respect user settings and preferences. Allow users to customize the behavior of your extension to suit their needs.
    • Keep it Simple: Keep your extension simple and easy to use. Avoid adding unnecessary features that could clutter the UI and confuse the user.

By following these best practices, you can develop robust, efficient, and user-friendly VS Code extensions that enhance the development experience for yourself and others. The short sketch below pulls several of these practices together.
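To make a few of these guidelines concrete, here is a minimal, hypothetical sketch (the command ID, secret key name, and doWork helper are placeholders, not part of any real extension) that registers a disposable command, reads a token from VS Code's secret storage instead of hard-coding it, and wraps a long-running task in vscode.window.withProgress so the UI stays responsive:

import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  // Register the command and push the disposable so it is cleaned up on deactivation
  const disposable = vscode.commands.registerCommand('myExt.runTask', async () => {
    // Read a token from secret storage rather than hard-coding it in source
    const token = await context.secrets.get('myExt.apiToken');
    if (!token) {
      vscode.window.showWarningMessage('No API token found. Store one before running this command.');
      return;
    }

    // Keep the UI responsive: run the long operation behind a progress notification
    await vscode.window.withProgress(
      { location: vscode.ProgressLocation.Notification, title: 'Running task…' },
      async (progress) => {
        try {
          progress.report({ message: 'Contacting service…' });
          await doWork(token); // placeholder for the extension's real work
          vscode.window.showInformationMessage('Task completed.');
        } catch (err: any) {
          // Handle errors from async work and give the user an actionable message
          vscode.window.showErrorMessage(`Task failed: ${err?.message ?? err}`);
        }
      }
    );
  });

  context.subscriptions.push(disposable);
}

async function doWork(token: string): Promise<void> {
  // Placeholder for the real work (e.g., calling an API with the token)
  console.log(`Calling service with a token of length ${token.length}`);
  await new Promise((resolve) => setTimeout(resolve, 1000));
}

export function deactivate() {
  // Explicit cleanup hook; disposables in context.subscriptions are released automatically
}

A companion command would store the secret once with context.secrets.store('myExt.apiToken', value), so the token never lives in source control.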

Example: Creating an AI Chatbot Integration for Documentation Generation

Let’s walk through creating a custom VS Code extension that integrates with an AI Chatbot to generate documentation for selected code. This example assumes you have access to an AI Chatbot API (like OpenAI’s GPT models). You’ll need an API key. Remember to handle your API key securely and do not commit it to your repository.

1. Scaffold the Extension:

First, install the Yeoman generator for VS Code extensions (if it isn’t already installed) and use it to scaffold a new extension project:

npm install -g yo generator-code
yo code

2. Modify the Extension Code:

Open the generated src/extension.ts file and add the following code to create a command that sends selected code to the AI Chatbot and displays the generated documentation:

import * as vscode from 'vscode';
import axios from 'axios'; // add to the project with: npm install axios

export function activate(context: vscode.ExtensionContext) {
  const disposable = vscode.commands.registerCommand('extension.generateDocs', async () => {
    const editor = vscode.window.activeTextEditor;
    if (!editor || editor.selection.isEmpty) {
      vscode.window.showWarningMessage('Select some code before running "Generate Documentation".');
      return;
    }

    const selectedText = editor.document.getText(editor.selection);

    // For a real extension, store the key with VS Code's SecretStorage (context.secrets)
    // instead of hard-coding it. 'YOUR_API_KEY' is a placeholder.
    const apiKey = 'YOUR_API_KEY';
    // Chat Completions endpoint; adjust the URL and model for your chatbot provider.
    const apiUrl = 'https://api.openai.com/v1/chat/completions';

    try {
      const response = await axios.post(
        apiUrl,
        {
          model: 'gpt-4o-mini', // placeholder model name
          messages: [
            {
              role: 'user',
              content: `Generate documentation for the following code:\n\n${selectedText}`,
            },
          ],
          max_tokens: 300,
          temperature: 0.5,
        },
        {
          headers: {
            'Content-Type': 'application/json',
            Authorization: `Bearer ${apiKey}`,
          },
        }
      );

      const generatedDocs = response.data.choices[0].message.content;
      vscode.window.showInformationMessage('Generated Documentation:\n' + generatedDocs);
    } catch (error: any) {
      vscode.window.showErrorMessage('Error generating documentation: ' + (error?.message ?? error));
    }
  });

  context.subscriptions.push(disposable);
}

export function deactivate() {}

3. Update package.json:

Add the following to the contributes section of your package.json file. The commands entry registers the command, and the menus entry adds it to the editor’s right-click context menu whenever text is selected:

"contributes": {
    "commands": [
        {
            "command": "extension.generateDocs",
            "title": "Generate Documentation"
        }
    ],
    "menus": {
        "editor/context": [
            {
                "command": "extension.generateDocs",
                "when": "editorHasSelection"
            }
        ]
    }
}

4. Run and Test the Extension:

Press F5 to open a new VS Code window with your extension loaded. Highlight some code, right-click, and select “Generate Documentation” to see the AI-generated documentation.

Packaging and Distributing Your Custom Extension

Once you’ve developed and tested your custom VS Code extension, you’ll likely want to share it with your team or the wider community. Here’s how to package and distribute your extension, including options for local and private distribution:

1. Package the Extension:

VS Code uses the vsce (Visual Studio Code Extensions) tool, now published on npm as @vscode/vsce, to package extensions. If you don’t have it installed globally, install it using npm:

npm install -g @vscode/vsce

Navigate to your extension’s root directory and run the following command to package your extension:

vsce package

This will create a .vsix file, which is the packaged extension.

2. Publish to the Visual Studio Code Marketplace:

To publish your extension to the Visual Studio Code Marketplace, you’ll need to create a publisher account and obtain a Personal Access Token (PAT). Follow the instructions on the Visual Studio Code Marketplace to set up your publisher account and generate a PAT.

Once you have your PAT, run the following command to publish your extension:

vsce publish

You’ll be prompted to enter your publisher name and PAT. After successful authentication, your extension will be published to the marketplace.

3. Share Privately:

If you prefer to share your extension privately within your organization, you can distribute the .vsix file directly to your team members. They can install the extension by running the following command in VS Code:

code --install-extension your-extension.vsix

Alternatively, you can set up a private extension registry using tools like Azure DevOps Artifacts or npm Enterprise to manage and distribute your custom extensions securely.

Conclusion

Visual Studio Code extensions are a powerful tool for enhancing the capabilities of your development environment and improving your team’s productivity, code quality, and overall efficiency. By carefully selecting, managing, and securing your extensions, you can create a tailored IDE that meets your specific needs and helps your team deliver high-quality software on time and within budget. Whether you’re using existing extensions from the marketplace or creating your own custom solutions, the possibilities are endless. Embrace the power of VS Code extensions and unlock the full potential of your development team.

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

Migration of DNS Hosted Zones in AWS https://blogs.perficient.com/2024/12/31/migration-of-dns-hosted-zones-in-aws/ https://blogs.perficient.com/2024/12/31/migration-of-dns-hosted-zones-in-aws/#respond Tue, 31 Dec 2024 08:00:47 +0000 https://blogs.perficient.com/?p=374245

Transferring Route 53 hosted zone records between AWS accounts using the CLI involves exporting the records from one account and then importing them to another. Here’s a step-by-step guide:

Migration of DNS Hosted Zones in AWS

The primary objective of this process is to migrate Route 53 hosted zone records seamlessly between AWS accounts while ensuring minimal disruption to DNS functionality. This involves securely transferring DNS records, preserving their integrity, maintaining availability, and ensuring linked AWS resources remain accessible. Additionally, cross-account DNS access may be implemented as needed to meet business requirements.

Img1

Objectives:

  • Migration Process Overview
  • Prerequisites
  • Configuration Overview
  • Best Practices
  • Conclusion

 

Prerequisites:

  • Account Permissions: Ensure you have AmazonRoute53FullAccess permissions in both source and destination accounts. For domain transfers, additional permissions (TransferDomains, DisableDomainTransferLock, etc.) are required.
  • Export Tooling: Use the AWS CLI or SDK for listing and exporting DNS records, as Route 53 does not have a built-in export feature.
  • Destination Hosted Zone: Create a hosted zone in the destination account with the same domain name as the original. Note the new hosted zone ID for use in subsequent steps.
  • AWS Resource Dependencies: Identify resources tied to DNS records (such as EC2 instances or ELBs) and ensure these are accessible or re-created in the destination account if needed.

 

Configuration Overview:

1. Create an EC2 Instance and Download cli53 Using the Commands Below:

  • cli53 will be used to export the DNS records from the source account to a zone file. Download the binary on the instance:

wget https://github.com/barnybug/cli53/releases/download/0.8.16/cli53-linux-amd64

Note: A local Linux machine can be used instead of EC2, but it also needs the cli53 binary and AWS credentials configured.

 

  • Move the cli53 binary to a directory on the PATH (for example, /usr/local/bin) and make it executable

Img2
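The exact commands depend on your setup, but a typical sequence for the binary downloaded above looks like this:

sudo mv cli53-linux-amd64 /usr/local/bin/cli53
sudo chmod +x /usr/local/bin/cli53
cli53 --version   # confirm the binary is on the PATH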

2. Create Hosted Zone in Destination Account:

  • In the destination account, create a new hosted zone with the same domain name using the CLI or the console:
    • Take note of the new hosted zone ID.
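If you prefer the CLI, a hosted zone can be created in the destination account with a command along the following lines (example.com is a placeholder domain, and the caller reference only needs to be a unique string):

aws route53 create-hosted-zone --name example.com --caller-reference migration-$(date +%s)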

3. Export DNS Records from Existing Hosted Zone:

  • Export the records using cli53 on the EC2 instance with the command below, then remove the NS and SOA records from the exported file, as the new hosted zone generates its own by default.

Img3
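Assuming the instance is configured with the source account’s credentials, the export looks like this (example.com is a placeholder for your domain):

cli53 export --full example.com > example.com.txt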

Note: microsoft.com is used here only as a dummy hosted zone for this walkthrough.

4. Import DNS Records to Destination Hosted Zone:

  • Use the exported zone file to import the records into the new hosted zone; copy all of the records from the domain.com.txt file.

Img4

  • Log in to the Route 53 console of the destination AWS account and import the records copied from the exported file (see the screenshot below).
  • Save the changes and verify that the records were created.

Img5
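If you would rather not paste records by hand, the same zone file can be imported with cli53 once the CLI is using the destination account’s credentials and the new hosted zone exists there (the domain and file names below are placeholders):

cli53 import --file example.com.txt example.com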

5. Test DNS Records:

  • Verify DNS record functionality by querying records in the new hosted zone and ensuring that all services resolve correctly.
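For example, you can query the new hosted zone’s name servers directly (the name server and record names below are placeholders taken from the new zone’s NS record set):

dig @ns-123.awsdns-45.com www.example.com A +short
nslookup www.example.com ns-123.awsdns-45.com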

 

Best practices:

When migrating Route 53 hosted zones between AWS accounts, applying best practices helps ensure a smooth transition with minimal disruption. Here are key best practices for a successful Route 53 hosted zone migration:

1. Plan and Document the Migration Process

  • Detailed Planning: Outline each step of the migration process, including DNS record export, transfer, and import, as well as any required changes in the destination account.
  • Documentation: Document all DNS records, configurations, and dependencies before starting the migration. This helps in troubleshooting and serves as a backup.

2. Schedule Migration During Low-Traffic Periods

  • Reduce Impact: Perform the migration during off-peak hours to minimize potential disruption, especially if you need to update NS records or other critical DNS configurations.

3. Test in a Staging Environment

  • Dry Run: Before migrating a production hosted zone, perform a test migration in a staging environment. This helps identify potential issues and ensures that your migration plan is sound.
  • Verify Configurations: Ensure that the DNS records resolve correctly and that applications dependent on these records function as expected.

4. Use Route 53 Resolver for Multi-Account Setups

  • Centralized DNS Management: For environments with multiple AWS accounts, consider using Route 53 Resolver endpoints and sharing resolver rules through AWS Resource Access Manager (RAM). This enables efficient cross-account DNS resolution without duplicating hosted zones across accounts.

5. Avoid Overwriting NS and SOA Records

  • Use Default NS and SOA: Route 53 automatically creates NS and SOA records when you create a hosted zone. Retain these default records in the destination account, as they are linked to the new hosted zone’s configuration and AWS infrastructure.

6. Update Resource Permissions and Dependencies

  • Resource Links: DNS records may point to AWS resources like load balancers or S3 buckets. Ensure that these resources are accessible from the new account and adjust permissions if necessary.
  • Cross-Account Access: If resources remain in the source account, establish cross-account permissions to ensure continued access.

7. Validate DNS Records Post-Migration

  • DNS Resolution Testing: Test the new hosted zone’s DNS records using tools like dig or nslookup to confirm they are resolving correctly. Check application connectivity to confirm that all dependent services are operational.
  • TTL Considerations: Set a low TTL (Time to Live) on records before migration. This speeds up DNS propagation once the migration is complete, reducing the time it takes for changes to propagate.
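As an illustration, a record’s TTL can be lowered ahead of the cutover with an UPSERT change batch; the hosted zone ID, record name, and IP address below are placeholders:

aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABC \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "203.0.113.10" }]
      }
    }]
  }'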

8. Consider Security and Access Control

  • Secure Access: Ensure that only authorized personnel have access to modify hosted zones during the migration.

9. Establish a Rollback Plan

  • Rollback Strategy: Plan for a rollback if any issues arise. Keep the original hosted zone active until the new configuration is fully tested and validated.
  • Backup Data: Maintain a backup of all records and configurations so you can revert to the original settings if needed.

Conclusion

Migrating a Route 53 hosted zone between AWS accounts involves careful planning, especially to ensure DNS records are exported and imported correctly. After migrating, testing is crucial to confirm that DNS resolution works as expected. Cross-account setups may require additional configuration, such as Route 53 Resolver rules, to ensure seamless DNS functionality across environments.

Unit Testing in Android Apps: A Deep Dive into MVVM https://blogs.perficient.com/2024/11/26/unit-testing-in-android-apps-a-deep-dive-into-mvvm/ https://blogs.perficient.com/2024/11/26/unit-testing-in-android-apps-a-deep-dive-into-mvvm/#respond Tue, 26 Nov 2024 19:56:40 +0000 https://blogs.perficient.com/?p=372567

Understanding Unit Testing

Unit testing is a crucial aspect of software development, especially in complex applications like Android apps. It involves testing individual units of code, such as methods or classes, in isolation. This ensures the correctness of each component, leading to a more robust and reliable application.

Why Unit Testing in MVVM?

The Model-View-ViewModel (MVVM) architectural pattern is widely adopted in Android app development. It separates the application into three distinct layers:

  • Model: Handles data logic and interacts with data sources.
  • View: Responsible for the UI and user interactions.
  • ViewModel: Acts as a bridge between the View and Model, providing data and handling UI logic.

Unit testing each layer in an MVVM architecture offers numerous benefits:

  • Early Bug Detection: Identify and fix issues before they propagate to other parts of the app.
  • Improved Code Quality: Write cleaner, more concise, and maintainable code.
  • Accelerated Development: Refactor code and add new features with confidence.
  • Enhanced Collaboration: Maintain consistent code quality across the team.

Setting Up the Environment

  1. Android Studio: Ensure you have the latest version installed.
  2. Testing Framework: Add the necessary testing framework to your app/build.gradle file:

    testImplementation 'junit:junit:4.13.2'
    androidTestImplementation 'androidx.test.ext:junit:1.1.5'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.5.1'
  3. Testing Library: Consider using a testing library like Mockito or MockK to create mock objects for testing dependencies.
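For example, either library can be added alongside JUnit in app/build.gradle (the version numbers here are indicative only; use the latest stable releases):

    testImplementation 'org.mockito:mockito-core:5.7.0'
    // or, for idiomatic Kotlin mocking
    testImplementation 'io.mockk:mockk:1.13.8'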

Unit Testing ViewModels

  1. Create a Test Class: Create a separate test class for each ViewModel you want to test.
  2. Set Up Test Dependencies: Use dependency injection frameworks like Dagger Hilt or Koin to inject dependencies into your ViewModel. For testing, use mock objects to simulate the behavior of these dependencies.
  3. Write Test Cases: Write comprehensive test cases covering various scenarios:
  • Input Validation: Test how the ViewModel handles invalid input.
  • Data Transformation: Test how the ViewModel transforms data from the Model.
  • UI Updates: Test how the ViewModel updates the UI through LiveData or StateFlow.
  • Error Handling: Test how the ViewModel handles errors and exceptions.

Example:

class MyViewModelTest {

    // Runs LiveData updates synchronously; requires androidx.arch.core:core-testing on the test classpath
    @get:Rule
    val instantTaskExecutorRule = InstantTaskExecutorRule()

    @Test
    fun `should update the UI when data is fetched successfully`() {
        // Arrange: mockRepository and expectedData are placeholders created with your mocking library
        val viewModel = MyViewModel(mockRepository)

        // Act
        viewModel.fetchData()

        // Assert: read the latest UI state emitted by the ViewModel
        val uiState = viewModel.uiState.value
        assertThat(uiState?.isLoading).isFalse()
        assertThat(uiState?.error).isNull()
        assertThat(uiState?.data).isEqualTo(expectedData)
    }
}

Unit Testing Repositories

  1. Create Test Classes: Create separate test classes for each Repository class.
  2. Set Up Test Dependencies: Use dependency injection to inject dependencies into your Repository. For testing, use mock objects to simulate the behavior of data sources like databases or network APIs.
  3. Write Test Cases: Write test cases to cover:
  • Data Fetching: Test how the Repository fetches data from remote or local sources.
  • Data Storage: Test how the Repository stores and retrieves data.
  • Data Manipulation: Test how the Repository processes and transforms data.
  • Error Handling: Test how the Repository handles errors and exceptions.

Example:

class MyRepositoryTest {

    @Test
    fun `should fetch data from remote source successfully`() {
        // Arrange: mock the remote API with Mockito
        val mockApi = mock(MyApi::class.java)
        val repository = MyRepository(mockApi)

        // Act
        repository.fetchData()

        // Assert: the repository should delegate the call to the API exactly once
        verify(mockApi).fetchData()
    }
}

Implementing SonarQube

SonarQube is a powerful tool for code quality and security analysis. Here’s a detailed guide on how to integrate SonarQube with your Android project:

  1. Set Up SonarQube Server:
  • Install SonarQube Server: Download and install the SonarQube server on your machine or a server.
  • Configure SonarQube: Configure the server with database settings, user authentication, and other necessary parameters.
  • Start SonarQube Server: Start the SonarQube server.
  2. Configure SonarQube Scanner:
  • Install SonarQube Scanner: Download and install the SonarQube Scanner.
  • Configure Scanner Properties: Create a sonar-scanner.properties file in your project’s root directory and configure the following properties:

    sonar.host.url=http://localhost:9000
    sonar.login=your_sonar_login
    sonar.password=your_sonar_password
    sonar.projectKey=my-android-project
    sonar.projectName=My Android Project
    sonar.sources=src/main/java
    sonar.java.binaries=build/intermediates/javac/release/classes
  3. Integrate SonarQube with Your Build Process:
  • Gradle: Add the SonarQube Gradle plugin to your build.gradle file:

    plugins {
        id 'org.sonarqube' version '3.3'
    }

    Configure the plugin with your SonarQube server URL and authentication token.

  • Maven: Add the SonarQube Maven plugin to your pom.xml file. Configure the plugin with your SonarQube server URL and authentication token.
  4. Run SonarQube Analysis:
  • Execute the SonarQube analysis using the SonarQube Scanner, either manually or as part of your CI/CD pipeline; a sample Gradle invocation is shown after this list.
  5. Analyze the Results:
  • Once the analysis is complete, you can view the results on the SonarQube dashboard. The dashboard provides insights into code quality, security vulnerabilities, and potential improvements.
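For a Gradle-based Android project using the plugin from step 3, the analysis in step 4 is typically started from the project root like this (the host URL and token are placeholders):

./gradlew sonarqube -Dsonar.host.url=http://localhost:9000 -Dsonar.login=<your_token>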

Implementing Test Coverage with Bitrise

Test coverage measures the percentage of your code that is covered by tests. It’s a crucial metric to assess the quality of your test suite. Here’s how to measure test coverage with Bitrise:

  1. Configure Code Coverage Tool: Add a code coverage tool like JaCoCo to your project and configure it to generate coverage reports in a suitable format (e.g., XML); a sample Gradle configuration is shown after this list.
  2. Add Code Coverage Step to Bitrise Workflow: Add a step to your Bitrise Workflow to generate the code coverage report. This step should execute your tests and generate the report.
  3. Upload Coverage Report to SonarQube: Add a step to upload the generated code coverage report to SonarQube. This will allow SonarQube to analyze the report and display the coverage metrics.
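As a rough sketch of step 1, a module-level build.gradle can apply JaCoCo and expose an XML report roughly like this (task names, class paths, and the exact AGP/Gradle syntax vary by version, so treat it as a starting point rather than a drop-in configuration):

apply plugin: 'jacoco'

android {
    buildTypes {
        debug {
            testCoverageEnabled true   // instrument the debug build for coverage
        }
    }
}

// Aggregates unit-test coverage into an XML report that SonarQube can consume
tasks.register('jacocoTestReport', JacocoReport) {
    dependsOn 'testDebugUnitTest'
    reports {
        xml.required = true
        html.required = true
    }
    classDirectories.setFrom(fileTree(dir: "$buildDir/tmp/kotlin-classes/debug"))
    sourceDirectories.setFrom(files("src/main/java", "src/main/kotlin"))
    executionData.setFrom(fileTree(dir: buildDir, includes: ['**/*.exec', '**/*.ec']))
}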

Best Practices for Unit Testing

  • Write Clear and Concise Tests: Use descriptive names for test methods and variables.
  • Test Edge Cases: Consider testing scenarios with invalid input, empty data, or network errors.
  • Use a Testing Framework: A testing framework like JUnit provides a structured way to write and run tests.
  • Leverage Mocking: Use mocking frameworks like Mockito or MockK to isolate units of code and control their behavior.
  • Automate Testing: Integrate unit tests into your CI/CD pipeline to ensure code quality.
  • Review and Refactor Tests: Regularly review and refactor your tests to keep them up-to-date and maintainable.

By following these guidelines and incorporating unit testing into your development process, you can significantly improve the quality and reliability of your Android apps.

Fixing an XM Cloud Deployment Failure https://blogs.perficient.com/2024/06/14/fixing-an-xm-cloud-deployment-failure/ https://blogs.perficient.com/2024/06/14/fixing-an-xm-cloud-deployment-failure/#respond Fri, 14 Jun 2024 18:49:22 +0000 https://blogs.perficient.com/?p=364321

Intro 📖

Last week, I noticed that deployments to Sitecore XM Cloud were failing on one of my projects. In this blog post, I’ll review the troubleshooting steps I went through and what the issue turned out to be. To provide a bit more context on the DevOps setup for this particular project, an Azure DevOps pipeline runs a script. That script uses the Sitecore CLI and the Sitecore XM Cloud plugin’s cloud deployment command to deploy to XM Cloud. The last successful deployment was just a few days prior and there hadn’t been many code changes since. Initially, I was pretty stumped but, hey, what can you do except start from the top…

Troubleshooting 👷‍♂️

  1. Anyone that has worked with cloud-based SaaS services knows that transient faults are a thing–and XM Cloud is no exception. The first thing I tried was to simply rerun the failed stage in our pipeline to see if this was “just a hiccup.” Alas, several subsequent deployment attempts failed with the same error. Okay, fine, this wasn’t a transient issue 😞.
  2. Looking at the logs in the XM Cloud Deploy interface, the build stage was consistently failing. Drilling into the logs, there were several compilation errors citing missing Sitecore assemblies. For example: error CS0246: The type or namespace name ‘Item’ could not be found (are you missing a using directive or an assembly reference?). This suggested an issue with either the NuGet restore or with compilation more broadly.
  3. Rerunning failed stages in an Azure DevOps pipeline uses the same commit that was associated with the first run–the latest code from the branch isn’t pulled on each rerun attempt. This meant that the code used for the last successful deployment was the same code used for the subsequent attempts. In other words, this probably wasn’t a code issue (famous last words, right 😅?).
  4. Just to be sure, I diffed several recent commits on our development branch and, yeah, there weren’t any changes that could have broken compilation since the last successful deployment.
  5. To continue the sanity checks, I pulled down the specific commit locally and verified that I could:
    1. Restore NuGet packages, via both the UI and console
    2. Build/rebuild the Visual Studio solution
  6. After revisiting and diffing the XM Cloud Deploy logs, I noticed that the version of msbuild had changed between the last successful deployment and the more recent failed deployments. I downloaded the same, newer version of msbuild and verified, once again, that I could restore NuGet packages and build the solution.
  7. Finally, I confirmed that the validation build configured for the development branch (via branch policies in Azure DevOps) was running as expected and successfully building the solution each time a new pull request was created.

At this point, while I continued to analyze the deployment logs, I opened a Sitecore support ticket to have them weigh in 🙋‍♂️. I provided support with the last known working build logs, the latest failed build logs, and the list of my troubleshooting steps up to that point.

The Fix 🩹

After hearing back from Sitecore support, it turned out that Sitecore had recently made a change to how the buildTargets property in the xmcloud.build.json file was consumed and used as part of deployments. To quote the support engineer:

There were some changes in the build process, and now the build targets are loaded from the “buildTargets ” list. The previous working builds were using the “.sln” file directly.
It looks like that resulted in the build not working properly for some projects.

The suggested fix was to specifically target the Visual Studio solution file to ensure that the XM Cloud Deployment NuGet package restore and compilation worked as expected. My interpretation of the change was “XM Cloud Deploy used to not care about/respect buildTargets…but now it does.”

After creating a pull request to change the buildTargets property from this (targeting the specific, top-level project):

{
  ...
  "buildTargets": [
    "./src/platform/Project/Foo.Project.Platform/Platform.csproj"
  ]
  ...
}

…to this (targeting the solution):

{
  ...
  "buildTargets": [
    "./Foo.sln"
  ]
  ...
}

…the next deployment to XM Cloud (via CI/CD) worked as expected. ✅🎉

After asking the Sitecore support engineer where this change was documented, they graciously escalated internally and posted a new event to the Sitecore Status Page to acknowledge the change/issue: Deployment is failing to build.

If you’re noticing that your XM Cloud deployments are failing on the build step while compiling your Visual Studio solution, make sure you’re targeting the solution file (.sln) and not a specific project file (.csproj) in the buildTargets property in the xmcloud.build.json file…because it matters now, apparently 😉.

Thanks for the read! 🙏

GitHub – On-Prem Server Connectivity Using Self-Hosted Runners https://blogs.perficient.com/2024/06/05/github-on-prem-server-connectivity-using-self-hosted-runners/ https://blogs.perficient.com/2024/06/05/github-on-prem-server-connectivity-using-self-hosted-runners/#respond Wed, 05 Jun 2024 06:09:03 +0000 https://blogs.perficient.com/?p=363835

Various deployment methods, including cloud-based (e.g., CloudHub) and on-premises, are available to meet diverse infrastructure needs. GitHub, among other tools, supports versioning and code backup, while CI/CD practices automate integration and deployment processes, enhancing code quality and speeding up software delivery.

GitHub Actions, an automation platform by GitHub, streamlines building, testing, and deploying software workflows directly from repositories. Although commonly associated with cloud deployments, GitHub Actions can be adapted for on-premises setups with self-hosted runners. These runners serve as the execution environment, enabling deployment tasks on local infrastructure.

Configuring self-hosted runners allows customization of GitHub Actions workflows for on-premises deployment needs. Workflows can automate tasks like Docker image building, artifact pushing to private registries, and application deployment to local servers.

Leveraging GitHub Actions for on-premises deployment combines the benefits of automation, version control, and collaboration with control over infrastructure and deployment processes.

What is a Runner?

A runner refers to the machine responsible for executing tasks within a GitHub Actions workflow. It performs various tasks defined in the action script, like cloning the code directory, building the code, testing the code, and installing various tools and software required to run the GitHub action workflow.

There are 2 Primary Types of Runners:

  1. GitHub Hosted Runners:

    These are virtual machines provided by GitHub to run workflows. Each machine comes pre-configured with the environment, tools, and settings required for GitHub Actions. GitHub-hosted runners support various operating systems, such as Ubuntu Linux, Windows, and macOS.

  2. Self-Hosted Runners:

    A self-hosted runner is a system deployed and managed by the user to execute GitHub Actions jobs. Compared to GitHub-hosted runners, self-hosted runners offer more flexibility and control over hardware, operating systems, and software tools. Users can customize hardware configurations, install software from their local network, and choose operating systems not provided by GitHub-hosted runners. Self-hosted runners can be physical machines, virtual machines, containers, on-premises servers, or cloud-based instances.

Why Do We Need a Self-hosted Runner?

Self-hosted runners play a crucial role in deploying applications on the on-prem server using GitHub Action Scripts and establishing connectivity with an on-prem server. These runners can be created at different management levels within GitHub: repository, organization, and enterprise.

By leveraging self-hosted runners for deployment, organizations can optimize control, customization, performance, and cost-effectiveness while meeting compliance requirements and integrating seamlessly with existing infrastructure and tools. Here are a few advantages of self-hosted runners:

  1. Control and Security:

    Self-hosted runners allow organizations to maintain control over their infrastructure and deployment environment. This includes implementing specific security measures tailored to the organization’s requirements, such as firewall rules and access controls.

  2. Customization:

    With self-hosted runners, you have the flexibility to customize the hardware and software environment to match your specific needs. This can include installing specific libraries, tools, or dependencies required for your applications or services.

  3. Performance:

    Self-hosted runners can offer improved performance compared to cloud-based alternatives, especially for deployments that require high computational resources or low-latency connections to local resources.

  4. Cost Management:

    While cloud-based solutions often incur ongoing costs based on usage and resource consumption, self-hosted runners can provide cost savings by utilizing existing infrastructure without incurring additional cloud service charges.

  5. Compliance:

    For organizations operating in regulated industries or regions with strict compliance requirements, self-hosted runners offer greater control and visibility over where code is executed and how data is handled, facilitating compliance efforts.

  6. Offline Deployment:

    In environments where internet connectivity is limited or unreliable, self-hosted runners enable deployment workflows to continue functioning without dependency on external cloud services or repositories.

  7. Scalability:

    Self-hosted runners can be scaled up or down according to demand, allowing organizations to adjust resource allocation based on workload fluctuations or project requirements.

  8. Integration with Existing Tools:

    Self-hosted runners seamlessly integrate with existing on-premises tools and infrastructure, facilitating smoother adoption and interoperability within the organization’s ecosystem.

Getting Started With a Self-hosted Runner

Follow the steps below to create and utilize a self-hosted runner.

Repository Level Self-hosted Runner:

  1. Log in to your GitHub account and navigate to the desired repository.
  2. Go to the repository’s settings tab and select the “Runners” menu under the “Actions” menu.
    Step 1
  3. Click on the “New self-hosted runner” button to initiate the creation process.
    Image 2
  4. Based on your system requirements, choose the appropriate runner image. For instance, if your self-hosted runner will run on Windows, select the Windows runner image.
    Image 3
  5. Open Windows PowerShell and execute the following command to create the actions-runner folder:
    mkdir actions-runner; cd actions-runner
  6. Download the latest runner package by running the following command:
    Invoke-WebRequest -Uri https://github.com/actions/runner/releases/download/v2.316.0/actions-runner-win-x64-2.316.0.zip -OutFile actions-runner-win-x64-2.316.0.zip
  7. Extract the downloaded package and configure the self-hosted runner according to your deployment needs.
    Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory("$PWD/actions-runner-win-x64-2.316.0.zip", "$PWD")
  8. Configure the runner using the below command. Replace the placeholder with actual values.
    ./config.cmd --url https://github.com/<owner>/<repo_name> --token <token>
  9. To run the runner, use the below command.
    ./run.cmd
  10. Now your self-hosted runner is ready to use in your GitHub Actions workflows; a complete example workflow is shown after these steps.
    # Use below YAML code snippet in your workflow file for each job
    runs-on: self-hosted

    Image 4
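For context, here is a minimal, hypothetical workflow file (.github/workflows/deploy.yml; the branch name, labels, and deploy script are placeholders) showing how a job targets the runner registered above:

name: Deploy to on-prem server
on:
  push:
    branches: [ main ]

jobs:
  deploy:
    # Route the job to your self-hosted runner; extra labels (e.g. [self-hosted, windows, x64]) narrow the match
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - name: Build and deploy
        run: ./deploy.ps1   # placeholder script that builds and deploys on the on-prem machine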

Organization Level and Enterprise Level Self-hosted Runners:

The process for creating organization-level and enterprise-level self-hosted runners follows similar steps; however, runners created at these levels can serve multiple repositories or organizations within the account. The setup process generally involves administrative permissions and configuration at a broader level.

By following these steps, you can set up self-hosted runners to enable connectivity between your on-prem server and GitHub Action Scripts, facilitating on-prem deployments seamlessly.

Perficient Achieves AWS DevOps Competency https://blogs.perficient.com/2024/06/04/perficient-achieves-aws-devops-competency/ https://blogs.perficient.com/2024/06/04/perficient-achieves-aws-devops-competency/#respond Tue, 04 Jun 2024 18:48:31 +0000 https://blogs.perficient.com/?p=363795

Perficient is excited to announce our achievement in Amazon Web Services (AWS) DevOps Competency for AWS partners. This designation recognizes Perficient as an AWS partner that has demonstrated expertise in delivering DevSecOps solutions. This competency highlights Perficient’s ability to drive innovation, meet business objectives, and get the most out of your AWS services. 

What does this mean for Perficient? 

Achieving the AWS DevOps Competency status differentiates Perficient as an AWS Partner Network (APN) member that provides modern product engineering solutions designed to help enterprises adopt, develop, and deploy complex projects faster on AWS. To receive the designation, APN members must possess deep AWS expertise and deliver solutions seamlessly on AWS. 

This competency empowers our delivery teams to break down traditional silos, shorten feedback loops, and respond more effectively to changes, ultimately increasing speed to market by up to 75%.  

What does this mean for you? 

With our partnership with AWS, we can modernize our clients’ processes to improve product quality, scalability, and performance, and significantly reduce release costs by up to 97%. This achievement ensures that our CI/CD processes and IT governance are sustainable and efficient, benefiting organizations of any size.  

At Perficient, we strive to be the place where great minds and great companies converge to boldly advance business, and this achievement is a testament to that vision!  

https://blogs.perficient.com/2024/06/04/perficient-achieves-aws-devops-competency/feed/ 0 363795