Transform Your Data Workflow: Custom Code for Efficient Batch Processing in Talend-Part 2
https://blogs.perficient.com/2025/10/03/transform-your-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-2/

Introduction:

Custom code in Talend offers a powerful way to improve batch processing efficiency by allowing developers to implement specialized logic that is not available through Talend's standard components. This can involve data transformations, use-case-specific custom code, and integration with flat files tailored to specific project needs. By leveraging custom code, users can optimize performance, improve data quality, and streamline complex batch workflows within their Talend jobs.

Talend Components:

Key components for batch processing are listed below:

  • tDBConnection: Establishes and manages a database connection within a job, allowing a single connection to be configured once and reused throughout the Talend job.
  • tFileInputDelimited: For reading data from flat files.
  • tFileRowCount: Reads a file row by row to calculate the number of rows.
  • tLoop: Executes a task automatically, based on a loop size.
  • tHashInput, tHashOutput: For high-speed data transfer and processing within a job. tHashOutput writes data to cache memory, while tHashInput reads from that cached data.
  • tFilterRow: For filtering rows from a dataset based on specified conditions.
  • tMap: Maps input data to output data and enables data filtering, complex data manipulation, typecasting, and joins across multiple input sources.
  • tJavaRow: Can be used as an intermediate component, giving access to the input flow so the data can be transformed using custom Java code.
  • tJava: Has no input or output data flow and can be used independently to integrate custom Java code.
  • tPreJob, tPostJob: tPreJob runs before the main job flow starts, and tPostJob runs after it ends.
  • tDBOutput: Supports a wide range of databases and is used to write data to them.
  • tDBCommit: Commits and verifies the changes applied to the connected database during the Talend job, guaranteeing that data modifications are permanently recorded.
  • tDBClose: Explicitly closes a database connection that was opened by a tDBConnection component.
  • tLogCatcher: Used for error handling within a Talend job by adding runtime logging information. It catches all the exceptions and warnings raised by tWarn and tDie components during job execution.
  • tLogRow: Used in error handling to display data or keep track of processed data in the run console.
  • tDie: Stops the job execution explicitly on failure and allows a customized warning message and exit code.

Workflow with example:

To process bulk data in Talend, we can implement batch processing so that flat file data is handled with minimal execution time. We could read the flat file data and insert it into a MySQL database table as the target without batch processing, but that data flow takes considerably longer to execute. If we instead use batch processing with custom code, the entire source file is written to the MySQL database table in batches of records, with minimal execution time.

Talend Job Design

Solution:

  • Establish the database connection at the start of the execution so that it can be reused.
  • Read the number of rows in the source flat file using the tFileRowCount component.
  • To determine the number of batches, subtract the header count from the total row count and then divide the result by the batch size. Round up to the next whole number, which gives the total number of batches (chunks); a minimal tJava sketch of this calculation follows the illustration below.

    Calculate the batch size from total row count
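
    A minimal tJava sketch of this calculation, assuming the default tFileRowCount_1 component name and illustrative context variables (context.headerCount, context.batchSize) that are not taken from the original job:

    // tJava: derive the number of batches from the source file row count
    int totalRows = (Integer) globalMap.get("tFileRowCount_1_COUNT"); // populated by tFileRowCount
    int dataRows = totalRows - context.headerCount;                   // exclude the header line(s)
    int batchSize = context.batchSize;                                // e.g., 100
    int totalBatches = (int) Math.ceil((double) dataRows / batchSize);
    globalMap.put("totalBatches", totalBatches);                      // later consumed by tLoop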

  • Now use the tFileInputDelimited component to read the source file content. In the tMap component, use the Talend sequence function (e.g., Numeric.sequence("s1", 1, 1)) to generate a rowNo value for each record. Then load all of the data into the tHashOutput component, which stores the data in a cache.
  • Iterate the loop based on the calculated number of batches using tLoop.
  • Retrieve all the data from the tHashInput component.
  • Filter the dataset retrieved from the tHashInput component based on the rowNo column in the schema using tFilterRow.
Filter the dataset using tFilterRow

  • For each iteration, the rowNo range is ((iteration - 1) * batchSize) + 1 through iteration * batchSize. With a batch size of 100, the first iteration covers rows 1 to 100 and the third iteration covers rows 201 to 300: [(3 - 1) * 100] + 1 = 201 and 3 * 100 = 300. A sketch of this calculation follows this list.
  • Finally, extract the dataset whose rowNo falls in that range and write the batch data to the MySQL database table using tDBOutput.
  • The job uses the tLogCatcher component for error management by capturing runtime logging details, including warning and exception messages, and employs tLogRow to display the information in the execution console.
  • Regarding performance tuning, the tMap component maps source data to output data, allows complex data transformations, and offers unique match, first match, and all matches options for looking up data within the tMap component.
  • The temporary data that the tHashInput & tHashOutput components store in cache memory enhances runtime performance.
  • At the end of the job execution, we commit the database modifications and close the connection to release the database resources.
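
A minimal sketch of the per-iteration range calculation, assuming the default tLoop_1 component name and an illustrative context.batchSize variable (neither is taken from the original job). The same boundaries can be referenced in the tFilterRow condition:

// tJava: compute the rowNo window for the current tLoop iteration
int currentIteration = (Integer) globalMap.get("tLoop_1_CURRENT_ITERATION"); // provided by tLoop
int batchSize = context.batchSize;                                           // e.g., 100
int startRow = ((currentIteration - 1) * batchSize) + 1;  // iteration 3 -> 201
int endRow = currentIteration * batchSize;                // iteration 3 -> 300
// Equivalent tFilterRow (advanced mode) condition:
//   input_row.rowNo >= startRow && input_row.rowNo <= endRow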

Advantages of Batch Processing:

  • Batch processing can efficiently handle large datasets.
  • It takes minimal time to process the data, even when data transformations are applied.
  • Grouping records from a large dataset and processing them as a single unit is highly beneficial for performance.
  • Batch processing scales easily to accommodate growing data volumes.
  • It is particularly useful for operations like generating reports, performing data integration, and executing complex transformations on large datasets.

For more details: Get-started-talend-open-studio-data-integration

Note: Efficient Batch Processing in Talend-Part 1

Transform Your Data Workflow: Custom Code for Efficient Batch Processing in Talend-Part 1
https://blogs.perficient.com/2025/10/03/transform-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-1-2/

Introduction:

Custom code in Talend offers a powerful way to improve batch processing efficiency by allowing developers to implement specialized logic that is not available through Talend's standard components. This can involve data transformations, use-case-specific custom code, and integration with flat files tailored to specific project needs. By leveraging custom code, users can optimize performance, improve data quality, and streamline complex batch workflows within their Talend jobs.

Understand Batch Processing:

Batch processing is a method of handling high-volume, repetitive data within Talend jobs. It allows users to process a set of data when computing resources are available, with little or no user interaction.

Through batch processing, users gather and retain data, then process it during a designated period referred to as a "batch window." This method improves efficiency by establishing processing priorities and executing data tasks at the most suitable time.

Here, the Talend job takes the total row count from the source file, loads the data from the flat file, processes it in batches whose size is provided through a context variable, and then writes the data into smaller flat files. This implementation makes it possible to process enormous amounts of data more quickly and reliably than a single-pass approach.

Batch processing executes a series of jobs sequentially without user interaction and is typically used for handling large volumes of data efficiently. Talend, a prominent and widely used ETL (Extract, Transform, Load) tool, leverages batch processing to integrate, transform, and load data into data warehouses and various other target systems.

Talend Components:

Key components for batch processing are listed below:

  • tFileInputDelimited, tFileOutputDelimited: For reading & writing data from/to files.
  • tFileRowCount: Reads file row by row to calculate the number of rows.
  • tLoop: Executes a task automatically, based on a loop size.
  • tHashInput, tHashOutput: For high-speed data transfer and processing within a job. tHashOutput writes data to cache memory, while tHashInput reads from that cached data.
  • tFilterRow: For filtering rows from a dataset based on specified conditions.
  • tMap: Used for data transformation; maps input data to output data and supports data filtering, complex data manipulation, typecasting, and joins across multiple input sources.
  • tJavaRow: Can be used as an intermediate component, giving access to the input flow so the data can be transformed using custom Java code.
  • tJava: Has no input or output data flow and can be used independently to integrate custom Java code.
  • tLogCatcher: It is used in error handling within Talend job for adding runtime logging information. It catches all the exceptions and warnings raised by tWarn and tDie components during Talend job execution.
  • tLogRow: It is employed in error handling to display data or keep track of processed data in the run console.

Workflow with example:

To process bulk data in Talend, we can implement batch processing so that flat file data is handled with minimal execution time. We could read the flat file data and write it into chunks of target flat files without batch processing, but that data flow takes considerably longer to execute. If we instead use batch processing with custom code, the entire source file is written into chunks of files at the target location with minimal execution time.

Talend job design

Solution:

  • Read the number of rows in the source flat file using the tFileRowCount component.
  • To determine the number of batches, subtract the header count from the total row count and then divide the result by the batch size. Round up to the next whole number, which gives the total number of batches (chunks).

    Calculate the batch size from total row count

  • Now use the tFileInputDelimited component to read the source file content. In the tMap component, use the Talend sequence function to generate a rowNo value for each record. Then load all of the data into the tHashOutput component, which stores the data in a cache.
  • Iterate the loop based on the calculated number of batches using tLoop.
  • Retrieve all the data from the tHashInput component.
  • Filter the dataset retrieved from the tHashInput component based on the rowNo column in the schema using tFilterRow.

    Filter the dataset using tFilterRow

  • For each iteration, the rowNo range is ((iteration - 1) * batchSize) + 1 through iteration * batchSize. With a batch size of 100, the first iteration covers rows 1 to 100 and the third iteration covers rows 201 to 300: [(3 - 1) * 100] + 1 = 201 and 3 * 100 = 300.
  • Finally, extract the dataset whose rowNo falls in that range and write it into a separate chunk of the output target file using tFileOutputDelimited; a sketch of a per-chunk file name expression follows this list.
  • The system uses the tLogCatcher component for error management by capturing runtime logging details, including warning or exception messages, and employs tLogRow to display the information in the execution console.
  • Regarding performance tuning, the tMap component maps source data to output data, allows complex data transformations, and offers unique match, first match, and all matches options for looking up data within the tMap component.
  • The temporary data that the tHashInput & tHashOutput components store in cache memory enhances runtime performance.
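
A minimal sketch of a per-chunk file name that could be entered in the tFileOutputDelimited "File Name" field, assuming the default tLoop_1 component name and an illustrative context.outputDir variable (both are assumptions, not taken from the original job):

// tFileOutputDelimited "File Name" expression: one output file per iteration/chunk
context.outputDir + "/output_chunk_"
    + ((Integer) globalMap.get("tLoop_1_CURRENT_ITERATION"))
    + ".csv"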

 

Advantages of Batch Processing:

  • Batch processing can efficiently handle large datasets.
  • It takes minimal time to process the data, even when data transformations are applied.
  • Grouping records from a large dataset and processing them as a single unit is highly beneficial for performance.
  • Batch processing scales easily to accommodate growing data volumes.
  • It is particularly useful for operations like generating reports, performing data integration, and executing complex transformations on large datasets.

For more details: Get-started-talend-open-studio-data-integration

Note: Efficient Batch Processing in Talend-Part 2

AEM and Cloudflare Workers: The Ultimate Duo for Blazing Fast Pages
https://blogs.perficient.com/2025/09/23/aem-and-cloudflare-workers-the-ultimate-duo-for-blazing-fast-pages/

If you’re using Adobe Experience Manager as a Cloud Service (AEMaaCS), you’ve likely wondered what to do with your existing CDN. AEMaaCS includes a fully managed CDN with caching, WAF, and DDoS protection. But it also supports a Bring Your Own CDN model.

This flexibility allows you to layer your CDN in front of Adobe’s, boosting page speed through edge caching.

The Challenge: Static vs. Dynamic Content

Many AEM pages combine static and dynamic components, and delivering both types of content through multiple layers of CDN can become a complex process.

Imagine a page filled with static components and just a few dynamic ones. For performance, the static content should be cached heavily. But dynamic components often require real-time rendering and can’t be cached. Since caching is typically controlled by page path—both in Dispatcher and the CDN—we end up disabling caching for the entire page. This workaround ensures dynamic components work as expected, but it undermines the purpose of caching and fast delivery.

Sling Dynamic Includes Provides a Partial Solution

AEM provides Sling Dynamic Include (SDI) to cache the static portions of a page while marking dynamic components with placeholder tags. When a request comes in, the static and dynamic content are merged and then delivered to the customer.

You can learn more about Sling Dynamic Include on the Adobe Experience League site.

However, SDI relies on the Dispatcher server for processing. This adds load and latency.

Imagine if this process is done on the CDN. This is where Edge Side Includes (ESI) comes into play.

Edge-Side Includes Enters the Chat

ESI does the same thing as SDI, but the placeholder tags (e.g., <esi:include src="..."/>) on the cached pages are resolved at the CDN.

ESI is powerful, but what if you want to do additional custom business logic apart from just fetching the content? That’s where Cloudflare Workers shines.

What is Cloudflare Workers?  

Cloudflare Workers is a serverless platform that executes code on Cloudflare's edge network. Running the code at edge locations closer to the user reduces latency and improves performance because requests do not have to travel all the way to the origin servers.

Learn more about Cloudflare Workers on the Cloudflare Doc site.

ESI + Cloudflare Workers

In the following example, I’ll share how Cloudflare Workers intercepts ESI tags and fetches both original and translated content.

How to Enable ESI in AEM

  1. Enable SDI in AEM Publish: /system/console/configMgr/org.apache.sling.dynamicinclude.Configuration
  2. Add mod_include to your Dispatcher config.
  3. Set no-cache rules for SDI fragments using specific selectors.

Note: Set include-filter.config.include-type to “ESI” to enable Edge Side Includes.

Visit this article for more detailed steps on enabling SDI and configuring the Dispatcher.

Writing the Cloudflare Worker Script

Next, write a custom script that intercepts pages containing ESI tags and makes additional calls to the origin to fetch each fragment, whether original or translated content. In the script below, resolveEsiSrc and appendResponseHeader are helper functions (not shown here) that resolve a fragment src against the origin and copy the origin response headers, respectively.

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // You can update the url and set it to your local AEM url in case of local development
  const url = new URL(request.url);
  const origin = url.origin;

  // You can modify the headers based on your requirements and then create a new request with the new headers
  const originalHeaders = request.headers;
  const newHeaders = new Headers(originalHeaders);
  // Append new headers here

  const aemRequest = new Request(url, {
    headers: newHeaders,
    redirect: 'manual',
  });

  // Get the response from the origin
  try {
    const aemresponse = await fetch(aemRequest);

    // Get the content type
    const contentType = aemresponse.headers.get("Content-Type") || "";

    // If the content type is not "text/html", return the response as usual (or as per requirement);
    // otherwise check whether the content has any "esi:include" tag
    if (!contentType.toLocaleLowerCase().includes("text/html")) {
      return aemresponse;
    }

    // Fetch the HTML response
    const html = await aemresponse.text();

    if (!html.includes("esi:include")) {
      // Content doesn't have an ESI tag; return the response (rebuilt because the body stream was already read)
      return new Response(html, {
        headers: aemresponse.headers,
        status: aemresponse.status,
        statusText: aemresponse.statusText,
      });
    }

    return fetchESIContent(aemresponse, html, origin);
  } catch (err) {
    return new Response("Failed to fetch AEM page: " + err.message, { status: 500 });
  }
}

async function fetchESIContent(originResponse, html, origin) {
  try {
    // RegEx expression to find all the esi:include tags in the page
    const esiRegex = /<esi:include[^>]*\ssrc="([^"]+)"[^>]*\/?>/gi;

    // Fetch all fragments and replace them in the page markup
    const replaced = await replaceAsync(html, esiRegex, async (match, src) => {
      try {
        const absEsiUrl = resolveEsiSrc(src, origin); // helper (defined elsewhere) that resolves the fragment URL
        const fragRes = await fetch(absEsiUrl, { headers: { "Cache-Control": "no-store" } });
        console.log('Fragment response', fragRes.statusText);
        return fragRes.ok ? await fragRes.text() : "Fragment Response didn't return anything";
      } catch (error) {
        console.error("Error in fetching esi fragments: ", error.message);
        return "";
      }
    });

    const headers = appendResponseHeader(originResponse); // helper (defined elsewhere) that copies the origin response headers
    // Add this header to confirm that ESI has been injected successfully
    headers.set("X-ESI-Injected", "true");

    return new Response(replaced, {
      headers,
      statusText: originResponse.statusText,
      status: originResponse.status,
    });
  } catch (err) {
    return new Response("Failed to fetch AEM page: " + err.message, { status: 500 });
  }
}

// Replace every regex match using an async callback (fetches fragment content asynchronously)
async function replaceAsync(str, regex, asyncFn) {
  const parts = [];
  let lastIndex = 0;
  for (const m of str.matchAll(regex)) {
    parts.push(str.slice(lastIndex, m.index));
    parts.push(await asyncFn(...m));
    lastIndex = m.index + m[0].length;
  }
  parts.push(str.slice(lastIndex));
  return parts.join("");
}

Bonus Tip: Local Testing With Miniflare

Want to test Cloudflare Workers locally? Use Miniflare, a simulator for Worker environments.

Check out the official Miniflare documentation.

You Don’t Need to Sacrifice Performance or Functionality

Implementing ESI through Cloudflare Workers is an excellent way to combine aggressive caching with dynamic content rendering—without compromising overall page performance or functionality. 

This approach helps teams deliver faster, smarter experiences at scale. As edge computing continues to evolve, we’re excited to explore even more ways to optimize performance and personalization.

Why Oracle Fusion AI is the Smart Manufacturing Equalizer — and How Perficient Helps You Win
https://blogs.perficient.com/2025/09/11/why-oracle-fusion-ai-is-the-smart-manufacturing-equalizer-and-how-perficient-helps-you-win/

My 30-year technology career has taught me many things…and one big thing: the companies that treat technology as a cost center are the ones that get blindsided. In manufacturing, that blindside is already here — and it’s wearing the name tag “AI.”

For decades, manufacturers have been locked into rigid systems, long upgrade cycles, and siloed data. The result? Operations that run on yesterday’s insights while competitors are making tomorrow’s moves. Sound familiar? It’s the same trap traditional IT outsourcing fell into — and it’s just as deadly in the age of smart manufacturing.

The AI Advantage in Manufacturing

Oracle Fusion AI for Manufacturing Smart Operations isn’t just another software upgrade. It’s a shift from reactive to predictive, from siloed to synchronized. Think:

  • Real-time anomaly detection that flags quality issues before they hit the line.
  • Predictive maintenance that slashes downtime and extends asset life.
  • Intelligent scheduling that adapts to supply chain disruptions in minutes, not weeks.
  • Embedded analytics that turn every operator, planner, and manager into a decision-maker armed with live data.

This isn’t about replacing people — it’s about giving them superpowers. Read more from Oracle here.

Proof in Action: Roeslein & Associates

If you want to see what this looks like in the wild, look at Roeslein & Associates. They were running on disparate, outdated legacy systems — the kind that make global process consistency a pipe dream. Perficient stepped in and implemented Oracle Fusion Cloud Manufacturing with Project Driven Supply Chain, plus full Financial and Supply Chain Management suites. The result?

  • A global solution template that can be rolled out anywhere in the business.
  • A redesigned enterprise structure to track profits across business units.
  • Standardized manufacturing processes that still flex for highly customized demand.
  • Integrated aftermarket parts ordering and manufacturing flows.
  • Seamless connections between Fusion, labor capture systems, and eCommerce.

That’s not just “going live” — that’s rewiring the operational nervous system for speed, visibility, and scale.

Why Standing Still is Riskier Than Moving Fast

In my words, “true innovation is darn near impossible” when you’re chained to legacy thinking. The same applies here: if your manufacturing ops are running on static ERP data and manual interventions, you’re already losing ground to AI‑driven competitors who can pivot in real time.

Oracle Fusion Cloud with embedded AI is the equalizer. A mid‑sized manufacturer with the right AI tools can outmaneuver industry giants still stuck in quarterly planning cycles.

Where Perficient Comes In

Perficient’s Oracle team doesn’t just implement software — they architect transformation. With deep expertise in Oracle Manufacturing Cloud, Supply Chain Management, and embedded Fusion AI solutions, they help you:

  • Integrate AI into existing workflows without blowing up your operations.
  • Optimize supply chain visibility from raw materials to customer delivery.
  • Leverage IoT and machine learning for continuous process improvement.
  • Scale securely in the cloud while keeping compliance and governance in check.

They’ve done it for global manufacturers, and they can do it for you — faster than you think.

The Call to Action

If you believe your manufacturing operations are immune to disruption, history says otherwise. The companies that win will be the ones that treat AI not as a pilot project, but as the new operating system for their business.

Rather than letting new entrants disrupt your position, take initiative and lead the charge—make them play catch-up.

Why It’s Time to Move from SharePoint On-Premises to SharePoint Online
https://blogs.perficient.com/2025/09/09/why-its-time-to-move-from-sharepoint-on-premises-to-sharepoint-online/

In today’s fast-paced digital workplace, agility, scalability, and collaboration aren’t just nice to have—they’re business-critical. If your organization is still on Microsoft SharePoint On-Premises, now is the time to make the move to SharePoint Online. Here’s why this isn’t just a technology upgrade—it’s a strategic leap forward.

1. Work Anywhere, Without Barriers

SharePoint Online empowers your workforce with secure access to content from virtually anywhere. Whether your team is remote, hybrid, or on the go, they can collaborate in real time without being tethered to a corporate network or VPN.

2. Always Up to Date

Forget about manual patching and version upgrades. SharePoint Online is part of Microsoft 365, which means you automatically receive the latest features, security updates, and performance improvements—without the overhead of managing infrastructure.

3. Reduce Costs and Complexity

Maintaining on-premises servers is expensive and resource-intensive. By moving to SharePoint Online, you eliminate hardware costs, reduce IT overhead, and streamline operations. Plus, Microsoft handles the backend, so your team can focus on innovation instead of maintenance.

4. Enterprise-Grade Security and Compliance

Microsoft invests heavily in security, offering built-in compliance tools, data loss prevention, and advanced threat protection. SharePoint Online is designed to meet global standards and industry regulations, giving you peace of mind that your data is safe.

5. Seamless Integration with Microsoft 365

SharePoint Online integrates effortlessly with Microsoft Teams, OneDrive, Power Automate, and Power BI—enabling smarter workflows, better insights, and more connected experiences across your organization.

6. Scalability for the Future

Whether you’re a small business or a global enterprise, SharePoint Online scales with your needs. You can easily add users, expand storage, and adapt to changing business demands without worrying about infrastructure limitations.

Why Perficient for Your SharePoint Online Migration 

Migrating to SharePoint Online is more than a move to the cloud—it’s a chance to transform how your business works. At Perficient, we help you turn common migration challenges into measurable wins:

  • 35% boost in collaboration efficiency
  • Up to 60% cost savings per user
  • 73% reduction in data breach risk
  • 100+ IT hours saved each month

Our Microsoft 365 Modernization solutions don’t just migrate content—they build a secure, AI-ready foundation. From app modernization and AI-powered search to Microsoft Copilot integration, Perficient positions your organization for the future.
Automating Azure Key Vault Secret and Certificate Expiry Monitoring with Azure Function App
https://blogs.perficient.com/2025/08/26/azure-keyvault-monitoring-automation/

How to monitor hundreds of Key Vaults across multiple subscriptions for just $15-25/month

The Challenge: Key Vault Sprawl in Enterprise Azure

If you’re managing Azure at enterprise scale, you’ve likely encountered this scenario: Key Vaults scattered across dozens of subscriptions, hundreds of certificates and secrets with different expiry dates, and the constant fear of unexpected outages due to expired certificates. Manual monitoring simply doesn’t scale when you’re dealing with:

  • Multiple Azure subscriptions (often 10-50+ in large organizations)
  • Hundreds of Key Vaults across different teams and environments
  • Thousands of certificates with varying renewal cycles
  • Critical secrets that applications depend on
  • Different time zones and rotation schedules

The traditional approach of spreadsheets, manual checks, or basic Azure Monitor alerts breaks down quickly. You need something that scales automatically, costs practically nothing, and provides real-time visibility across your entire Azure estate.

The Solution: Event-Driven Monitoring Architecture

Key Vault monitoring automation architecture diagram

Single Function App, Unlimited Key Vaults

Instead of deploying monitoring resources per Key Vault (expensive and complex), we use a centralized architecture:

Management Group (100+ Key Vaults)
           ↓
   Single Function App
           ↓
     Action Group
           ↓
    Notifications

This approach provides:

  • Unlimited scalability: Monitor 1 or 1000+ Key Vaults with the same infrastructure
  • Cross-subscription coverage: Works across your entire Azure estate
  • Real-time alerts: Sub-5-minute notification delivery
  • Cost optimization: $15-25/month total (not per Key Vault!)

How It Works: The Technical Deep Dive

1. Event Grid System Topics (The Sensors)

Azure Key Vault automatically generates events when certificates and secrets are about to expire. We create Event Grid System Topics for each Key Vault to capture these events:

Event Types Monitored:
• Microsoft.KeyVault.CertificateNearExpiry
• Microsoft.KeyVault.CertificateExpired  
• Microsoft.KeyVault.SecretNearExpiry
• Microsoft.KeyVault.SecretExpired

The beauty? These events are generated automatically by Azure – no polling, no manual checking, just real-time notifications when things are about to expire.

2. Centralized Processing (The Brain)

A single Azure Function App processes ALL events from across your organization:

// Simplified event processing flow
eventGridEvent → parseEvent() → extractMetadata() → 
formatAlert() → sendToActionGroup()

Example Alert Generated:
{
  severity: "Sev1",
  alertTitle: "Certificate Expired in Key Vault",
  description: "Certificate 'prod-ssl-cert' has expired in Key Vault 'prod-keyvault'",
  keyVaultName: "prod-keyvault",
  objectType: "Certificate",
  expiryDate: "2024-01-15T00:00:00.000Z"
}

3. Smart Notification Routing (The Messenger)

Azure Action Groups handle notification distribution with support for:

  • Email notifications (unlimited recipients)
  • SMS alerts for critical expiries
  • Webhook integration with ITSM tools (ServiceNow, Jira, etc.)
  • Voice calls for emergency situations.

Implementation: Infrastructure as Code

The entire solution is deployed using Terraform, making it repeatable and version-controlled. Here’s the high-level infrastructure:

Resource Architecture

# Single monitoring resource group
resource "azurerm_resource_group" "monitoring" {
  name     = "rg-kv-monitoring-${var.timestamp}"
  location = var.primary_location
}

# Function App (handles ALL Key Vaults)
resource "azurerm_linux_function_app" "kv_processor" {
  name                = "func-kv-monitoring-${var.timestamp}"
  service_plan_id     = azurerm_service_plan.function_plan.id
  # ... configuration
}

# Event Grid System Topics (one per Key Vault)
resource "azurerm_eventgrid_system_topic" "key_vault" {
  for_each = { for kv in var.key_vaults : kv.name => kv }
  
  name                   = "evgt-${each.key}"
  source_arm_resource_id = "/subscriptions/${each.value.subscriptionId}/resourceGroups/${each.value.resourceGroup}/providers/Microsoft.KeyVault/vaults/${each.key}"
  topic_type            = "Microsoft.KeyVault.vaults"
}

# Event Subscriptions (route events to Function App)
resource "azurerm_eventgrid_event_subscription" "certificate_expiry" {
  for_each = { for kv in var.key_vaults : kv.name => kv }
  
  azure_function_endpoint {
    function_id = "${azurerm_linux_function_app.kv_processor.id}/functions/EventGridTrigger"
  }
  
  included_event_types = [
    "Microsoft.KeyVault.CertificateNearExpiry",
    "Microsoft.KeyVault.CertificateExpired"
  ]
}

CI/CD Pipeline Integration

The solution includes an Azure DevOps pipeline that:

  1. Discovers Key Vaults across your management group automatically
  2. Generates Terraform variables with all discovered Key Vaults
  3. Deploys infrastructure using infrastructure as code
  4. Validates deployment to ensure everything works
# Simplified pipeline flow
stages:
  - stage: DiscoverKeyVaults
    # Scan management group for all Key Vaults
    
  - stage: DeployMonitoring  
    # Deploy Function App and Event Grid subscriptions
    
  - stage: ValidateDeployment
    # Ensure monitoring is working correctly

Cost Analysis: Why This Approach Wins

Traditional Approach (Per-Key Vault Monitoring)

100 Key Vaults × $20/month per KV = $2,000/month
Annual cost: $24,000

This Approach (Centralized Monitoring)

Base infrastructure: $15-25/month
Event Grid events: $2-5/month  
Total: $17-30/month
Annual cost: $204-360

Savings: 98%+ reduction in monitoring costs

Detailed Cost Breakdown

Component               | Monthly Cost | Notes
Function App (Basic B1) | $13.14       | Handles unlimited Key Vaults
Storage Account         | $1-3         | Function runtime storage
Log Analytics           | $2-15        | Centralized logging
Event Grid              | $0.50-2      | $0.60 per million operations
Action Group            | $0           | Email notifications free
Total                   | $17-33       | Scales to unlimited Key Vaults

Implementation Guide: Getting Started

Prerequisites

  1. Azure Management Group with Key Vaults to monitor
  2. Service Principal with appropriate permissions:
    • Reader on Management Group
    • Contributor on monitoring subscription
    • Event Grid Contributor on Key Vault subscriptions
  3. Azure DevOps or similar CI/CD platform

Step 1: Repository Setup

Create this folder structure:

keyvault-monitoring/
├── terraform/
│   ├── main.tf              # Infrastructure definitions
│   ├── variables.tf         # Configuration variables
│   ├── terraform.tfvars     # Your specific settings
│   └── function_code/       # Function App source code
├── azure-pipelines.yml      # CI/CD pipeline
└── docs/                    # Documentation

Step 2: Configuration

Update terraform.tfvars with your settings:

# Required configuration
notification_emails = [
  "your-team@company.com",
  "security@company.com"
]

primary_location = "East US"
log_retention_days = 90

# Optional: SMS for critical alerts
sms_notifications = [
  {
    country_code = "1"
    phone_number = "5551234567"
  }
]

# Optional: Webhook integration
webhook_url = "https://your-itsm-tool.com/api/alerts"

Step 3: Deployment

The pipeline automatically:

  1. Scans your management group for all Key Vaults
  2. Generates infrastructure code with discovered Key Vaults
  3. Deploys monitoring resources using Terraform
  4. Validates functionality with test events

Expected deployment time: 5-10 minutes

Step 4: Validation

Test the setup by creating a short-lived certificate:

# Create a short-lived test certificate (validity is specified in months; 1 month is the minimum)
az keyvault certificate create \
  --vault-name "your-test-keyvault" \
  --name "test-monitoring-cert" \
  --policy '{
    "issuerParameters": {"name": "Self"},
    "x509CertificateProperties": {
      "validityInMonths": 1,
      "subject": "CN=test-monitoring"
    }
  }'

# You should receive an alert within 5 minutes

Operational Excellence

Monitoring the Monitor

The solution includes comprehensive observability:

// Function App performance dashboard
FunctionAppLogs
| where TimeGenerated > ago(24h)
| summarize 
    ExecutionCount = count(),
    SuccessRate = (countif(Level != "Error") * 100.0) / count(),
    AvgDurationMs = avg(DurationMs)
| extend PerformanceScore = case(
    SuccessRate >= 99.5, "Excellent",
    SuccessRate >= 99.0, "Good", 
    "Needs Attention"
)

Advanced Features and Customizations

1. Integration with ITSM Tools

The webhook capability enables integration with enterprise tools:

// ServiceNow integration example
const serviceNowPayload = {
  short_description: `${objectType} '${objectName}' expiring in Key Vault '${keyVaultName}'`,
  urgency: severity === 'Sev1' ? '1' : '3',
  category: 'Security',
  subcategory: 'Certificate Management',
  caller_id: 'keyvault-monitoring-system'
};

2. Custom Alert Routing

Different Key Vaults can route to different teams:

// Route alerts based on Key Vault naming convention
const getNotificationGroup = (keyVaultName) => {
  if (keyVaultName.includes('prod-')) return 'production-team';
  if (keyVaultName.includes('dev-')) return 'development-team';
  return 'platform-team';
};

3. Business Hours Filtering

Critical alerts can bypass business hours, while informational alerts respect working hours:

const shouldSendImmediately = (severity, currentTime) => {
  if (severity === 'Sev1') return true; // Always send critical alerts
  
  const businessHours = isBusinessHours(currentTime);
  return businessHours || isNearBusinessHours(currentTime, 2); // 2 hours before business hours
};

Troubleshooting Common Issues

Issue: No Alerts Received

Symptoms:

Events are visible in Azure, but no notifications are arriving

Resolution Steps:

  1. Check the Action Group configuration in the Azure Portal
  2. Verify the Function App is running and healthy
  3. Review Function App logs for processing errors
  4. Validate Event Grid subscription is active

Issue: High Alert Volume

Symptoms:

Too many notifications, alert fatigue

Resolution:

// Implement intelligent batching
const batchAlerts = (alerts, timeWindow = '15m') => {
  return alerts.reduce((batches, alert) => {
    const key = `${alert.keyVaultName}-${alert.objectType}`;
    batches[key] = batches[key] || [];
    batches[key].push(alert);
    return batches;
  }, {});
};

Issue: Missing Key Vaults

Symptoms: Some Key Vaults are not included in monitoring

Resolution:

  1. Re-run the discovery pipeline to pick up new Key Vaults
  2. Verify service principal has Reader access to all subscriptions
  3. Check for Key Vaults in subscriptions outside the management group
Part 2: Implementing Azure Virtual WAN – A Practical Walkthrough
https://blogs.perficient.com/2025/08/21/part-2-implementing-azure-virtual-wan-a-practical-walkthrough/

In Part 1 (Harnessing the Power of AWS Bedrock through CloudFormation / Blogs / Perficient), we discussed what Azure Virtual WAN is and why it’s a powerful solution for global networking. Now, let’s get hands-on and walk through the actual implementation—step by step, in a simple, conversational way.

Architecture diagram

1. Creating the Virtual WAN – The Network’s Control Plane

Virtual WAN is the heart of a global network, not just another resource. It replaces isolated VPN gateways per region, manual ExpressRoute configurations, and complex peering relationships.

Setting it up is easy:

  • Navigate to Azure Portal → Search “Virtual WAN”
  • Click Create and configure.
  • Name: Naming matters for enterprise environments
  • Resource Group: Create new rg-network-global (best practice for lifecycle management)
  • Type: Standard (Basic lacks critical features like ExpressRoute support)

Azure will set up the Virtual WAN in a few seconds. Now, the real fun begins.

2. Setting Up the Virtual WAN Hub – The Heart of The Network

The hub is where all connections converge. It’s like a major airport hub where traffic from different locations meets and gets efficiently routed. Without a hub, you’d need to configure individual gateways for every VPN and ExpressRoute connection, leading to higher costs and management overhead.

  • Navigate to the Virtual WAN resource → Click Hubs → New Hub.
  • Configure the Hub.
  • Region: Choose based on: Primary user locations & Azure service availability (some regions lack certain services)
  • Address Space: Assign a private IP range (e.g., 10.100.0.0/24).

Wait for deployment; this takes about 30 minutes (Azure is building VPN gateways, ExpressRoute gateways, and more behind the scenes).

Once done, the hub is ready to connect everything: offices, cloud resources, and remote users.

3. Connecting Offices via Site-to-Site VPN – Building Secure Tunnels

Branches and data centres need a reliable, encrypted connection to Azure. Site-to-Site VPN provides this over the public internet while keeping data secure. Without VPN tunnels, branch offices would rely on slower, less secure internet connections to access cloud resources, increasing latency and security risks.

  • In the Virtual WAN Hub, go to VPN (Site-to-Site) → Create VPN Site.
  • Name: branch-nyc-01
  • Private Address Space: e.g., 192.168.100.0/24 (must match on-premises network)
  • Link Speed: Set accurately for Azure’s QoS calculations
  • Download VPN Configuration: Azure provides a config file—apply it to the office’s VPN device (like a Cisco or Fortinet firewall).
  • Lastly, connect the VPN Site to the Hub.
  • Navigate to VPN connections → Create connection → Link the office to the hub.

Now, the office and Azure are securely connected.

4. Adding ExpressRoute – The Private Superhighway

For critical applications (like databases or ERP systems), VPNs might not provide enough bandwidth or stability. ExpressRoute gives us a dedicated, high-speed connection that bypasses the public internet. Without ExpressRoute, latency-sensitive applications (like VoIP or real-time analytics) could suffer from internet congestion or unpredictable performance.

  • Order an ExpressRoute Circuit: We can do this via the Azure Portal or through an ISP (like AT&T or Verizon).
  • Authorize the Circuit in Azure
  • Navigate to the Virtual WAN Hub → ExpressRoute → Authorize.
  • Linking it to Hub: Once it is authorized, connect the ExpressRoute circuit to the hub.

Now, the on-premises network has a dedicated, high-speed connection to Azure—no internet required.

5. Enabling Point-to-Site VPN for Remote Workers – The Digital Commute

Employees working from home need secure access to internal apps without exposing them to the public internet. P2S VPN lets them “dial in” securely from anywhere. Without P2S VPN, remote workers might resort to risky workarounds like exposing RDP or databases to the internet.

  • Configure P2S in The Hub
  • Navigate to VPN (Point-to-Site) → Configure.
  • Set Up Authentication: Choose certificate-based auth (secure and easy to manage) and upload the root/issuer certificates.
  • Assign an IP pool, e.g., 192.168.100.0/24 (this is where remote users will get their IPs); choose a range that does not overlap with your on-premises or VNet address spaces.
  • Download & Distribute the VPN Client

Employees install this on their laptops to connect securely. Now, the team can access Azure resources from anywhere just like they’re in the office.

6. Linking Azure Virtual Networks (VNets) – The Cloud’s Backbone

Applications in one VNet (e.g., frontend servers) often need to talk to another (e.g., databases). Rather than complex peering, the Virtual WAN handles routing automatically. Without VNet integration, you would need manual peering and route tables for every connection, creating a management nightmare at scale.

  • VNets need to be attached.
  • Navigate to The Hub → Virtual Network Connections → Add Connection.
  • Select the VNets. e.g., Connect vnet-app (for applications) and vnet-db (for databases).
  • Azure handles the Routing: Traffic flows automatically through the hub-no manual route tables needed.

Now, the cloud resources communicate seamlessly.

Monitoring & Troubleshooting

Networks aren’t “set and forget.” We need visibility to prevent outages and quickly fix issues. We can use tools like Azure Monitor, which tracks VPN/ExpressRoute health—like a dashboard showing all trains (data packets) moving smoothly. Again, Network Watcher can help to diagnose why a branch can’t connect.

Common Problems & Fixes

  • When VPN connections fail, the problem is often a mismatched shared key—simply re-enter it on both ends.
  • If ExpressRoute goes down, check with your ISP—circuit issues usually require provider intervention.
  • When VNet traffic gets blocked, verify route tables in the hub—missing routes are a common culprit.
Invoke the Mapbox Geocoding API to Populate the Location Autocomplete Functionality
https://blogs.perficient.com/2025/08/21/invoke-the-mapbox-geocoding-api-to-populate-the-location-autocomplete-functionality/

While working on one of my projects, I needed to implement an autocomplete box using Mapbox Geocoding APIs in a React/Next.js application. The goal was to filter a list of hospitals based on the selected location. The location results from the API include coordinates, which I compared with the coordinates of the hospitals in my list.

The API returns various properties, including coordinates, under the properties section (as shown in the image below). These coordinates (latitude and longitude) can be used to filter the hospital list by matching them with the selected location.

Mapbox result properties

The API requires an access token, which can be obtained by signing up on the Mapbox platform. You can refer to the Geocoding API documentation for more details. The documentation provides a variety of APIs that can be used depending on your specific requirements.

Below are some example APIs taken from the same link.

# A basic forward geocoding request
# Find Los Angeles

curl "https://api.mapbox.com/search/geocode/v6/forward?q=Los%20Angeles&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

# Find a town called 'Chester' in a specific region
# Add the proximity parameter with local coordinates
# This ensures the town of Chester, New Jersey is in the results

curl "https://api.mapbox.com/search/geocode/v6/forward?q=chester&proximity=-74.70850,40.78375&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

# Specify types=country to search only for countries named Georgia
# Results will exclude the American state of Georgia

curl "https://api.mapbox.com/search/geocode/v6/forward?q=georgia&types=country&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

# Limit the results to two results using the limit option
# Even though there are many possible matches
# for "Washington", this query will only return two results.

curl "https://api.mapbox.com/search/geocode/v6/forward?q=Washington&limit=2&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

# Search for the Place feature "Kaaleng" in the Ilemi Triangle. Specifying the cn worldview will return the country value South Sudan. Leaving out the worldview parameter defaults to the us worldview and returns the country value Kenya.

curl "https://api.mapbox.com/search/geocode/v6/forward?q=Kaaleng&worldview=cn&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

The implementation leverages React hooks along with state management for handling component behavior and data flow.

How to Create an Autocomplete Component in React

  1. Create a React component.
  2. Sign up and apply the access token and API URL to the constants.
  3. Create a type to bind the structure of the API response results.
  4. Use the useEffect hook to invoke the API.
  5. Map the fetched results to the defined type.
  6. Apply CSS to style the component and make the autocomplete feature visually appealing.
#constants.ts

export const APIConstants = {
  accessToken: 'YOUR_MAPBOX_ACCESS_TOKEN',
  geoCodeSearchForwardApiUrl: 'https://api.mapbox.com/search/geocode/v6/forward',
  searchWordCount: 3,
};
#LocationResultProps.ts

type Suggestions = {
  properties: {
    feature_type: string;
    full_address: string;
    name: string;
    name_preferred: string;
    coordinates: {
      longitude: number;
      latitude: number;
    };
  };
};
export type LocationResults = {
  features: Array<Suggestions>;
};
#Styles.ts

export const autoComplete = {
  container: {
    width: '250px',
    margin: '20px auto',
  },
  input: {
    width: '100%',
    padding: '10px',
    fontSize: '16px',
    border: '1px solid #ccc',
    borderRadius: '4px',
  },
  dropdown: {
    top: '42px',
    left: '0',
    right: '0',
    backgroundColor: '#fff',
    border: '1px solid #ccc',
    borderTop: 'none',
    maxHeight: '150px',
    listStyleType: 'none',
    padding: '0',
    margin: '0',
    zIndex: 1000,
  },
  item: {
    padding: '5px',
    cursor: 'pointer',
    borderBottom: '1px solid #eee',
  },
};

#LocationSearchInput.tsx

import React, { useEffect, useState } from 'react';
import { APIConstants } from 'lib/constants';
import { autoComplete } from '../Styles';
import { LocationResults } from 'lib/LocationResultProps';

export const Default = (): JSX.Element => {
  const apiUrlParam: string[][] = [
    //['country', 'us%2Cpr'],
    ['types', 'region%2Cpostcode%2Clocality%2Cplace%2Cdistrict%2Ccountry'],
    ['language', 'en'],
    //['worldview', 'us'],
  ];

  const [inputValue, setInputValue] = useState<string>('');
  const [results, setresults] = useState<LocationResults>();
  const [submitted, setSubmitted] = useState<boolean>(false);

  // When the input changes, reset the "submitted" flag.
  const handleChange = (value: string) => {
    setSubmitted(false);
    setInputValue(value);
  };
  const handleSubmit = (value: string) => {
    setSubmitted(true);
    setInputValue(value);
  };

  // Fetch results when the input value changes
  useEffect(() => {
    if (inputValue.length < APIConstants?.searchWordCount) {
      setresults(undefined);
      return;
    }
    if (submitted) {
      return;
    }
    const queryInputParam = [
      ['q', inputValue],
      ['access_token', APIConstants?.accessToken ?? ''],
    ];

    const fetchData = async () => {
      const queryString = apiUrlParam
        .concat(queryInputParam)
        .map((inner) => inner.join('='))
        .join('&');
      const url = APIConstants?.geoCodeSearchForwardApiUrl + '?' + queryString;

      try {
        const response: LocationResults = await (await fetch(url)).json();
        setresults(response);
        console.log(response);
      } catch (err: unknown) {
        console.error('Error obtaining location results for autocomplete', err);
      }
    };

    fetchData();
  }, [inputValue]);

  return (
    <div>
      <div style={autoComplete.container}>
        <input
          style={autoComplete.input}
          onChange={(e) => handleChange(e.target?.value)}
          value={inputValue}
          placeholder="Find Location"
        />

        {inputValue &&
          !submitted &&
          results?.features?.map((x) => {
            return (
              <ul style={autoComplete.dropdown}>
                <li style={autoComplete.item}>
                  <span onClick={() => handleSubmit(x?.properties?.full_address)}>
                    {x?.properties?.full_address}
                  </span>
                </li>
              </ul>
            );
          })}
      </div>
    </div>
  );
};

Finally, we can search for a location using a zip code, state, or country.


Additionally, the reverse geocoding API is used similarly, requiring only minor adjustments to the parameters and API URL. The location autocomplete box offers a wide range of use cases. It can be integrated into user forms such as registration or contact forms, where exact location coordinates or a full address need to be captured upon selection. Each location result includes various properties. Based on the user’s input, whether it’s a city, ZIP code, or state, the autocomplete displays matching results.

 

Smart Failure Handling in HCL Commerce with Circuit Breakers
https://blogs.perficient.com/2025/08/15/smart-failure-handling-in-hcl-commerce-with-circuit-breakers/

In modern enterprise systems, stability and fault tolerance are not optional; they are essential. One proven approach to ensure robustness is the Circuit Breaker pattern, widely used in API development to prevent cascading failures. HCL Commerce takes this principle further by embedding circuit breakers into its HCL Cache to effectively manage Redis failures.

What Is a Circuit Breaker?

The Circuit Breaker is a design pattern commonly used in API development to stop continuous requests to a service that is currently failing, thereby protecting the system from further issues. It helps maintain system stability by detecting failures and stopping the flow of requests until the issue is resolved.

The circuit breaker typically operates in three main (or “normal”) states. These are part of the standard global pattern of Circuit Breaker design.

Normal States:

  1. CLOSED:
  • At the start, the circuit breaker allows all outbound requests to external services without restrictions.
  • It monitors the success and failure of these calls.
  2. OPEN:
  • The circuit breaker rejects all external calls.
  • This state is triggered when the failure threshold is reached (e.g., 50% failure rate).
  • It remains in this state for a specified duration (e.g., 60 seconds).
  3. HALF_OPEN:
  • After the wait duration in the OPEN state, the circuit breaker transitions to HALF_OPEN.
  • It allows a limited number of calls to check if the external service has recovered.
  • If these calls succeed (e.g., receive a 200 status), the circuit breaker transitions back to CLOSED.
  • If the error rate continues to be high, the circuit breaker reverts to the OPEN state.
Circuit breaker pattern with normal states

Special States:

  1. FORCED_OPEN:
  • The circuit breaker is manually set to reject all external calls.
  • No calls are allowed, regardless of the external service’s status.
  2. DISABLED:
  • The circuit breaker is manually set to allow all external calls.
  • It does not monitor or track the success or failure of these calls.
Circuit breaker pattern with special states

Circuit Breaker in HCL Cache (for Redis)

In HCL Commerce, the HCL Cache layer interacts with Redis for remote coaching. But what if Redis becomes unavailable or slow? HCL Cache uses circuit breakers to detect issues and temporarily stop calls to Redis, thus protecting the rest of the system from being affected.

Behavior Overview:

  • If 20 consecutive failures occur in 10 seconds, the Redis connection is cut off.
  • The circuit remains open for 60 seconds.
  • At this stage, the circuit enters a HALF_OPEN state, where it sends limited test requests to evaluate if the external service has recovered.
  • If even 2 of these test calls fail, the circuit reopens for another 60 seconds.

Configuration Snapshot

To manage Redis outages effectively, HCL Commerce provides fine-grained configuration settings for both Redis client behavior and circuit breaker logic. These settings are defined in the Cache YAML file, allowing teams to tailor fault-handling based on their system’s performance and resilience needs.

 Redis Request Timeout Configuration

Slow Redis responses are not treated as failures unless they exceed the defined timeout threshold. The Redis client in HCL Cache supports timeout and retry configurations to control how persistent the system should be before declaring a failure:

timeout: 3000           # Max time (in ms) to wait for a Redis response
retryAttempts: 3        # Number of retry attempts on failure
retryInterval: 1500     # Delay (in ms) between retry attempts

With the above configuration, the system will spend up to 16.5 seconds (3000 + 3 × (3000 + 1500)) trying to get a response before returning a failure. While these settings offer robustness, overly long retries can result in delayed user responses or log flooding, so tuning is essential.
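
For faster failure detection, the same keys can simply be tuned down. The following is an illustrative sketch only; the values are assumptions, not recommendations, and should reflect your own latency and resilience requirements:

timeout: 1000          # Max time (in ms) to wait for a Redis response
retryAttempts: 1       # Retry once before declaring a failure
retryInterval: 500     # Delay (in ms) before the retry

With these values, the worst case drops to 2.5 seconds (1000 + 1 × (1000 + 500)), at the cost of giving Redis less opportunity to recover from brief hiccups.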

Circuit Breaker Configuration

Circuit breakers are configured under the redis.circuitBreaker section of the Cache YAML file. Here’s an example configuration:

redis:
  circuitBreaker:
    scope: auto
    retryWaitTimeMs: 60000
    minimumFailureTimeMs: 10000
    minimumConsecutiveFailures: 20
    minimumConsecutiveFailuresResumeOutage: 2 
cacheConfigs:
  defaultCacheConfig:
    localCache:
      enabled: true
      maxTimeToLiveWithRemoteOutage: 300

Explanation of Key Fields:

  • scope: auto: Automatically determines whether the circuit breaker operates at the client or cache/shard level, depending on the topology.
  • retryWaitTimeMs (Default: 60000): Time to wait before attempting Redis connections again after the circuit breaker has been triggered.
  • minimumFailureTimeMs (Default: 10000): Minimum duration during which consecutive failures must occur before opening the circuit.
  • minimumConsecutiveFailures (Default: 20): Number of continuous failures required to trigger outage mode.
  • minimumConsecutiveFailuresResumeOutage (Default: 2): Number of failures after retrying that will put the system back into outage mode.
  • maxTimeToLiveWithRemoteOutage: During Redis outages, local cache entries use this TTL value (in seconds) to serve data without invalidation messages.

Real-world Analogy

Imagine you have a web service that fetches data from an external API. Here’s how the circuit breaker would work:

  1. CLOSED: The service makes calls to the API and monitors the responses.
  2. OPEN: If the API fails too often (e.g., 50% of the time), the circuit breaker stops making calls for 60 seconds.
  3. HALF_OPEN: After 60 seconds, the circuit breaker allows a few calls to the API to see if it’s working again.
  4. CLOSED: If the API responds successfully, the circuit breaker resumes normal operation.
  5. OPEN: If the API still fails, the circuit breaker stops making calls again and waits.

Final Thought

By combining the classic circuit breaker pattern with HCL Cache’s advanced configuration, HCL Commerce ensures graceful degradation during Redis outages. It’s not just about availability—it’s about intelligent fault recovery.

For more detailed information, you can refer to the official documentation here:
🔗 HCL Commerce Circuit Breakers – Official Docs

]]>
https://blogs.perficient.com/2025/08/15/smart-failure-handling-in-hcl-commerce-with-circuit-breakers/feed/ 0 386135
How to Setup Nwayo Preprocessor in Magento 2 https://blogs.perficient.com/2025/08/13/how-to-setup-nwayo-preprocessor-in-magento-2/ https://blogs.perficient.com/2025/08/13/how-to-setup-nwayo-preprocessor-in-magento-2/#respond Wed, 13 Aug 2025 06:01:56 +0000 https://blogs.perficient.com/?p=385807

What is Nwayo?

Nwayo Preprocessor is an extendable front-end boilerplate designed to streamline development for multi-theme, multi-site, and multi-CMS front-end frameworks. It provides an efficient workflow for building responsive, scalable, and maintainable web themes across different platforms. 

In Magento 2, Nwayo can be particularly beneficial for front-end developers as it simplifies the theme deployment process. With just a single change in the Sass files, the framework can automatically regenerate and apply updates across the site. This approach not only accelerates the development process but also ensures consistency in the front-end experience across various themes and websites. 

Benefits of Using Nwayo Preprocessor

Time-Saving and Efficiency

  •  Nwayo automates the process of compiling and deploying front-end code, particularly Sass to CSS, with just a few commands. This allows developers to focus more on building and refining features rather than managing repetitive tasks like manual builds and deployments.                                                                                                            

Scalability Across Multi-Site and Multi-Theme Projects

  • Nwayo is designed to handle multi-site and multi-theme environments, which is common in complex platforms like Magento 2. This allows developers to easily maintain and apply changes across different sites and themes without duplicating efforts, making it ideal for large-scale projects.                                                                                   

Consistency and Maintainability

  • By centralizing code management and automating build processes, Nwayo ensures that all updates made in Sass files are applied consistently throughout the project. This helps in maintaining a uniform look and feel across different sections and themes, reducing the risk of human error and improving maintainability.                                                                                                                                                                                                                                        

Flexibility and Extensibility

Nwayo is highly extensible, allowing developers to tailor the boilerplate to their specific project needs. Whether it’s adding new workflows, integrating with different CMS platforms, or customizing the theme, Nwayo provides a flexible framework that can adapt to various front-end requirements.                                             

Version Control and Updates

With built-in commands to check versions and install updates, Nwayo makes it easy to keep the workflow up to date. This ensures compatibility with the latest development tools and standards, helping developers stay current with front-end best practices.  

Requirements to Set Up Nwayo

i) Node.js

ii) Nwayo CLI

How to Set Up Nwayo Preprocessor in Magento 2?

Run the commands in your project root folder

Step 1

  • To set up the Nwayo boilerplate in the project
  • npx @absolunet/nwayo-grow-project

Step 2

  • Install workflow and vendor (in the Nwayo root folder)
  • npm install

Step 3

  • Install CLI (in the Nwayo root folder) 
  • npm install -g @absolunet/nwayo-cli

 Step 4

  • Install Nwayo Workflow (in the Nwayo folder) 
  • nwayo install workflow

Step 5

  • Run the project (in the Nwayo folder) 
  • (It will convert Sass to CSS) 
  • nwayo run watch

Step 6

  • Build the project (in the Nwayo folder) 
  • (It will build the Sass Files) 
  • nwayo rebuild


Magento 2 Integration

Nwayo integrates seamlessly with Magento 2, simplifying the process of managing multi-theme, multi-site environments. Automating Sass compilation and CSS generation allows developers to focus on custom features without worrying about the manual overhead of styling changes. With Nwayo, any updates to your Sass files are quickly reflected across your Magento 2 themes, saving time and reducing errors. 

Compatibility with Other Frameworks and CMS

Nwayo is a versatile tool designed to work with various front-end frameworks and CMS platforms. Its extendable architecture allows it to be used beyond Magento 2, providing a unified front-end development workflow for multiple environments. Some of the other frameworks and platforms that Nwayo supports include: 

1. WordPress

Nwayo can be easily adapted to work with WordPress themes. Since WordPress sites often rely on custom themes, Nwayo can handle Sass compilation and make theme management simpler by centralizing the CSS generation process for various stylesheets used in a WordPress project. 

2. Drupal

For Drupal projects, Nwayo can streamline theme development, allowing developers to work with Sass files while ensuring CSS is consistently generated across all Drupal themes. This is especially helpful when maintaining multi-site setups within Drupal, as it can reduce the time needed for theme updates. 

3. Laravel

When working with Laravel-based applications that require custom front-end solutions, Nwayo can automate the build process for Sass files, making it easier to manage the styles for different views and components within Laravel Blade templates. It helps keep the front-end codebase clean and optimized. 

4. Static Site Generators (Jekyll, Hugo, etc.)

Nwayo can also be used in static site generators like Jekyll or Hugo. In these setups, it handles the styling efficiently by generating optimized CSS files from Sass. This is particularly useful when you need to manage themes for static websites where speed and simplicity are key priorities. 

Framework-Agnostic Features

Nwayo’s CLI and Sass-based workflow can be customized to work in nearly any front-end project, regardless of the underlying CMS or framework. This makes it suitable for developers working on custom projects where there’s no predefined platform, allowing them to benefit from a consistent and efficient development workflow across different environments. 

Performance and Optimization

Nwayo includes several built-in features for optimizing front-end assets: 

  • Minification of CSS files: Ensures that the final CSS output is as small and efficient as possible, helping to improve page load times. 
  • Code Splitting: Allows developers to load only the required CSS for different pages or themes, reducing the size of CSS payloads and improving site performance. 
  • Automatic Prefixing: Nwayo can automatically add vendor prefixes for different browsers, ensuring cross-browser compatibility without manual adjustments.              

Custom Workflow Adaptation

Nwayo’s modular architecture allows developers to easily add or remove features from the workflow. Whether you’re working with React, Vue, or other JavaScript frameworks, Nwayo’s preprocessor can be extended to fit the unique requirements of any project. 

Example Framework Compatibility Diagram

The diagram below summarizes Nwayo’s compatibility with different frameworks and CMS platforms:

Framework Compatibility Diagram

This overview makes it clear which frameworks Nwayo supports, giving developers a quick sense of its flexibility.

10 Useful Nwayo Preprocessor Commands 

In addition to the basic commands for setting up and managing Nwayo in your project, here are other helpful commands you can use for various tasks:                                                                                                                                           

1. Check Nwayo Version


This command allows you to verify the currently installed version of Nwayo in your environment. 

2. Install Vendors 


Installs third-party dependencies required by the Nwayo workflow, making sure your project has all the necessary assets to function correctly. 

3. Remove Node Modules 


This command clears the node_modules folder, which may be helpful if you’re facing dependency issues or need to reinstall modules. 

4. Build the Project 


Runs a complete build of the project, compiling all Sass files into CSS. This is typically used when preparing a project for production.

5. Watch for File Changes 


Watches for changes in your Sass files and automatically compiles them into CSS. This is useful during development when you want real-time updates without having to manually trigger a build. 

6. Linting (Check for Code Quality) 


Checks your Sass files for code quality and best practices using predefined linting rules. This helps ensure that your codebase follows consistent styling and performance guidelines. 

7. Clean Build Artifacts 


Removes generated files (CSS, maps, etc.) to ensure that you’re working with a clean project. This can be useful when preparing for a fresh build.

8. Generate Production-Ready CSS


This command builds the project in production mode, minifying CSS files and optimizing them for faster load times.

9. List Available Commands


Displays all available commands, providing a quick reference for tasks that can be executed via the Nwayo CLI.

10. Nwayo Configurations (View or Edit) 


Allows you to view or modify the configuration settings for your Nwayo setup, such as output paths or preprocessing options.

By utilizing these commands, you can take full advantage of Nwayo’s features and streamline your front-end development workflow in Magento 2 or other compatible frameworks.

For a complete list of commands, visit the Nwayo CLI Documentation.

Reference Links

For more detailed information and official documentation on Nwayo, visit the following resources:

  1. Nwayo Official Documentation
    https://documentation.absolunet.com/nwayo/
    This is the official guide to setting up and using Nwayo. It includes installation instructions, supported commands, and best practices for integrating Nwayo with various frameworks, including Magento 2.
  2. Nwayo GitHub Repository
    https://github.com/absolunet/nwayo
    The GitHub repository provides access to the Nwayo source code, release notes, and additional resources for developers looking to contribute or understand the inner workings of the tool.
  3. Nwayo CLI Documentation
    https://npmjs.com/package/@absolunet/nwayo-cli
    This page details the Nwayo CLI, including installation instructions, supported commands, and usage examples.

Conclusion

In conclusion, using Nwayo can significantly simplify the front-end development process, allowing developers to focus on building unique features rather than spending time on repetitive tasks. By utilizing existing code templates and libraries, developers can save time and improve their productivity.

]]>
https://blogs.perficient.com/2025/08/13/how-to-setup-nwayo-preprocessor-in-magento-2/feed/ 0 385807
Why Value-Based Care Needs Digital Transformation to Succeed https://blogs.perficient.com/2025/08/12/why-value-based-care-needs-digital-transformation-to-succeed/ https://blogs.perficient.com/2025/08/12/why-value-based-care-needs-digital-transformation-to-succeed/#comments Tue, 12 Aug 2025 19:18:46 +0000 https://blogs.perficient.com/?p=385579

The pressure is on for healthcare organizations to deliver more—more value, more equity, more impact. That’s where a well-known approach is stepping back into the spotlight.

If you’ve been around healthcare conversations lately, you’ve probably heard the resurgence of the term value-based care. And there’s a good reason for that. It’s not just a buzzword—it’s reshaping how we think about health, wellness, and the entire care experience.

What Is Value-Based Care, Really?

At its core, value-based care is a shift away from the old-school fee-for-service model, where providers got paid for every test, procedure, or visit, regardless of whether it actually helped the patient. Instead, value-based care rewards providers for delivering high-quality, efficient care that leads to better health outcomes.

It’s not about how much care is delivered, it’s about how effective that care is.

This shift matters because it places patients at the center of everything. It’s about making sure people get the right care, at the right time, in the right setting. That means fewer unnecessary tests, fewer duplicate procedures, and less of the fragmentation that’s plagued the system for decades.

The results? Better experiences for patients. Lower costs. Healthier communities.

Explore More: Access to Care Is Evolving: What Consumer Insights and Behavior Models Reveal

Benefits and Barriers of Value-Based Care in Healthcare Transformation

There’s a lot to be excited about, and for good reason! When we focus on prevention, chronic disease management, and whole-person wellness, we can avoid costly hospital stays and emergency room visits. That’s not just good for the healthcare system, it’s good for people, families, and communities. It moves us closer to the holy grail in healthcare: the quintuple aim. Achieving it means delivering better outcomes, elevating experiences for both patients and clinicians, reducing costs, and advancing health equity.

The challenge? Turning value-based care into a scalable, sustainable reality isn’t easy.

Despite more than a decade of pilots, programs, and well-intentioned reforms, only a small number of healthcare organizations have been able to scale their value-based care models effectively. Why? Because many still struggle with some pretty big roadblocks—like outdated technology, disconnected systems, siloed data, and limited ability to manage risk or coordinate care.

That’s where digital transformation comes in.

To make value-based care real and sustainable, healthcare organizations are rethinking their infrastructure from the ground up. They’re adopting cloud-based platforms and interoperable IT systems that allow for seamless data exchange across providers, payers, and patients. They’re tapping into advanced analytics, intelligent automation, and AI to identify at-risk patients, personalize care, and make smarter decisions faster.

As organizations work to enable VBC through digital transformation, it’s critical to really understand what the current research says. Our recent study, Access to Care: The Digital Imperative for Healthcare Leaders, backs up these trends, showing that digital convenience is no longer a differentiator—it’s a baseline expectation.

Findings show that nearly half of consumers have opted for digital-first care instead of visiting their regular physician or provider.

This shift highlights how important it is to offer simple and intuitive self-service digital tools that help people get what they need—fast. When it’s easy to find and access care, people are more likely to trust you, stick with you, and come back when they need you again.

You May Also Enjoy: How Innovative Healthcare Organizations Integrate Clinical Intelligence

Redesigning Care Models for a Consumer-Centric, Digitally Enabled Future

Care models are also evolving. Instead of reacting to illness, we’re seeing a stronger focus on prevention, early intervention, and proactive outreach. Consumer-centric tools like mobile apps, patient portals, and personalized health reminders are becoming the norm, not the exception. It’s all part of a broader movement to meet people where they are and give them more control over their health journey.

But here’s an important reminder: none of these efforts work in a vacuum.

Value-based care isn’t just a technology upgrade or a process tweak. It’s a cultural shift.

Success requires aligning people, processes, data, and technology in a way that’s intentional and strategic. It’s about creating an integrated system that’s designed to improve outcomes and then making those improvements stick.

So, while the road to value-based care may be long and winding, the destination is worth it. It’s not just a different way of delivering care—it’s a smarter, more sustainable one.

Success In Action: Empowering Healthcare Consumers and Their Care Ecosystems With Interoperable Data

Reimagine Healthcare Transformation With Confidence

If you’re exploring how to modernize your digital front door, consider starting with a strategic assessment. Align your goals, audit your content, and evaluate your tech stack. The path to better outcomes starts with a smarter, simpler way to help patients find care.

We combine strategy, industry best practices, and technology expertise to deliver award-winning results for leading healthcare organizations.

  • Business Transformation: Activate strategy for transformative outcomes and health experiences.
  • Modernization: Maximize technology to drive health innovation, efficiency, and interoperability.
  • Data + Analytics: Power enterprise agility and accelerate healthcare insights.
  • Consumer Experience: Connect, ease, and elevate impactful health journeys.

Our approach to designing and implementing AI and machine learning (ML) solutions promotes secure and responsible adoption and ensures demonstrated and sustainable business value.

Discover why we have been trusted by the 10 largest health systems and the 10 largest health insurers in the U.S. Explore our healthcare expertise and contact us to learn more.

]]>
https://blogs.perficient.com/2025/08/12/why-value-based-care-needs-digital-transformation-to-succeed/feed/ 1 385579
End-to-End DevSecOps in CI/CD Pipelines: Build Secure Apps with Sast, Dast and Azure DevOps https://blogs.perficient.com/2025/08/06/devsecops-azure-devops-ci-cd-pipeline/ https://blogs.perficient.com/2025/08/06/devsecops-azure-devops-ci-cd-pipeline/#respond Wed, 06 Aug 2025 14:43:41 +0000 https://blogs.perficient.com/?p=384208

Introduction to DevSecOps

DevSecOps is the evolution of DevOps with a focused integration of security throughout the software development lifecycle (SDLC). It promotes a cultural and technical shift by “shifting security left,” integrating security early in the CI/CD pipeline instead of treating it as an afterthought.


While DevOps engineers focus on speed, automation, and reliability, DevSecOps engineers share the same goals with an added responsibility: ensuring security at every stage of the process.

DevSecOps = Development + Security + Operations

By embedding security from the beginning, DevSecOps enables organizations to build secure software faster, reduce costs, and minimize risks.

Why Shift Left with Security?

Shifting security left means embedding security checks earlier in the pipeline. This approach offers several key benefits:

  • Early Detection: Identifies vulnerabilities before they reach production.
  • Cost Savings: Fixing security issues in earlier phases of development is significantly more cost-effective.

  • Reduced Risk: Early intervention helps prevent critical vulnerabilities from being deployed.

Implementing DevSecOps in an Existing CI/CD Pipeline

Prerequisites

To implement DevSecOps in your Azure DevOps pipeline, ensure the following infrastructure is in place:

  • Azure VM (for self-hosted Azure DevOps agent)

  • Azure Kubernetes Service (AKS)

  • Azure Container Registry (ACR)

  • Azure DevOps project and repository

  • SonarQube (for static code analysis)

  • Docker registry service connection (Docker Hub or ACR)

Service Connections Setup

1. Docker Registry Connection

  • Go to Azure DevOps → Project Settings → Service Connections.

  • Click “New service connection” → Select Docker Registry.

  • Choose Docker Hub or ACR.

  • Provide Docker ID/Registry URL and credentials.

  • Verify and save the connection.

2. AKS Service Connection

  • Azure DevOps → Project Settings → Service Connections.

  • Click “New service connection” → Select Azure Resource Manager.

  • Use Service Principal (automatic).

  • Select your subscription and AKS resource group.

  • Name the connection and save.

3. SonarQube Service Connection

  • Azure DevOps → Project Settings → Service Connections.

  • New service connection → SonarQube.

  • Input the Server URL and token.

  • Save and verify.
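
Once these service connections exist, pipeline tasks reference them by name. The snippet below is a minimal sketch of how those references might look in the Azure DevOps YAML pipeline; the connection names (acr-connection, sonarqube-connection, aks-connection), project key, resource group, and cluster name are placeholders rather than values from this setup.

steps:
  # Log in to the container registry through the Docker registry service connection
  - task: Docker@2
    inputs:
      containerRegistry: 'acr-connection'
      command: 'login'

  # Prepare SonarQube analysis through the SonarQube service connection
  - task: SonarQubePrepare@5
    inputs:
      SonarQube: 'sonarqube-connection'
      scannerMode: 'CLI'
      configMode: 'manual'
      cliProjectKey: 'sample-app'

  # Run Azure CLI commands against AKS through the Azure Resource Manager connection
  - task: AzureCLI@2
    inputs:
      azureSubscription: 'aks-connection'
      scriptType: 'bash'
      scriptLocation: 'inlineScript'
      inlineScript: 'az aks get-credentials --resource-group my-rg --name my-aks-cluster'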

Main Features Covered in DevSecOps Pipeline


  • Secret Scanning

  • Dependency Scanning (SCA)

  • Static Code Analysis (SAST)

  • Container Image Scanning

  • DAST (Dynamic Application Security Testing)

  • Quality Gates Enforcement

  • Docker Build & Push

  • AKS Deployment

Pipeline Stages Overview

1. Secret Scanning


Tools

detect-secrets, Trivy

Steps

  • Install Python and detect-secrets.

  • Scan source code for hardcoded secrets.

  • Run Trivy with --security-checks secret.

  • Save results as HTML → Publish to pipeline artifacts.

  • Apply quality gates to fail builds on critical secrets.
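
Put together, this stage could look roughly like the script steps below. This is a hedged sketch rather than an exact pipeline: the report file names, the Trivy HTML template path (which depends on how Trivy is installed), and the CRITICAL-only threshold are assumptions.

steps:
  # Scan the repository for hardcoded secrets with detect-secrets
  - script: |
      pip install detect-secrets
      detect-secrets scan > detect-secrets-report.json
    displayName: 'detect-secrets scan'

  # Trivy secret scan with an HTML report
  - script: |
      trivy fs --security-checks secret --format template \
        --template "@/usr/local/share/trivy/templates/html.tpl" \
        -o trivy-secret-report.html .
    displayName: 'Trivy secret scan (HTML report)'

  # Publish the report as a pipeline artifact
  - task: PublishPipelineArtifact@1
    inputs:
      targetPath: 'trivy-secret-report.html'
      artifact: 'secret-scan-report'

  # Quality gate: fail the build if critical secrets are detected
  - script: |
      trivy fs --security-checks secret --severity CRITICAL --exit-code 1 .
    displayName: 'Secret scan quality gate'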

2. Dependency Scanning (SCA)


Tools

Safety, Trivy

Steps

  • Use requirements.txt for dependencies.

  • Run Safety to identify known vulnerabilities.

  • Scan the filesystem using Trivy fs.

  • Publish results.

  • Fail pipeline if critical vulnerabilities exceed the threshold.
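
A minimal sketch of these steps as pipeline script steps follows; the HIGH/CRITICAL threshold is an assumption and should mirror your own risk policy.

steps:
  # Check declared Python dependencies against known-vulnerability databases
  - script: |
      pip install safety
      safety check -r requirements.txt --full-report
    displayName: 'Safety dependency check'

  # Scan the project filesystem with Trivy and gate on severity
  - script: |
      trivy fs --security-checks vuln --severity HIGH,CRITICAL --exit-code 1 .
    displayName: 'Trivy filesystem scan with severity gate'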

3. Static Code Analysis (SAST)


Tools

SonarQube, Bandit

Steps

  • Use Bandit for Python security issues.

  • Run SonarQube analysis via CLI.

  • Enforce the SonarQube Quality Gate to fail the pipeline when the gate conditions are not met.
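
As a rough sketch, the SAST stage might combine Bandit with the SonarQube scanner CLI as shown below. The SONAR_* values are hypothetical pipeline variables, and sonar.qualitygate.wait=true makes the scanner step fail when the quality gate does not pass.

steps:
  # Bandit: static security analysis for Python code
  - script: |
      pip install bandit
      bandit -r . -f html -o bandit-report.html
    displayName: 'Bandit security scan'

  # SonarQube analysis via the scanner CLI, waiting on the quality gate result
  - script: |
      sonar-scanner \
        -Dsonar.projectKey=$(SONAR_PROJECT_KEY) \
        -Dsonar.host.url=$(SONAR_HOST_URL) \
        -Dsonar.login=$(SONAR_TOKEN) \
        -Dsonar.qualitygate.wait=true
    displayName: 'SonarQube analysis and quality gate'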

4. Container Image Build & Scan


Tools

Docker, Trivy

Steps

  • Build the Docker image with a version tag.

  • Scan the image using Trivy.

  • Generate and publish scan reports.

  • Apply a security gate — fail on high-severity vulnerabilities.

  • Push image to ACR if passed.
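
A hedged sketch of the build-scan-push flow is shown below; the ACR name, repository, and registry connection name are placeholders, and the Trivy gate runs before the push so a failing scan keeps the image out of the registry.

steps:
  # Build the image with a unique version tag
  - script: |
      docker build -t $(ACR_NAME).azurecr.io/sample-app:$(Build.BuildId) .
    displayName: 'Build Docker image'

  # Security gate: fail on HIGH/CRITICAL vulnerabilities before pushing
  - script: |
      trivy image --severity HIGH,CRITICAL --exit-code 1 \
        $(ACR_NAME).azurecr.io/sample-app:$(Build.BuildId)
    displayName: 'Trivy image scan'

  # Push to ACR only if the previous steps succeeded
  - task: Docker@2
    inputs:
      containerRegistry: 'acr-connection'
      repository: 'sample-app'
      command: 'push'
      tags: '$(Build.BuildId)'
    displayName: 'Push image to ACR'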

5. DAST – OWASP ZAP Scan


Tools

OWASP ZAP

Steps

  • Run the app in a test container network.

  • Perform ZAP baseline scan.

  • Save results as HTML.

  • Stop the test container.

  • Apply a security gate to block high-risk findings.
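
One way to wire this up is sketched below; the application port, image tag, and the ZAP container image name (ghcr.io/zaproxy/zaproxy:stable) are assumptions to verify against your environment.

steps:
  # Run the app in a throwaway network, scan it with ZAP, then clean up
  - script: |
      docker network create zapnet
      docker run -d --name app-under-test --network zapnet \
        $(ACR_NAME).azurecr.io/sample-app:$(Build.BuildId)

      # Baseline (passive) scan; the HTML report lands in the working directory
      docker run --rm --network zapnet \
        -v $(System.DefaultWorkingDirectory):/zap/wrk \
        ghcr.io/zaproxy/zaproxy:stable \
        zap-baseline.py -t http://app-under-test:8000 -r zap-report.html
      ZAP_EXIT=$?

      # Stop the test container and remove the network
      docker rm -f app-under-test
      docker network rm zapnet

      # Security gate: propagate ZAP's exit code so findings above the threshold fail the stage
      exit $ZAP_EXIT
    displayName: 'OWASP ZAP baseline scan'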

6. Deploy to AKS


Tools

kubectl, Kubernetes 

Steps

  • Fetch AKS credentials.

  • Use envsubst to fill in manifest variables.

  • Deploy the app via kubectl apply.

  • Trigger a pod restart to deploy a new image.
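
A minimal sketch of the deployment step follows; the resource group, cluster name, manifest path, deployment name, and the IMAGE_TAG variable consumed by envsubst are placeholders that should match what your manifests actually reference.

steps:
  - task: AzureCLI@2
    inputs:
      azureSubscription: 'aks-connection'
      scriptType: 'bash'
      scriptLocation: 'inlineScript'
      inlineScript: |
        # Fetch AKS credentials for kubectl
        az aks get-credentials --resource-group my-rg --name my-aks-cluster

        # Substitute pipeline values into the manifest and apply it
        export IMAGE_TAG=$(Build.BuildId)
        envsubst < k8s/deployment.yaml | kubectl apply -f -

        # Restart the deployment so pods pull the newly pushed image
        kubectl rollout restart deployment/sample-app
    displayName: 'Deploy to AKS'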

Conclusion

DevSecOps is not just a practice; it’s a mindset. By integrating security at every phase of your CI/CD pipeline, you’re not only protecting your software but also enhancing the speed and confidence with which you can deliver it.

Implementing these practices with Azure DevOps, SonarQube, Trivy, and other tools makes securing your applications systematic, efficient, and measurable.

Secure early. Secure often. Secure always. That’s the DevSecOps way.

]]>
https://blogs.perficient.com/2025/08/06/devsecops-azure-devops-ci-cd-pipeline/feed/ 0 384208