Technical Articles / Blogs / Perficient

Why It’s Time to Move from SharePoint On-Premises to SharePoint Online

In today’s fast-paced digital workplace, agility, scalability, and collaboration aren’t just nice to have—they’re business-critical. If your organization is still on Microsoft SharePoint On-Premises, now is the time to make the move to SharePoint Online. Here’s why this isn’t just a technology upgrade—it’s a strategic leap forward.

1. Work Anywhere, Without Barriers

SharePoint Online empowers your workforce with secure access to content from virtually anywhere. Whether your team is remote, hybrid, or on the go, they can collaborate in real time without being tethered to a corporate network or VPN.

2. Always Up to Date

Forget about manual patching and version upgrades. SharePoint Online is part of Microsoft 365, which means you automatically receive the latest features, security updates, and performance improvements—without the overhead of managing infrastructure.

3. Reduce Costs and Complexity

Maintaining on-premises servers is expensive and resource-intensive. By moving to SharePoint Online, you eliminate hardware costs, reduce IT overhead, and streamline operations. Plus, Microsoft handles the backend, so your team can focus on innovation instead of maintenance.

4. Enterprise-Grade Security and Compliance

Microsoft invests heavily in security, offering built-in compliance tools, data loss prevention, and advanced threat protection. SharePoint Online is designed to meet global standards and industry regulations, giving you peace of mind that your data is safe.

5. Seamless Integration with Microsoft 365

SharePoint Online integrates effortlessly with Microsoft Teams, OneDrive, Power Automate, and Power BI—enabling smarter workflows, better insights, and more connected experiences across your organization.

6. Scalability for the Future

Whether you’re a small business or a global enterprise, SharePoint Online scales with your needs. You can easily add users, expand storage, and adapt to changing business demands without worrying about infrastructure limitations.

Why Perficient for Your SharePoint Online Migration 

Migrating to SharePoint Online is more than a move to the cloud—it’s a chance to transform how your business works. At Perficient, we help you turn common migration challenges into measurable wins:
  • 35% boost in collaboration efficiency
  • Up to 60% cost savings per user
  • 73% reduction in data breach risk
  • 100+ IT hours saved each month
Our Microsoft 365 Modernization solutions don’t just migrate content—they build a secure, AI-ready foundation. From app modernization and AI-powered search to Microsoft Copilot integration, Perficient positions your organization for the future.
Automating Azure Key Vault Secret and Certificate Expiry Monitoring with Azure Function App

How to monitor hundreds of Key Vaults across multiple subscriptions for just $15-25/month

The Challenge: Key Vault Sprawl in Enterprise Azure

If you’re managing Azure at enterprise scale, you’ve likely encountered this scenario: Key Vaults scattered across dozens of subscriptions, hundreds of certificates and secrets with different expiry dates, and the constant fear of unexpected outages due to expired certificates. Manual monitoring simply doesn’t scale when you’re dealing with:

  • Multiple Azure subscriptions (often 10-50+ in large organizations)
  • Hundreds of Key Vaults across different teams and environments
  • Thousands of certificates with varying renewal cycles
  • Critical secrets that applications depend on
  • Different time zones and rotation schedules

The traditional approach of spreadsheets, manual checks, or basic Azure Monitor alerts breaks down quickly. You need something that scales automatically, costs practically nothing, and provides real-time visibility across your entire Azure estate.

The Solution: Event-Driven Monitoring Architecture

[Diagram: Key Vault monitoring automation architecture]

Single Function App, Unlimited Key Vaults

Instead of deploying monitoring resources per Key Vault (expensive and complex), we use a centralized architecture:

Management Group (100+ Key Vaults)
           ↓
   Single Function App
           ↓
     Action Group
           ↓
    Notifications

This approach provides:

  • Unlimited scalability: Monitor 1 or 1000+ Key Vaults with the same infrastructure
  • Cross-subscription coverage: Works across your entire Azure estate
  • Real-time alerts: Sub-5-minute notification delivery
  • Cost optimization: $15-25/month total (not per Key Vault!)

How It Works: The Technical Deep Dive

1. Event Grid System Topics (The Sensors)

Azure Key Vault automatically generates events when certificates and secrets are about to expire. We create Event Grid System Topics for each Key Vault to capture these events:

Event Types Monitored:
• Microsoft.KeyVault.CertificateNearExpiry
• Microsoft.KeyVault.CertificateExpired  
• Microsoft.KeyVault.SecretNearExpiry
• Microsoft.KeyVault.SecretExpired

The beauty? These events are generated automatically by Azure – no polling, no manual checking, just real-time notifications when things are about to expire.

2. Centralized Processing (The Brain)

A single Azure Function App processes ALL events from across your organization:

// Simplified event processing flow
eventGridEvent → parseEvent() → extractMetadata() → 
formatAlert() → sendToActionGroup()

Example Alert Generated:
{
  severity: "Sev1",
  alertTitle: "Certificate Expired in Key Vault",
  description: "Certificate 'prod-ssl-cert' has expired in Key Vault 'prod-keyvault'",
  keyVaultName: "prod-keyvault",
  objectType: "Certificate",
  expiryDate: "2024-01-15T00:00:00.000Z"
}
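The exact handler code will vary with your setup, but a minimal sketch of an Event Grid-triggered function (Node.js v3 programming model, with a hypothetical ACTION_GROUP_WEBHOOK_URL app setting standing in for your notification webhook receiver) illustrates the parse → format → forward flow:

// index.js - minimal sketch of an Event Grid-triggered Azure Function (Node.js v3 model).
// ACTION_GROUP_WEBHOOK_URL is a hypothetical app setting pointing at a webhook receiver;
// the global fetch API requires Node 18+, otherwise substitute your preferred HTTP client.
module.exports = async function (context, eventGridEvent) {
  const eventType = eventGridEvent.eventType; // e.g. Microsoft.KeyVault.CertificateNearExpiry
  const data = eventGridEvent.data || {};
  const expired = eventType.endsWith('Expired');

  // Shape the alert roughly like the example above
  const alert = {
    severity: expired ? 'Sev1' : 'Sev2',
    alertTitle: `${data.ObjectType || 'Object'} ${expired ? 'Expired' : 'Near Expiry'} in Key Vault`,
    description: `${data.ObjectType} '${data.ObjectName}' in Key Vault '${data.VaultName}'`,
    keyVaultName: data.VaultName,
    objectType: data.ObjectType,
    expiryDate: data.EXP ? new Date(data.EXP * 1000).toISOString() : 'unknown',
  };

  // Forward the formatted alert for notification routing
  await fetch(process.env.ACTION_GROUP_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(alert),
  });

  context.log(`Processed ${eventType} for ${data.VaultName}`);
};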

3. Smart Notification Routing (The Messenger)

Azure Action Groups handle notification distribution with support for:

  • Email notifications (unlimited recipients)
  • SMS alerts for critical expiries
  • Webhook integration with ITSM tools (ServiceNow, Jira, etc.)
  • Voice calls for emergency situations.

Implementation: Infrastructure as Code

The entire solution is deployed using Terraform, making it repeatable and version-controlled. Here’s the high-level infrastructure:

Resource Architecture

# Single monitoring resource group
resource "azurerm_resource_group" "monitoring" {
  name     = "rg-kv-monitoring-${var.timestamp}"
  location = var.primary_location
}

# Function App (handles ALL Key Vaults)
resource "azurerm_linux_function_app" "kv_processor" {
  name                = "func-kv-monitoring-${var.timestamp}"
  service_plan_id     = azurerm_service_plan.function_plan.id
  # ... configuration
}

# Event Grid System Topics (one per Key Vault)
resource "azurerm_eventgrid_system_topic" "key_vault" {
  for_each = { for kv in var.key_vaults : kv.name => kv }
  
  name                   = "evgt-${each.key}"
  source_arm_resource_id = "/subscriptions/${each.value.subscriptionId}/resourceGroups/${each.value.resourceGroup}/providers/Microsoft.KeyVault/vaults/${each.key}"
  topic_type            = "Microsoft.KeyVault.vaults"
}

# Event Subscriptions (route events to Function App)
resource "azurerm_eventgrid_event_subscription" "certificate_expiry" {
  for_each = { for kv in var.key_vaults : kv.name => kv }
  
  azure_function_endpoint {
    function_id = "${azurerm_linux_function_app.kv_processor.id}/functions/EventGridTrigger"
  }
  
  included_event_types = [
    "Microsoft.KeyVault.CertificateNearExpiry",
    "Microsoft.KeyVault.CertificateExpired"
  ]
}

CI/CD Pipeline Integration

The solution includes an Azure DevOps pipeline that:

  1. Discovers Key Vaults across your management group automatically
  2. Generates Terraform variables with all discovered Key Vaults
  3. Deploys infrastructure using infrastructure as code
  4. Validates deployment to ensure everything works
# Simplified pipeline flow
stages:
  - stage: DiscoverKeyVaults
    # Scan management group for all Key Vaults
    
  - stage: DeployMonitoring  
    # Deploy Function App and Event Grid subscriptions
    
  - stage: ValidateDeployment
    # Ensure monitoring is working correctly

Cost Analysis: Why This Approach Wins

Traditional Approach (Per-Key Vault Monitoring)

100 Key Vaults × $20/month per KV = $2,000/month
Annual cost: $24,000

This Approach (Centralized Monitoring)

Base infrastructure: $15-25/month
Event Grid events: $2-5/month  
Total: $17-30/month
Annual cost: $204-360

Savings: 98%+ reduction in monitoring costs

Detailed Cost Breakdown

Component                 Monthly Cost   Notes
Function App (Basic B1)   $13.14         Handles unlimited Key Vaults
Storage Account           $1-3           Function runtime storage
Log Analytics             $2-15          Centralized logging
Event Grid                $0.50-2        $0.60 per million operations
Action Group              $0             Email notifications are free
Total                     $17-33         Scales to unlimited Key Vaults

Implementation Guide: Getting Started

Prerequisites

  1. Azure Management Group with Key Vaults to monitor
  2. Service Principal with appropriate permissions:
    • Reader on Management Group
    • Contributor on monitoring subscription
    • Event Grid Contributor on Key Vault subscriptions
  3. Azure DevOps or similar CI/CD platform

Step 1: Repository Setup

Create this folder structure:

keyvault-monitoring/
├── terraform/
│   ├── main.tf              # Infrastructure definitions
│   ├── variables.tf         # Configuration variables
│   ├── terraform.tfvars     # Your specific settings
│   └── function_code/       # Function App source code
├── azure-pipelines.yml      # CI/CD pipeline
└── docs/                    # Documentation

Step 2: Configuration

Update terraform.tfvars with your settings:

# Required configuration
notification_emails = [
  "your-team@company.com",
  "security@company.com"
]

primary_location = "East US"
log_retention_days = 90

# Optional: SMS for critical alerts
sms_notifications = [
  {
    country_code = "1"
    phone_number = "5551234567"
  }
]

# Optional: Webhook integration
webhook_url = "https://your-itsm-tool.com/api/alerts"

Step 3: Deployment

The pipeline automatically:

  1. Scans your management group for all Key Vaults
  2. Generates infrastructure code with discovered Key Vaults
  3. Deploys monitoring resources using Terraform
  4. Validates functionality with test events

Expected deployment time: 5-10 minutes

Step 4: Validation

Test the setup by creating a short-lived certificate:

# Create a short-lived test certificate (1-month validity)
az keyvault certificate create \
  --vault-name "your-test-keyvault" \
  --name "test-monitoring-cert" \
  --policy '{
    "issuerParameters": {"name": "Self"},
    "x509CertificateProperties": {
      "validityInMonths": 1,
      "subject": "CN=test-monitoring"
    }
  }'

# You should receive an alert within 5 minutes

Operational Excellence

Monitoring the Monitor

The solution includes comprehensive observability:

// Function App performance dashboard
FunctionAppLogs
| where TimeGenerated > ago(24h)
| summarize 
    ExecutionCount = count(),
    SuccessRate = (countif(Level != "Error") * 100.0) / count(),
    AvgDurationMs = avg(DurationMs)
| extend PerformanceScore = case(
    SuccessRate >= 99.5, "Excellent",
    SuccessRate >= 99.0, "Good", 
    "Needs Attention"
)

Advanced Features and Customizations

1. Integration with ITSM Tools

The webhook capability enables integration with enterprise tools:

// ServiceNow integration example
const serviceNowPayload = {
  short_description: `${objectType} '${objectName}' expiring in Key Vault '${keyVaultName}'`,
  urgency: severity === 'Sev1' ? '1' : '3',
  category: 'Security',
  subcategory: 'Certificate Management',
  caller_id: 'keyvault-monitoring-system'
};

2. Custom Alert Routing

Different Key Vaults can route to different teams:

// Route alerts based on Key Vault naming convention
const getNotificationGroup = (keyVaultName) => {
  if (keyVaultName.includes('prod-')) return 'production-team';
  if (keyVaultName.includes('dev-')) return 'development-team';
  return 'platform-team';
};

3. Business Hours Filtering

Critical alerts can bypass business hours, while informational alerts respect working hours:

const shouldSendImmediately = (severity, currentTime) => {
  if (severity === 'Sev1') return true; // Always send critical alerts
  
  const businessHours = isBusinessHours(currentTime);
  return businessHours || isNearBusinessHours(currentTime, 2); // 2 hours before business hours
};

Troubleshooting Common Issues

Issue: No Alerts Received

Symptoms:

Events are visible in Azure, but no notifications are arriving

Resolution Steps:

  1. Check the Action Group configuration in the Azure Portal
  2. Verify the Function App is running and healthy
  3. Review Function App logs for processing errors
  4. Validate Event Grid subscription is active

Issue: High Alert Volume

Symptoms:

Too many notifications, alert fatigue

Resolution:

// Implement intelligent batching
const batchAlerts = (alerts, timeWindow = '15m') => {
  return alerts.reduce((batches, alert) => {
    const key = `${alert.keyVaultName}-${alert.objectType}`;
    batches[key] = batches[key] || [];
    batches[key].push(alert);
    return batches;
  }, {});
};

Issue: Missing Key Vaults

Symptoms: Some Key Vaults are not included in monitoring

Resolution:

  1. Re-run the discovery pipeline to pick up new Key Vaults
  2. Verify service principal has Reader access to all subscriptions
  3. Check for Key Vaults in subscriptions outside the management group
Part 2: Implementing Azure Virtual WAN – A Practical Walkthrough

In Part 1, we discussed what Azure Virtual WAN is and why it’s a powerful solution for global networking. Now, let’s get hands-on and walk through the actual implementation—step by step, in a simple, conversational way.

[Diagram: Azure Virtual WAN implementation architecture]

1. Creating the Virtual WAN – The Network’s Control Plane

Virtual WAN is the heart of a global network, not just another resource. It replaces isolated VPN gateways per region, manual ExpressRoute configurations, and complex peering relationships.

Setting it up is easy:

  • Navigate to Azure Portal → Search “Virtual WAN”
  • Click Create and configure.
  • Name: Choose a clear, consistent name; naming conventions matter in enterprise environments
  • Resource Group: Create new rg-network-global (best practice for lifecycle management)
  • Type: Standard (Basic lacks critical features like ExpressRoute support)

Azure will set up the Virtual WAN in a few seconds. Now, the real fun begins.

2. Setting Up the Virtual WAN Hub – The Heart of The Network

The hub is where all connections converge. It’s like a major airport hub where traffic from different locations meets and gets efficiently routed. Without a hub, you’d need to configure individual gateways for every VPN and ExpressRoute connection, leading to higher costs and management overhead.

  • Navigate to the Virtual WAN resource → Click Hubs → New Hub.
  • Configure the Hub.
  • Region: Choose based on primary user locations and Azure service availability (some regions lack certain services)
  • Address Space: Assign a private IP range (e.g., 10.100.0.0/24).

Wait for deployment; this takes about 30 minutes (Azure is building VPN gateways, ExpressRoute gateways, and more behind the scenes).

Once done, the hub is ready to connect everything: offices, cloud resources, and remote users.

3. Connecting Offices via Site-to-Site VPN – Building Secure Tunnels

Branches and data centres need a reliable, encrypted connection to Azure. Site-to-Site VPN provides this over the public internet while keeping data secure. Without VPN tunnels, branch offices would rely on slower, less secure internet connections to access cloud resources, increasing latency and security risks.

  • In the Virtual WAN Hub, go to VPN (Site-to-Site) → Create VPN Site.
  • Name: branch-nyc-01
  • Private Address Space: e.g., 192.168.100.0/24 (must match on-premises network)
  • Link Speed: Set accurately for Azure’s QoS calculations
  • Download VPN Configuration: Azure provides a config file—apply it to the office’s VPN device (like a Cisco or Fortinet firewall).
  • Lastly, connect the VPN Site to the Hub.
  • Navigate to VPN connections → Create connection → Link the office to the hub.

Now, the office and Azure are securely connected.

4. Adding ExpressRoute – The Private Superhighway

For critical applications (like databases or ERP systems), VPNs might not provide enough bandwidth or stability. ExpressRoute gives us a dedicated, high-speed connection that bypasses the public internet. Without ExpressRoute, latency-sensitive applications (like VoIP or real-time analytics) could suffer from internet congestion or unpredictable performance.

  • Order an ExpressRoute Circuit: We can do this via the Azure Portal or through an ISP (like AT&T or Verizon).
  • Authorize the Circuit in Azure
  • Navigate to the Virtual WAN Hub → ExpressRoute → Authorize.
  • Linking it to Hub: Once it is authorized, connect the ExpressRoute circuit to the hub.

Now, the on-premises network has a dedicated, high-speed connection to Azure—no internet required.

5. Enabling Point-to-Site VPN for Remote Workers – The Digital Commute

Employees working from home need secure access to internal apps without exposing them to the public internet. P2S VPN lets them “dial in” securely from anywhere. Without P2S VPN, remote workers might resort to risky workarounds like exposing RDP or databases to the internet.

  • Configure P2S in The Hub
  • Navigate to VPN (Point-to-Site) → Configure.
  • Set Up Authentication: Choose certificate-based auth (secure and easy to manage) and upload the root/issuer certificates.
  • Assign an IP pool, e.g., 192.168.100.0/24 (this is where remote users will get their IPs; make sure the range does not overlap with your on-premises or VNet address spaces).
  • Download & Distribute the VPN Client

Employees install this on their laptops to connect securely. Now, the team can access Azure resources from anywhere just like they’re in the office.

6. Linking Azure Virtual Networks (VNets) – The Cloud’s Backbone

Applications in one VNet (e.g., frontend servers) often need to talk to another (e.g., databases). Rather than complex peering, the Virtual WAN handles routing automatically. Without VNet integration, it needs manual peering and route tables for every connection, creating a management nightmare at scale.

  • VNets need to be attached.
  • Navigate to The Hub → Virtual Network Connections → Add Connection.
  • Select the VNets. e.g., Connect vnet-app (for applications) and vnet-db (for databases).
  • Azure handles the Routing: Traffic flows automatically through the hub-no manual route tables needed.

Now, the cloud resources communicate seamlessly.

Monitoring & Troubleshooting

Networks aren’t “set and forget.” We need visibility to prevent outages and quickly fix issues. We can use tools like Azure Monitor, which tracks VPN/ExpressRoute health—like a dashboard showing all trains (data packets) moving smoothly. Network Watcher, meanwhile, helps diagnose why a branch can’t connect.

Common Problems & Fixes

  • When VPN connections fail, the problem is often a mismatched shared key—simply re-enter it on both ends.
  • If ExpressRoute goes down, check with your ISP—circuit issues usually require provider intervention.
  • When VNet traffic gets blocked, verify route tables in the hub—missing routes are a common culprit.
Invoke the Mapbox Geocoding API to Populate the Location Autocomplete Functionality

While working on one of my projects, I needed to implement an autocomplete box using Mapbox Geocoding APIs in a React/Next.js application. The goal was to filter a list of hospitals based on the selected location. The location results from the API include coordinates, which I compared with the coordinates of the hospitals in my list.

The API returns various properties, including coordinates, under the properties section (as shown in the image below). These coordinates (latitude and longitude) can be used to filter the hospital list by matching them with the selected location.

[Screenshot: Mapbox geocoding result properties]
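As an illustration of that filtering step, the sketch below keeps only the hospitals within a given distance of the selected location using the haversine formula; the Hospital shape and the 25 km radius are hypothetical placeholders:

type Hospital = { name: string; latitude: number; longitude: number };

// Great-circle distance in kilometers between two coordinates (haversine formula)
const distanceKm = (lat1: number, lon1: number, lat2: number, lon2: number): number => {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
};

// Keep only hospitals within `radiusKm` of the location selected in the autocomplete
const filterHospitals = (
  hospitals: Hospital[],
  selected: { latitude: number; longitude: number },
  radiusKm = 25
): Hospital[] =>
  hospitals.filter(
    (h) => distanceKm(selected.latitude, selected.longitude, h.latitude, h.longitude) <= radiusKm
  );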

The API requires an access token, which can be obtained by signing up on the Mapbox platform. You can refer to the Geocoding API documentation for more details. The documentation provides a variety of APIs that can be used depending on your specific requirements.

Below are some example APIs taken from the same link.

# A basic forward geocoding request
# Find Los Angeles

curl "https://api.mapbox.com/search/geocode/v6/forward?q=Los%20Angeles&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

# Find a town called 'Chester' in a specific region
# Add the proximity parameter with local coordinates
# This ensures the town of Chester, New Jersey is in the results

curl "https://api.mapbox.com/search/geocode/v6/forward?q=chester&proximity=-74.70850,40.78375&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

# Specify types=country to search only for countries named Georgia
# Results will exclude the American state of Georgia

curl "https://api.mapbox.com/search/geocode/v6/forward?q=georgia&types=country&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

# Limit the results to two results using the limit option
# Even though there are many possible matches
# for "Washington", this query will only return two results.

curl "https://api.mapbox.com/search/geocode/v6/forward?q=Washington&limit=2&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

# Search for the Place feature "Kaaleng" in the Ilemi Triangle.
# Specifying the cn worldview will return the country value South Sudan.
# Omitting the worldview parameter defaults to the us worldview
# and returns the country value Kenya.

curl "https://api.mapbox.com/search/geocode/v6/forward?q=Kaaleng&worldview=cn&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

The implementation leverages React hooks along with state management for handling component behavior and data flow.

How to Create an Autocomplete Component in React

  1. Create a React component.
  2. Sign up and apply the access token and API URL to the constants.
  3. Create a type to bind the structure of the API response results.
  4. Use the useEffect hook to invoke the API.
  5. Map the fetched results to the defined type.
  6. Apply CSS to style the component and make the autocomplete feature visually appealing.
#constants.ts

export const APIConstants = {
  accessToken: 'YOUR_MAPBOX_ACCESS_TOKEN',
  geoCodeSearchForwardApiUrl: 'https://api.mapbox.com/search/geocode/v6/forward',
  searchWordCount: 3,
};
#LocationResultProps.ts

type Suggestion = {
  properties: {
    feature_type: string;
    full_address: string;
    name: string;
    name_preferred: string;
    coordinates: {
      longitude: number;
      latitude: number;
    };
  };
};
export type LocationResults = {
  features: Array<Suggestion>;
};
#Styles.ts

export const autoComplete = {
  container: {
    width: '250px',
    margin: '20px auto',
  },
  input: {
    width: '100%',
    padding: '10px',
    fontSize: '16px',
    border: '1px solid #ccc',
    borderRadius: '4px',
  },
  dropdown: {
    top: '42px',
    left: '0',
    right: '0',
    backgroundColor: '#fff',
    border: '1px solid #ccc',
    borderTop: 'none',
    maxHeight: '150px',
    listStyleType: 'none',
    padding: '0',
    margin: '0',
    zIndex: 1000,
  },
  item: {
    padding: '5px',
    cursor: 'pointer',
    borderBottom: '1px solid #eee',
  },
};

#LocationSearchInput.tsx

import React, { useEffect, useState } from 'react';
import { APIConstants } from 'lib/constants';
import { autoComplete } from '../Styles';
import { LocationResults } from 'lib/LocationResultProps';

export const Default = (): JSX.Element => {
  const apiUrlParam: string[][] = [
    //['country', 'us%2Cpr'],
    ['types', 'region%2Cpostcode%2Clocality%2Cplace%2Cdistrict%2Ccountry'],
    ['language', 'en'],
    //['worldview', 'us'],
  ];

  const [inputValue, setInputValue] = useState<string>('');
  const [results, setresults] = useState<LocationResults>();
  const [submitted, setSubmitted] = useState<boolean>(false);

  // When the input changes, reset the "submitted" flag.
  const handleChange = (value: string) => {
    setSubmitted(false);
    setInputValue(value);
  };
  const handleSubmit = (value: string) => {
    setSubmitted(true);
    setInputValue(value);
  };

  // Fetch results when the input value changes
  useEffect(() => {
    if (inputValue.length < APIConstants?.searchWordCount) {
      setresults(undefined);
      return;
    }
    if (submitted) {
      return;
    }
    const queryInputParam = [
      ['q', inputValue],
      ['access_token', APIConstants?.accessToken ?? ''],
    ];

    const fetchData = async () => {
      const queryString = apiUrlParam
        .concat(queryInputParam)
        .map((inner) => inner.join('='))
        .join('&');
      const url = APIConstants?.geoCodeSearchForwardApiUrl + '?' + queryString;

      try {
        const response: LocationResults = await (await fetch(url)).json();
        setresults(response);
        console.log(response);
      } catch (err: unknown) {
        console.error('Error obtaining location results for autocomplete', err);
      }
    };

    fetchData();
  }, [inputValue]);

  return (
    <div>
      <div style={autoComplete.container}>
        <input
          style={autoComplete.input}
          onChange={(e) => handleChange(e.target?.value)}
          value={inputValue}
          placeholder="Find Location"
        />

        {inputValue && !submitted && results?.features && (
          <ul style={autoComplete.dropdown}>
            {results.features.map((x, index) => (
              <li key={index} style={autoComplete.item}>
                <span onClick={() => handleSubmit(x?.properties?.full_address)}>
                  {x?.properties?.full_address}
                </span>
              </li>
            ))}
          </ul>
        )}
      </div>
    </div>
  );
};

Finally, we can search for a location using a zip code, state, or country.


Additionally, the reverse geocoding API is used similarly, requiring only minor adjustments to the parameters and API URL. The location autocomplete box offers a wide range of use cases. It can be integrated into user forms such as registration or contact forms, where exact location coordinates or a full address need to be captured upon selection. Each location result includes various properties. Based on the user’s input, whether it’s a city, ZIP code, or state, the autocomplete displays matching results.
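As a rough sketch of that reverse flow, a fetch call along these lines converts coordinates back into an address; the endpoint and parameter names here follow the v6 reverse geocoding documentation and should be verified against the docs for your account:

// Reverse geocoding sketch: coordinates -> nearest address/place.
// Endpoint and parameter names follow the v6 docs; verify them for your API version.
const reverseGeocode = async (longitude: number, latitude: number) => {
  const params = new URLSearchParams({
    longitude: String(longitude),
    latitude: String(latitude),
    access_token: 'YOUR_MAPBOX_ACCESS_TOKEN',
  });
  const url = `https://api.mapbox.com/search/geocode/v6/reverse?${params.toString()}`;

  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Reverse geocoding failed: ${response.status}`);
  }
  const result = await response.json();
  // The full address is available under features[n].properties, as with forward geocoding
  return result?.features?.[0]?.properties?.full_address as string | undefined;
};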

 

Smart Failure Handling in HCL Commerce with Circuit Breakers

In modern enterprise systems, stability and fault tolerance are not optional; they are essential. One proven approach to ensure robustness is the Circuit Breaker pattern, widely used in API development to prevent cascading failures. HCL Commerce takes this principle further by embedding circuit breakers into its HCL Cache to effectively manage Redis failures.

 What Is a Circuit Breaker?
The Circuit Breaker is a design pattern commonly used in API development to stop continuous requests to a service that is currently failing, thereby protecting the system from further issues. It helps maintain system stability by detecting failures and stopping the flow of requests until the issue is resolved.

The circuit breaker typically operates in three main (or “normal”) states, which are part of the standard Circuit Breaker design pattern.

Normal States:

  1. CLOSED:
  • At the start, the circuit breaker allows all outbound requests to external services without restrictions.
  • It monitors the success and failure of these calls.
  2. OPEN:
  • The circuit breaker rejects all external calls.
  • This state is triggered when the failure threshold is reached (e.g., 50% failure rate).
  • It remains in this state for a specified duration (e.g., 60 seconds).
  3. HALF_OPEN:
  • After the wait duration in the OPEN state, the circuit breaker transitions to HALF_OPEN.
  • It allows a limited number of calls to check if the external service has recovered.
  • If these calls succeed (e.g., receive a 200 status), the circuit breaker transitions back to CLOSED.
  • If the error rate continues to be high, the circuit breaker reverts to the OPEN state.
[Diagram: Circuit breaker pattern with normal states]

Special States:

  1. FORCED_OPEN:
  • The circuit breaker is manually set to reject all external calls.
  • No calls are allowed, regardless of the external service’s status.
  2. DISABLED:
  • The circuit breaker is manually set to allow all external calls.
  • It does not monitor or track the success or failure of these calls.
[Diagram: Circuit breaker pattern with special states]

Circuit Breaker in HCL Cache (for Redis)

In HCL Commerce, the HCL Cache layer interacts with Redis for remote caching. But what if Redis becomes unavailable or slow? HCL Cache uses circuit breakers to detect issues and temporarily stop calls to Redis, thus protecting the rest of the system from being affected.

Behavior Overview:

  • If 20 consecutive failures occur in 10 seconds, the Redis connection is cut off.
  • The circuit remains open for 60 seconds.
  • At this stage, the circuit enters a HALF_OPEN state, where it sends limited test requests to evaluate if the external service has recovered.
  • If even 2 of these test calls fail, the circuit reopens for another 60 seconds.

Configuration Snapshot

To manage Redis outages effectively, HCL Commerce provides fine-grained configuration settings for both Redis client behavior and circuit breaker logic. These settings are defined in the Cache YAML file, allowing teams to tailor fault-handling based on their system’s performance and resilience needs.

 Redis Request Timeout Configuration

Slow Redis responses are not treated as failures unless they exceed the defined timeout threshold. The Redis client in HCL Cache supports timeout and retry configurations to control how persistent the system should be before declaring a failure:

timeout: 3000           # Max time (in ms) to wait for a Redis response
retryAttempts: 3        # Number of retry attempts on failure
retryInterval: 1500     # Delay (in ms) between each retry attempt

With the above configuration, the system will spend up to 16.5 seconds (3000 + 3 × (3000 + 1500)) trying to get a response before returning a failure. While these settings offer robustness, overly long retries can result in delayed user responses or log flooding, so tuning is essential.

Circuit Breaker Configuration

Circuit breakers are configured under the redis.circuitBreaker section of the Cache YAML file. Here’s an example configuration:

redis:
  circuitBreaker:
    scope: auto
    retryWaitTimeMs: 60000
    minimumFailureTimeMs: 10000
    minimumConsecutiveFailures: 20
    minimumConsecutiveFailuresResumeOutage: 2 
cacheConfigs:
  defaultCacheConfig:
    localCache:
      enabled: true
      maxTimeToLiveWithRemoteOutage: 300

Explanation of Key Fields:

  • scope: auto: Automatically determines whether the circuit breaker operates at the client or cache/shard level, depending on the topology.
  • retryWaitTimeMs (Default: 60000): Time to wait before attempting Redis connections after circuit breaker is triggered.
  • minimumFailureTimeMs (Default: 10000): Minimum duration during which consecutive failures must occur before opening the circuit.
  • minimumConsecutiveFailures (Default: 20): Number of continuous failures required to trigger outage mode.
  • minimumConsecutiveFailuresResumeOutage (Default: 2): Number of failures after retrying that will put the system back into outage mode.
  • maxTimeToLiveWithRemoteOutage: During Redis outages, local cache entries use this TTL value (in seconds) to serve data without invalidation messages.

Real-world Analogy

Imagine you have a web service that fetches data from an external API. Here’s how the circuit breaker would work (a minimal code sketch follows this list):

  1. CLOSED: The service makes calls to the API and monitors the responses.
  2. OPEN: If the API fails too often (e.g., 50% of the time), the circuit breaker stops making calls for 60 seconds.
  3. HALF_OPEN: After 60 seconds, the circuit breaker allows a few calls to the API to see if it’s working again.
  4. CLOSED: If the API responds successfully, the circuit breaker resumes normal operation.
  5. OPEN: If the API still fails, the circuit breaker stops making calls again and waits.
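To make those transitions concrete, here is a minimal, generic sketch of the pattern in JavaScript. This is illustrative only, not HCL Cache’s internal implementation; the failure threshold and wait time are placeholders:

// Minimal circuit breaker sketch (illustrative only, not HCL Cache's implementation).
class CircuitBreaker {
  constructor(call, { failureThreshold = 5, waitTimeMs = 60000 } = {}) {
    this.call = call;                  // the protected external call
    this.failureThreshold = failureThreshold;
    this.waitTimeMs = waitTimeMs;
    this.state = 'CLOSED';
    this.failures = 0;
    this.openedAt = 0;
  }

  async invoke(...args) {
    if (this.state === 'OPEN') {
      if (Date.now() - this.openedAt < this.waitTimeMs) {
        throw new Error('Circuit is OPEN - call rejected');
      }
      this.state = 'HALF_OPEN';        // wait elapsed: allow a trial call
    }
    try {
      const result = await this.call(...args);
      this.state = 'CLOSED';           // trial (or normal) call succeeded
      this.failures = 0;
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.state === 'HALF_OPEN' || this.failures >= this.failureThreshold) {
        this.state = 'OPEN';           // back to outage mode
        this.openedAt = Date.now();
      }
      throw err;
    }
  }
}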

Final Thought

By combining the classic circuit breaker pattern with HCL Cache’s advanced configuration, HCL Commerce ensures graceful degradation during Redis outages. It’s not just about availability—it’s about intelligent fault recovery.

For more detailed information, you can refer to the official documentation here:
🔗 HCL Commerce Circuit Breakers – Official Docs

How to Setup Nwayo Preprocessor in Magento 2

What is Nwayo?

Nwayo Preprocessor is an extendable front-end boilerplate designed to streamline development for multi-theme, multi-site, and multi-CMS front-end frameworks. It provides an efficient workflow for building responsive, scalable, and maintainable web themes across different platforms. 

In Magento 2, Nwayo can be particularly beneficial for front-end developers as it simplifies the theme deployment process. With just a single change in the Sass files, the framework can automatically regenerate and apply updates across the site. This approach not only accelerates the development process but also ensures consistency in the front-end experience across various themes and websites. 

Benefits of Using Nwayo Preprocessor

Time-Saving and Efficiency

  •  Nwayo automates the process of compiling and deploying front-end code, particularly Sass to CSS, with just a few commands. This allows developers to focus more on building and refining features rather than managing repetitive tasks like manual builds and deployments.                                                                                                            

Scalability Across Multi-Site and Multi-Theme Projects

  • Nwayo is designed to handle multi-site and multi-theme environments, which is common in complex platforms like Magento 2. This allows developers to easily maintain and apply changes across different sites and themes without duplicating efforts, making it ideal for large-scale projects.                                                                                   

Consistency and Maintainability

  • By centralizing code management and automating build processes, Nwayo ensures that all updates made in Sass files are applied consistently throughout the project. This helps in maintaining a uniform look and feel across different sections and themes, reducing the risk of human error and improving maintainability.                                                                                                                                                                                                                                        

Flexibility and Extensibility

Nwayo is highly extensible, allowing developers to tailor the boilerplate to their specific project needs. Whether it’s adding new workflows, integrating with different CMS platforms, or customizing the theme, Nwayo provides a flexible framework that can adapt to various front-end requirements.                                             

Version Control and Updates

With built-in commands to check versions and install updates, Nwayo makes it easy to keep the workflow up to date. This ensures compatibility with the latest development tools and standards, helping developers stay current with front-end best practices.  

Requirements to Set Up Nwayo

i) Node.js

ii) Nwayo CLI

How to Set Up Nwayo Preprocessor in Magento 2?

Run the commands in your project root folder

Step 1

  • To set up the boilerplate for the project
  • npx @absolunet/nwayo-grow-project

Step 2

  • Install workflow and vendor (in the Nwayo root folder)
  • npm install

Step 3

  • Install CLI (in the Nwayo root folder) 
  • npm install -g @absolunet/nwayo-cli

 Step 4

  • Install Nwayo Workflow (in the Nwayo folder) 
  • nwayo install workflow

Step 5

  • Run the project (in the Nwayo folder) 
  • (It will convert Sass to CSS) 
  • nwayo run watch

Step 6

  • Build the project (in the Nwayo folder) 
  • (It will build the Sass Files) 
  • nwayo rebuild


Magento 2 Integration

Nwayo integrates seamlessly with Magento 2, simplifying the process of managing multi-theme, multi-site environments. Automating Sass compilation and CSS generation allows developers to focus on custom features without worrying about the manual overhead of styling changes. With Nwayo, any updates to your Sass files are quickly reflected across your Magento 2 themes, saving time and reducing errors. 

Compatibility with Other Frameworks and CMS

Nwayo is a versatile tool designed to work with various front-end frameworks and CMS platforms. Its extendable architecture allows it to be used beyond Magento 2, providing a unified front-end development workflow for multiple environments. Some of the other frameworks and platforms that Nwayo supports include: 

1. WordPress

Nwayo can be easily adapted to work with WordPress themes. Since WordPress sites often rely on custom themes, Nwayo can handle Sass compilation and make theme management simpler by centralizing the CSS generation process for various stylesheets used in a WordPress project. 

2. Drupal

For Drupal projects, Nwayo can streamline theme development, allowing developers to work with Sass files while ensuring CSS is consistently generated across all Drupal themes. This is especially helpful when maintaining multi-site setups within Drupal, as it can reduce the time needed for theme updates. 

3. Laravel

When working with Laravel-based applications that require custom front-end solutions, Nwayo can automate the build process for Sass files, making it easier to manage the styles for different views and components within Laravel Blade templates. It helps keep the front-end codebase clean and optimized. 

4. Static Site Generators (Jekyll, Hugo, etc.)

Nwayo can also be used in static site generators like Jekyll or Hugo. In these setups, it handles the styling efficiently by generating optimized CSS files from Sass. This is particularly useful when you need to manage themes for static websites where speed and simplicity are key priorities. 

Framework-Agnostic Features

Nwayo’s CLI and Sass-based workflow can be customized to work in nearly any front-end project, regardless of the underlying CMS or framework. This makes it suitable for developers working on custom projects where there’s no predefined platform, allowing them to benefit from a consistent and efficient development workflow across different environments. 

Performance and Optimization

Nwayo includes several built-in features for optimizing front-end assets: 

  • Minification of CSS files: Ensures that the final CSS output is as small and efficient as possible, helping to improve page load times. 
  • Code Splitting: Allows developers to load only the required CSS for different pages or themes, reducing the size of CSS payloads and improving site performance. 
  • Automatic Prefixing: Nwayo can automatically add vendor prefixes for different browsers, ensuring cross-browser compatibility without manual adjustments.              
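Nwayo wires these optimizations into its own workflow. Purely as a point of reference, the equivalent minification and prefixing steps in a standalone PostCSS setup look roughly like the sketch below; this is a generic example, not Nwayo’s internal configuration:

// postcss.config.js - generic standalone example of CSS optimization steps
module.exports = {
  plugins: [
    require('autoprefixer'),                    // add vendor prefixes automatically
    require('cssnano')({ preset: 'default' }),  // minify the final CSS output
  ],
};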

Custom Workflow Adaptation

Nwayo’s modular architecture allows developers to easily add or remove features from the workflow. Whether you’re working with React, Vue, or other JavaScript frameworks, Nwayo’s preprocessor can be extended to fit the unique requirements of any project. 

Example Framework Compatibility Diagram

The diagram below shows Nwayo’s compatibility with different frameworks and CMS platforms:

[Diagram: Framework compatibility overview]

This visual table makes it clear which frameworks Nwayo supports, giving developers an overview of its flexibility. 

10 Useful Nwayo Preprocessor Commands 

In addition to the basic commands for setting up and managing Nwayo in your project, here are other helpful commands you can use for various tasks:                                                                                                                                           

1. Check Nwayo Version


This command allows you to verify the currently installed version of Nwayo in your environment. 

2. Install Vendors 


Installs third-party dependencies required by the Nwayo workflow, making sure your project has all the necessary assets to function correctly. 

3. Remove Node Modules 


This command clears the node_modules folder, which may be helpful if you’re facing dependency issues or need to reinstall modules. 

4. Build the Project 


Runs a complete build of the project, compiling all Sass files into CSS. This is typically used when preparing a project for production.

5. Watch for File Changes 


Watches for changes in your Sass files and automatically compiles them into CSS. This is useful during development when you want real-time updates without having to manually trigger a build. 

6. Linting (Check for Code Quality) 


Checks your Sass files for code quality and best practices using predefined linting rules. This helps ensure that your codebase follows consistent styling and performance guidelines. 

7. Clean Build Artifacts 


Removes generated files (CSS, maps, etc.) to ensure that you’re working with a clean project. This can be useful when preparing for a fresh build.

8. Generate Production-Ready CSS


This command builds the project in production mode, minifying CSS files and optimizing them for faster load times.

9. List Available Commands


Displays all available commands, providing a quick reference for tasks that can be executed via the Nwayo CLI.

10. Nwayo Configurations (View or Edit) 


Allows you to view or modify the configuration settings for your Nwayo setup, such as output paths or preprocessing options.

By utilizing these commands, you can take full advantage of Nwayo’s features and streamline your front-end development workflow in Magento 2 or other compatible frameworks.

For a complete list of commands, visit the Nwayo CLI Documentation.

Reference Links

For more detailed information and official documentation on Nwayo, visit the following resources:

  1. Nwayo Official Documentation
    https://documentation.absolunet.com/nwayo/
    This is the official guide to setting up and using Nwayo. It includes installation instructions, supported commands, and best practices for integrating Nwayo with various frameworks, including Magento 2.
  2. Nwayo GitHub Repository
    https://github.com/absolunet/nwayo
    The GitHub repository provides access to the Nwayo source code, release notes, and additional resources for developers looking to contribute or understand the inner workings of the tool.
  3. Nwayo CLI Documentation
    https://npmjs.com/package/@absolunet/nwayo-cli
    This page details the Nwayo CLI, including installation instructions, supported commands, and usage examples.

Conclusion

In conclusion, using Nwayo code can significantly simplify the development process, allowing developers to focus on building unique features rather than spending time on repetitive tasks. By utilizing existing code templates and libraries, developers can save time and improve their productivity.

Why Value-Based Care Needs Digital Transformation to Succeed

The pressure is on for healthcare organizations to deliver more—more value, more equity, more impact. That’s where a well-known approach is stepping back into the spotlight.

If you’ve been around healthcare conversations lately, you’ve probably heard the term value-based care making a resurgence. And there’s a good reason for that. It’s not just a buzzword—it’s reshaping how we think about health, wellness, and the entire care experience.

What Is Value-Based Care, Really?

At its core, value-based care is a shift away from the old-school fee-for-service model, where providers got paid for every test, procedure, or visit, regardless of whether it actually helped the patient. Instead, value-based care rewards providers for delivering high-quality, efficient care that leads to better health outcomes.

It’s not about how much care is delivered, it’s about how effective that care is.

This shift matters because it places patients at the center of everything. It’s about making sure people get the right care, at the right time, in the right setting. That means fewer unnecessary tests, fewer duplicate procedures, and less of the fragmentation that’s plagued the system for decades.

The results? Better experiences for patients. Lower costs. Healthier communities.

Explore More: Access to Care Is Evolving: What Consumer Insights and Behavior Models Reveal

Benefits and Barriers of Value-Based Care in Healthcare Transformation

There’s a lot to be excited about, and for good reason! When we focus on prevention, chronic disease management, and whole-person wellness, we can avoid costly hospital stays and emergency room visits. That’s not just good for the healthcare system, it’s good for people, families, and communities. It moves us closer to the holy grail in healthcare: the quintuple aim. Achieving it means delivering better outcomes, elevating experiences for both patients and clinicians, reducing costs, and advancing health equity.

The challenge? Turning value-based care into a scalable, sustainable reality isn’t easy.

Despite more than a decade of pilots, programs, and well-intentioned reforms, only a small number of healthcare organizations have been able to scale their value-based care models effectively. Why? Because many still struggle with some pretty big roadblocks—like outdated technology, disconnected systems, siloed data, and limited ability to manage risk or coordinate care.

That’s where digital transformation comes in.

To make value-based care real and sustainable, healthcare organizations are rethinking their infrastructure from the ground up. They’re adopting cloud-based platforms and interoperable IT systems that allow for seamless data exchange across providers, payers, and patients. They’re tapping into advanced analytics, intelligent automation, and AI to identify at-risk patients, personalize care, and make smarter decisions faster.

As organizations work to enable VBC through digital transformation, it’s critical to really understand what the current research says. Our recent study, Access to Care: The Digital Imperative for Healthcare Leaders, backs up these trends, showing that digital convenience is no longer a differentiator—it’s a baseline expectation.

Findings show that nearly half of consumers have opted for digital-first care instead of visiting their regular physician or provider.

This shift highlights how important it is to offer simple and intuitive self-service digital tools that help people get what they need—fast. When it’s easy to find and access care, people are more likely to trust you, stick with you, and come back when they need you again.

You May Also Enjoy: How Innovative Healthcare Organizations Integrate Clinical Intelligence

Redesigning Care Models for a Consumer-Centric, Digitally Enabled Future

Care models are also evolving. Instead of reacting to illness, we’re seeing a stronger focus on prevention, early intervention, and proactive outreach. Consumer-centric tools like mobile apps, patient portals, and personalized health reminders are becoming the norm, not the exception. It’s all part of a broader movement to meet people where they are and give them more control over their health journey.

But here’s an important reminder: none of these efforts work in a vacuum.

Value-based care isn’t just a technology upgrade or a process tweak. It’s a cultural shift.

Success requires aligning people, processes, data, and technology in a way that’s intentional and strategic. It’s about creating an integrated system that’s designed to improve outcomes and then making those improvements stick.

So, while the road to value-based care may be long and winding, the destination is worth it. It’s not just a different way of delivering care—it’s a smarter, more sustainable one.

Success In Action: Empowering Healthcare Consumers and Their Care Ecosystems With Interoperable Data

Reimagine Healthcare Transformation With Confidence

If you’re exploring how to modernize your digital front door, consider starting with a strategic assessment. Align your goals, audit your content, and evaluate your tech stack. The path to better outcomes starts with a smarter, simpler way to help patients find care.

We combine strategy, industry best practices, and technology expertise to deliver award-winning results for leading healthcare organizations.

  • Business Transformation: Activate strategy for transformative outcomes and health experiences.
  • Modernization: Maximize technology to drive health innovation, efficiency, and interoperability.
  • Data + Analytics: Power enterprise agility and accelerate healthcare insights.
  • Consumer Experience: Connect, ease, and elevate impactful health journeys.

Our approach to designing and implementing AI and machine learning (ML) solutions promotes secure and responsible adoption and ensures demonstrated and sustainable business value.

Discover why we have been trusted by the 10 largest health systems and the 10 largest health insurers in the U.S. Explore our healthcare expertise and contact us to learn more.

End-to-End DevSecOps in CI/CD Pipelines: Build Secure Apps with SAST, DAST, and Azure DevOps

Introduction to DevSecOps

DevSecOps is the evolution of DevOps with a focused integration of security throughout the software development lifecycle (SDLC). It promotes a cultural and technical shift by “shifting security left,”  integrating security early in the CI/CD pipeline instead of treating it as an afterthought.


While DevOps engineers focus on speed, automation, and reliability, DevSecOps engineers share the same goals with an added responsibility: ensuring security at every stage of the process.

DevSecOps = Development + Security + Operations

By embedding security from the beginning, DevSecOps enables organizations to build secure software faster, reduce costs, and minimize risks.

Why Shift Left with Security?


Shifting security left means embedding security checks earlier in the pipeline. This approach offers several key benefits:

 

 

  • Early Detection: Identifies vulnerabilities before they reach production.
  • Cost Savings: Fixing security issues in earlier phases of development is significantly more cost-effective.

  • Reduced Risk: Early intervention helps prevent critical vulnerabilities from being deployed.

Implementing DevSecOps in an Existing CI/CD Pipeline

Prerequisites

To implement DevSecOps in your Azure DevOps pipeline, ensure the following infrastructure is in place:

  • Azure VM (for self-hosted Azure DevOps agent)

  • Azure Kubernetes Service (AKS)

  • Azure Container Registry (ACR)

  • Azure DevOps project and repository

  • SonarQube (for static code analysis)

Service Connections Setup

1. Docker Registry Connection

  • Go to Azure DevOps → Project Settings → Service Connections.

  • Click “New service connection” → Select Docker Registry.

  • Choose Docker Hub or ACR.

  • Provide Docker ID/Registry URL and credentials.

  • Verify and save the connection.

2. AKS Service Connection

  • Azure DevOps → Project Settings → Service Connections.

  • Click “New service connection” → Select Azure Resource Manager.

  • Use Service Principal (automatic).

  • Select your subscription and AKS resource group.

  • Name the connection and save.

3. SonarQube Service Connection

  • Azure DevOps → Project Settings → Service Connections.

  • New service connection → SonarQube.

  • Input the Server URL and token.

  • Save and verify.

Main Features Covered in DevSecOps Pipeline

[Diagram: DevSecOps pipeline stages]

  • Secret Scanning

  • Dependency Scanning (SCA)

  • Static Code Analysis (SAST)

  • Container Image Scanning

  • DAST (Dynamic Application Security Testing)

  • Quality Gates Enforcement

  • Docker Build & Push

  • AKS Deployment

Pipeline Stages Overview

1. Secret Scanning


Tools

detect-secrets, Trivy

Steps

  • Install Python and detect-secrets.

  • Scan source code for hardcoded secrets.

  • Run Trivy with --security-checks secret.

  • Save results as HTML → Publish to pipeline artifacts.

  • Apply quality gates to fail builds on critical secrets.

2. Dependency Scanning (SCA)


Tools

Safety, Trivy

Steps

  • Use requirements.txt for dependencies.

  • Run Safety to identify known vulnerabilities.

  • Scan the filesystem using Trivy fs.

  • Publish results.

  • Fail the pipeline if critical vulnerabilities exceed the threshold (a sample gate script follows this list).
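One way to implement these quality gates is a small script that parses the scanner’s JSON report and fails the build when the threshold is exceeded. The sketch below assumes Trivy’s JSON output format (a Results array whose entries carry Vulnerabilities with a Severity field) and a hypothetical report path and threshold; adjust both to your pipeline:

// quality-gate.js - fail the pipeline if critical findings exceed a threshold.
// Assumes Trivy was run with: trivy fs --format json --output trivy-report.json .
const fs = require('fs');

const THRESHOLD = 0;                         // allowed number of CRITICAL findings
const report = JSON.parse(fs.readFileSync('trivy-report.json', 'utf8'));

const criticalCount = (report.Results || [])
  .flatMap((r) => r.Vulnerabilities || [])
  .filter((v) => v.Severity === 'CRITICAL').length;

console.log(`CRITICAL vulnerabilities found: ${criticalCount}`);

if (criticalCount > THRESHOLD) {
  console.error('Quality gate failed - blocking the build.');
  process.exit(1);                           // non-zero exit fails the pipeline step
}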

3. Static Code Analysis (SAST)


Tools

SonarQube, Bandit

Steps

  • Use Bandit for Python security issues.

  • Run SonarQube analysis via CLI.

  • Enforce SonarQube Quality Gate to fail the pipeline on low scores.
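A minimal sketch of this stage, assuming a Linux agent with bandit and sonar-scanner installed, and assuming the SonarQube service connection exposes the server URL and token as SONAR_HOST_URL and SONAR_TOKEN environment variables (the project key here is hypothetical):

// sast-stage.ts: run Bandit for Python-specific issues, then a SonarQube analysis
// that waits for the server-side Quality Gate verdict.
import { execSync } from 'child_process';

const run = (cmd: string) => execSync(cmd, { stdio: 'inherit' });

try {
  // Bandit scans recursively; -ll limits findings to medium severity and above,
  // and Bandit exits non-zero when any such issue is found.
  run('bandit -r . -ll');

  // sonar.qualitygate.wait=true blocks until the Quality Gate is computed and
  // makes the scanner exit non-zero when the gate fails.
  run(
    'sonar-scanner ' +
      '-Dsonar.projectKey=my-service ' + // hypothetical project key
      '-Dsonar.host.url=$SONAR_HOST_URL ' +
      '-Dsonar.login=$SONAR_TOKEN ' +
      '-Dsonar.qualitygate.wait=true'
  );
} catch {
  console.error('SAST stage failed: Bandit findings or SonarQube Quality Gate not met.');
  process.exit(1);
}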

4. Container Image Build & Scan


Tools

Docker, Trivy

Steps

  • Build the Docker image with a version tag.

  • Scan the image using Trivy.

  • Generate and publish scan reports.

  • Apply a security gate — fail on high-severity vulnerabilities.

  • Push image to ACR if passed.
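The build-scan-gate-push flow can be scripted along these lines; the registry, image name, and environment variable names are placeholders, and the Trivy flags reflect common usage rather than the exact pipeline configuration.

// image-stage.ts: build the Docker image, gate on the Trivy image scan,
// and push to the registry only when the gate passes.
import { execSync } from 'child_process';

const run = (cmd: string) => execSync(cmd, { stdio: 'inherit' });

const registry = process.env.ACR_LOGIN_SERVER ?? 'myregistry.azurecr.io'; // placeholder registry
const image = `${registry}/sample-app:${process.env.BUILD_BUILDID ?? 'local'}`;

try {
  run(`docker build -t ${image} .`);

  // Security gate: --exit-code 1 makes Trivy fail this step on HIGH/CRITICAL findings.
  run(`trivy image --severity HIGH,CRITICAL --exit-code 1 ${image}`);

  run(`docker push ${image}`);
  console.log(`Pushed ${image}`);
} catch {
  console.error('Image stage failed: build, scan gate, or push did not succeed.');
  process.exit(1);
}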

5. DAST – OWASP ZAP Scan


Tools

OWASP ZAP

Steps

  • Run the app in a test container network.

  • Perform ZAP baseline scan.

  • Save results as HTML.

  • Stop the test container.

  • Apply a security gate to block high-risk findings.
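A sketch of this stage, assuming Docker is available on the agent; the network name, container name, target URL, application image, and ZAP image tag are all illustrative assumptions.

// dast-stage.ts: run an OWASP ZAP baseline scan against the app on a shared test network.
import { execSync } from 'child_process';

const run = (cmd: string) => execSync(cmd, { stdio: 'inherit' });

const network = 'zap-test-net';               // hypothetical Docker network
const target = 'http://app-under-test:3000';  // hypothetical app container and port

try {
  run(`docker network create ${network} || true`);
  run(`docker run -d --name app-under-test --network ${network} sample-app:latest`);

  // zap-baseline.py writes an HTML report and exits non-zero when findings
  // exceed its thresholds, which makes this step act as the security gate.
  run(
    `docker run --rm --network ${network} -v "$PWD":/zap/wrk/:rw ` +
      `ghcr.io/zaproxy/zaproxy:stable zap-baseline.py -t ${target} -r zap-report.html`
  );
} catch {
  console.error('DAST gate failed: ZAP reported findings above the threshold.');
  process.exitCode = 1;
} finally {
  // Always stop the test container so the agent stays clean.
  execSync('docker rm -f app-under-test || true', { stdio: 'inherit' });
}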

6. Deploy to AKS


Tools

kubectl, Kubernetes 

Steps

  • Fetch AKS credentials.

  • Use envsubst to fill in manifest variables.

  • Deploy the app via kubectl apply.

  • Trigger a pod restart to deploy a new image.
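A sketch of the deployment step, assuming az, kubectl, and envsubst are installed on the agent and that the manifest uses environment-variable placeholders; the resource group, cluster, manifest path, and deployment name below are placeholders.

// deploy-stage.ts: fetch AKS credentials, render the manifest, apply it, and
// restart the deployment so the new image is rolled out.
import { execSync } from 'child_process';

const run = (cmd: string) => execSync(cmd, { stdio: 'inherit' });

const resourceGroup = process.env.AKS_RESOURCE_GROUP ?? 'rg-devsecops-demo'; // placeholder
const cluster = process.env.AKS_CLUSTER ?? 'aks-devsecops-demo';             // placeholder
const deployment = 'sample-app';                                             // placeholder

try {
  run(`az aks get-credentials --resource-group ${resourceGroup} --name ${cluster} --overwrite-existing`);

  // envsubst fills in $IMAGE-style placeholders in the manifest before kubectl applies it.
  run('envsubst < k8s/deployment.yaml | kubectl apply -f -');

  // Restart the pods so Kubernetes pulls and rolls out the freshly pushed image.
  run(`kubectl rollout restart deployment/${deployment}`);
  run(`kubectl rollout status deployment/${deployment} --timeout=120s`);
} catch {
  console.error('AKS deployment failed.');
  process.exit(1);
}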

Conclusion

DevSecOps is not just a practice; it’s a mindset. By integrating security at every phase of your CI/CD pipeline, you’re not only protecting your software but also enhancing the speed and confidence with which you can deliver it.

Implementing these practices with Azure DevOps, SonarQube, Trivy, and other tools makes securing your applications systematic, efficient, and measurable.

Secure early. Secure often. Secure always. That’s the DevSecOps way.

]]>
https://blogs.perficient.com/2025/08/06/devsecops-azure-devops-ci-cd-pipeline/feed/ 0 384208
Building a Custom API with Node.js https://blogs.perficient.com/2025/08/06/building-a-custom-api-with-node-js/ https://blogs.perficient.com/2025/08/06/building-a-custom-api-with-node-js/#respond Wed, 06 Aug 2025 06:05:29 +0000 https://blogs.perficient.com/?p=384922

A custom API is a unique interface built to allow different applications to interact with your system. Unlike generic APIs, a custom API is specifically designed to meet the needs of your project, enabling tailored functionality like retrieving data, processing requests, or integrating with third-party services. Building a Custom API gives you complete control over how your application communicates with others.

In this article, we will walk through building a Custom API with Node.js step by step, implementing the essential CRUD operations (Create, Read, Update, and Delete) so you can create your own powerful and efficient API.

Setting Up the Project

To get started, you need to have Node.js installed. If you haven't installed it yet, download and run the installer from the official Node.js website (https://nodejs.org).

Once Node.js is installed, you can verify by running the following commands in your terminal:

         node -v
         npm -v


Creating the Project Directory

Let’s create a simple directory for your API project.

  • Create a new folder for your project:

                 mkdir custom-api
                 cd custom-api

  • Initialize a new Node.js project:

                npm init -y

This creates a package.json file, which will manage the dependencies and configurations for your project.


Installing Dependencies

You can continue working in the terminal or switch to an editor such as VS Code; I'm switching to VS Code. We need Express to build the API. Express is a minimal web framework for Node.js that simplifies routing, request handling, and server creation.

To install Express, run:

     npm install express

Creating the Server

Now that we have Express installed, let’s create a basic server.

  1. Create a new file called app.js in the project folder.
  2. Add the following code to create a basic server:
const express = require('express');
const app = express();

// Middleware to parse JSON bodies
app.use(express.json());

// Root route
app.get('/', (req, res) => {
  res.send('Welcome to the Custom API!');
});

// Start the server on port 3000
app.listen(3000, () => {
  console.log('Server is running on http://localhost:3000');
});

 


To run your server, use:

    node app.js

Now, open your browser and navigate to http://localhost:3000. You should see "Welcome to the Custom API!" displayed.


Defining Routes (CRUD Operations)

APIs are built on routes that handle HTTP requests (GET, POST, PUT, DELETE). Let’s set up a few basic routes for our API.

Example: A simple API for managing a collection of items

  1. In app.js, define the routes:

You can find the complete source code for this project on GitHub.

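Below is a minimal, self-contained version of app.js with the /books routes. It keeps the data in an in-memory array as a stand-in for a database, so the details may differ from the full source on GitHub.

// app.js, extended with simple CRUD routes for a /books collection.
// An in-memory array stands in for a database, so data resets on every restart.
const express = require('express');
const app = express();

app.use(express.json());

let books = [
  { id: 1, title: 'Clean Code' },
  { id: 2, title: 'The Pragmatic Programmer' },
];

// GET /books: retrieve all books
app.get('/books', (req, res) => res.json(books));

// GET /books/:id: retrieve a single book by its ID
app.get('/books/:id', (req, res) => {
  const book = books.find(b => b.id === Number(req.params.id));
  if (!book) return res.status(404).json({ message: 'Book not found' });
  res.json(book);
});

// POST /books: add a new book (naive ID generation, fine for a sketch)
app.post('/books', (req, res) => {
  const book = { id: books.length + 1, title: req.body.title };
  books.push(book);
  res.status(201).json(book);
});

// PUT /books/:id: update an existing book
app.put('/books/:id', (req, res) => {
  const book = books.find(b => b.id === Number(req.params.id));
  if (!book) return res.status(404).json({ message: 'Book not found' });
  book.title = req.body.title;
  res.json(book);
});

// DELETE /books/:id: delete a book
app.delete('/books/:id', (req, res) => {
  books = books.filter(b => b.id !== Number(req.params.id));
  res.status(204).send();
});

app.listen(3000, () => {
  console.log('Server is running on http://localhost:3000');
});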

Here’s what each route does:

  • GET /books: Retrieves all items.
  • GET /books/:id: Retrieves an item by its ID.
  • POST /books: Adds a new item.
  • PUT /books/:id: Updates an existing item.
  • DELETE /books/:id: Deletes an item.

Testing the API

You can test your API using tools like Postman.


Conclusion

Congratulations, you've built a Custom API with Node.js. You've learned how to create CRUD operations, test your API, and handle requests and responses. From here, you can scale this API by adding more features like authentication, database connections, and other advanced functionality.

Thank you for reading!

]]>
https://blogs.perficient.com/2025/08/06/building-a-custom-api-with-node-js/feed/ 0 384922
Acquia Source: What it is, and why you should be learning to use it https://blogs.perficient.com/2025/08/05/acquia-source-what-it-is-and-why-you-should-be-learning-to-use-it/ https://blogs.perficient.com/2025/08/05/acquia-source-what-it-is-and-why-you-should-be-learning-to-use-it/#comments Tue, 05 Aug 2025 14:02:37 +0000 https://blogs.perficient.com/?p=385741

Meet Acquia Source

Acquia Source powered by Drupal is Acquia’s SaaS solution to streamline building, managing, and deploying websites at scale, representing a fundamental shift in how organizations approach digital experience creation. This innovative platform combines the power of Drupal with a modern component-based architecture, delivering a unique hybrid approach that bridges traditional CMS capabilities with contemporary development practices.

At its core, Acquia Source is a SaaS offering that provides Drupal functionality enhanced with a custom component architecture built on React and Tailwind CSS. Components can be created through React, Tailwind 4, and CSS, allowing developers to write CSS and React directly within the platform without the need for complex dev workflows. This approach eliminates the need for custom modules or PHP code, streamlining the development process while maintaining the robust content management capabilities that Drupal is known for.

Unlike traditional Drupal implementations that require extensive backend development, Acquia Source focuses on frontend component creation and content architecture. This makes it accessible to a broader range of developers while still leveraging Drupal’s proven content management foundation. For detailed technical specifications and implementation guides, explore the comprehensive documentation and learn more about the platform on Acquia’s official pages.

Why Acquia Source is a Game-Changer

The React-based component architecture at the heart of Acquia Source offers several compelling advantages that address common pain points in digital experience development. It provides a user-friendly Experience Builder to help you create and edit pages, robust user management features to control permissions and collaboration, and a design system approach that enables teams to define and enforce style and interaction patterns across pages.

One of the most significant benefits is the demoable, out-of-the-box feature set that allows teams to showcase functionality immediately without extensive development work. Since Acquia Source operates as a SaaS solution, updates and platform management are completely offloaded from your team, eliminating the traditional burden of infrastructure maintenance, security patching, and version upgrades that typically consume resources in custom Drupal implementations.

The platform maintains Drupal’s standard content type architecture, ensuring that content creators and administrators can leverage familiar workflows and structures. This consistency reduces training requirements and maintains efficiency while introducing modern frontend capabilities.

Perhaps most importantly for development teams, Acquia Source uses React and CSS technologies that frontend developers already understand. Unlike proprietary low-code solutions that require learning platform-specific languages or architectures, developers can immediately apply their existing React and Tailwind CSS knowledge. This eliminates the typical learning curve associated with new platforms and enables faster team onboarding and development.

Changing the Playbook for Smaller Companies

Acquia Source fundamentally changes the accessibility of high-end Drupal digital experiences, particularly for smaller companies and businesses that previously couldn’t justify the cost or complexity of enterprise-level implementations. The platform’s quick spin-up capability means organizations can have a sophisticated digital presence operational in weeks or months rather than months or years.

With updates handled entirely by the SaaS solution, businesses no longer need to budget for ongoing maintenance, security updates, or platform upgrades. This predictable cost model makes enterprise-level functionality accessible to organizations with limited technical resources or budget constraints.

The platform eliminates the need for complex strategy engagements or extensive architecture planning that typically precede major Drupal implementations. For many use cases, the offering can be as simple as skinning out-of-the-box components to match brand requirements, dramatically reducing both time-to-market and project complexity. Gone are the days of extensive discussions about which address module or maps integration is required for a specific implementation.

The content-type-only architecture approach allows smaller development teams to deliver sophisticated results without deep Drupal expertise. This lower barrier of entry enables smaller firms to confidently engage with top-tier Acquia partners such as Perficient, providing access to extensive libraries of industry and technology-specific experts without requiring large internal development teams. This ease of access means that businesses can leverage enterprise-grade expertise and proven methodologies regardless of their size or internal technical capabilities.

Conclusion: Your Next Learning Priority

Acquia Source represents the future of accessible, scalable digital experience development. By combining the proven content management capabilities of Drupal with modern React-based component architecture, it offers a compelling solution for organizations seeking to deliver sophisticated digital experiences without the traditional complexity and resource requirements.

For marketing professionals, Acquia Source offers unprecedented speed-to-market, creative flexibility and ability to leverage existing frontend resources. For architects and developers, it provides a platform that leverages existing skills while eliminating infrastructure concerns and reducing project complexity.

The platform’s unique position in the market, providing advanced Drupal capabilities through a SaaS model with familiar development technologies, makes it an invaluable tool for any developer or agency to have in their toolbox.

Start your Acquia Source journey today by exploring the comprehensive documentation and registering for the Partner Master Class: Introducing Acquia Source powered by Drupal to gain hands-on experience with this transformative platform.

]]>
https://blogs.perficient.com/2025/08/05/acquia-source-what-it-is-and-why-you-should-be-learning-to-use-it/feed/ 2 385741
AI in Medical Device Software: From Concept to Compliance https://blogs.perficient.com/2025/07/31/ai-in-medical-device-software-development-lifecycle/ https://blogs.perficient.com/2025/07/31/ai-in-medical-device-software-development-lifecycle/#respond Thu, 31 Jul 2025 14:30:11 +0000 https://blogs.perficient.com/?p=385582

Whether you’re building embedded software for next-gen diagnostics, modernizing lab systems, or scaling user-facing platforms, the pressure to innovate is universal, and AI is becoming a key differentiator. When embedded into the software development lifecycle (SDLC), AI offers a path to reduce costs, accelerate timelines, and equip the enterprise to scale with confidence. 

But AI doesn’t implement itself. It requires a team that understands the nuance of regulated software, SDLC complexities, and the strategic levers that drive growth. Our experts are helping MedTech leaders move beyond experimentation and into execution, embedding AI into the core of product development, testing, and regulatory readiness. 

“AI is being used to reduce manual effort and improve accuracy in documentation, testing, and validation.” – Reuters MedTech Report, 2025 

Whether it’s generating test cases from requirements, automating hazard analysis, or accelerating documentation, we help clients turn AI into a strategic accelerator. 

AI-Accelerated Regulatory Documentation 

Outcome: Faster time to submission, reduced manual burden, improved compliance confidence 

Regulatory documentation remains one of the most resource-intensive phases of medical device development.  

  • Risk classification automation: AI can analyze product attributes and applicable standards to suggest classification and required documentation. 
  • Drafting and validation: Generative AI can produce up to 75% of required documentation, which is then refined and validated by human experts. 
  • AI-assisted review: Post-editing, AI can re-analyze content to flag gaps or inconsistencies, acting as a second set of eyes before submission. 

AI won’t replace regulatory experts, but it will eliminate the grind. That’s where the value lies. 

For regulatory affairs leaders and product teams, this means faster submissions, reduced rework, and greater confidence in compliance, all while freeing up resources to focus on innovation. 

Agentic AI in the SDLC 

Outcome: Increased development velocity, reduced error rates, scalable automation 

Agentic AI—systems of multiple AI agents working in coordination—is emerging as a force multiplier in software development. 

  • Task decomposition: Complex development tasks are broken into smaller units, each handled by specialized agents, reducing hallucinations and improving accuracy. 
  • Peer review by AI: One agent can validate the output of another, creating a self-checking system that mirrors human code reviews. 
  • Digital workforce augmentation: Repetitive, labor-intensive tasks (e.g., documentation scaffolding, test case generation) are offloaded to AI, freeing teams to focus on innovation. This is especially impactful for engineering and product teams looking to scale development without compromising quality or compliance. 
  • Guardrails and oversight mechanisms: Our balanced implementation approach maintains security, compliance, and appropriate human supervision to deliver immediate operational gains and builds a foundation for continuous, iterative improvement. 

Agentic AI can surface vulnerabilities early and propose mitigations faster than traditional methods. This isn’t about replacing engineers. It’s about giving them a smarter co-pilot. 

AI-Enabled Quality Assurance and Testing 

Outcome: Higher product reliability, faster regression cycles, better user experiences 

AI is transforming QA from a bottleneck into a strategic advantage. 

  • Smart regression testing: AI frameworks run automated test suites across releases, identifying regressions with minimal human input. 
  • Synthetic test data generation: AI creates high-fidelity, privacy-safe test data in minutes—data that once took weeks to prepare. 
  • GenAI-powered visual testing: AI evaluates UI consistency and accessibility, flagging issues that traditional automation often misses. 
  • Chatbot validation: AI tools now test AI-powered support interfaces, ensuring they provide accurate, compliant responses. 

We’re not just testing functionality—we’re testing intelligence. That requires a new kind of QA.

Organizations managing complex software portfolios can unlock faster, safer releases. 

AI-Enabled, Scalable Talent Solutions 

Outcome: Scalable expertise without long onboarding cycles 

AI tools are only as effective as the teams that deploy them. We provide specialized talent—regulatory technologists, QA engineers, data scientists—that bring both domain knowledge and AI fluency. 

  • Accelerate proof-of-concept execution: Our teams integrate quickly into existing workflows, leveraging Agile and SAFe methodologies to deliver iterative value and maintain velocity. 
  • Reduce internal training burden: AI-fluent professionals bring immediate impact, minimizing ramp-up time and aligning with sprint-based development cycles. 
  • Ensure compliance alignment from day one: Specialists understand regulated environments and embed quality and traceability into every phase of the SDLC, consistent with Agile governance models. 

Whether you’re a CIO scaling digital health initiatives or a VP of Software managing multiple product lines, our AI-fluent teams integrate seamlessly to accelerate delivery and reduce risk. 

Proof of Concept Today, Scalable Solution Tomorrow 

Outcome: Informed investment decisions, future-ready capabilities 

Many of the AI capabilities discussed are already in early deployment or active pilot phases. Others are in proof-of-concept, with clear paths to scale. 

We understand that every organization is on a unique AI journey. Whether you’re starting from scratch, experimenting with pilots, or scaling AI across your enterprise, we meet you where you are. Our structured approach delivers value at every stage, helping you turn AI from an idea into a business advantage. 

As you evaluate your innovation and investment priorities across the SDLC, consider these questions: 

  1. Are we spending too much time on manual documentation?
  2. Do we have visibility into risk classification and mitigation?
  3. Can our QA processes scale with product complexity?
  4. How are we building responsible AI governance?
  5. Do we have the right partner to operationalize AI?

Final Thought: AI Demands a Partner, Not Just a Platform 

AI isn’t the new compliance partner. It’s the next competitive edge, but only when guided by the right strategy. For MedTech leaders, AI’s real opportunity comes by adopting and scaling it with precision, speed, and confidence. That kind of transformation can be accelerated by a partner who understands the regulatory terrain, the complexity of the SDLC, and the business outcomes that matter most. 

No matter where you sit — on the engineering team, in the lab, in business leadership, or in patient care — AI is reshaping how MedTech companies build, test, and deliver value. 

From insight to impact, our industry, platform, data, and AI expertise help organizations modernize systems, personalize engagement, and scale innovation. We deliver AI-powered transformation that drives engagement, efficiency, and loyalty throughout the lifecycle—from product development to commercial success. 

  • Business Transformation: Deepen collaboration, integration, and support throughout the value chain, including channel sales, providers, and patients. 
  • Modernization: Streamline legacy systems to drive greater connectivity, reduce duplication, and enhance employee and consumer experiences. 
  • Data + Analytics: Harness real-time data to support business success and to impact health outcomes. 
  • Consumer Experience: Support patient and consumer decision making, product usage, and outcomes through tailored digital experiences. 

Ready to move from AI potential to performance? Let’s talk about how we can accelerate your roadmap with the right talent, tools, and strategy. Contact us to get started. 

]]>
https://blogs.perficient.com/2025/07/31/ai-in-medical-device-software-development-lifecycle/feed/ 0 385582
Cypress Automation: Tag-Based Parallel Execution with Custom Configuration https://blogs.perficient.com/2025/07/30/cypress-automation-tag-based-parallel-execution-with-custom-configuration/ https://blogs.perficient.com/2025/07/30/cypress-automation-tag-based-parallel-execution-with-custom-configuration/#respond Wed, 30 Jul 2025 07:52:32 +0000 https://blogs.perficient.com/?p=385318

Custom Parallel Execution Using Tags:

To enhance the performance of Cypress tests, running them in parallel is a proven approach. While Cypress offers a built-in parallel execution feature, a more flexible and powerful method is tag-based parallel execution using a custom configuration. This method lets you fine-tune which tests are executed concurrently, based on tags in .feature files.

 


What Is Tag-Based Parallel Execution?

Tag-based execution filters test scenarios using custom tags (e.g., @login, @checkout) defined in your .feature files. Instead of running all tests or manually selecting files, this method dynamically identifies and runs only the tagged scenarios. It’s particularly useful for CI/CD pipelines and large test suites.

Key Components:

This approach uses several core Node.js modules: 

  • child_process – To execute terminal commands. 
  • glob – To search for .feature files based on patterns. 
  • fs – To read file content for tag matching. 
  • crypto – To generate unique hashes for port management. 

Execution Strategy:

1. Set Tags and Config via Environment Variables:

You define which tests to run by setting environment variables:

  • TAGS='@db' → runs only tests with @db tag
  • THREADS=2 → number of parallel threads
  • SPEC='cypress/support/feature/*.feature' → file location pattern
    These variables help dynamically control test selection and concurrency.

2. Collect All Matching Feature Files:

Using the glob package, the script searches for all .feature files that match the provided pattern (e.g., *.feature). This gives a complete list of potential test files before filtering by tag.

3. Filter Feature Files by Tag:

Each .feature file is opened and scanned using fs.readFileSync(). If it contains the specified tag (like @db or @smoke), it gets added to the list for execution. This ensures only relevant tests run.

4. Assign Unique Ports for Each File:

To avoid port conflicts during parallel execution, the script uses crypto.createHash('md5') on the file path + tag combination. A slice of the hash becomes the unique port number. This is crucial when running UI-based tests in parallel.

5. Run Cypress Tests in Parallel:

The script spawns multiple Cypress instances using child_process.exec or spawn, one per tagged test file. Each command is built with its own spec file and unique port, and all are run simultaneously using Promises.

6. Error Handling and Logging:

If no files match the tag, the script logs a warning and exits cleanly. If any Cypress test fails, the corresponding error is caught, logged, and the overall process exits early to prevent false positives in CI pipelines.

7. Trigger the Execution from Terminal:

The full command is triggered from the terminal via a script in package.json:
"cy:parallel-tag-exec": "cross-env TAGS='@db' THREADS=2 SPEC='cypress/support/feature/*.feature' ts-node parallel-tag-config.ts"

8. Run the below command:

npm run cy:parallel-tag-exec

This executes the full workflow with just one command.


Complete TypeScript Code

The parallel-tag-config.ts script handles the entire logic: matching tags, assigning ports, and running Cypress commands in parallel, as sketched below.

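A condensed sketch of what such a parallel-tag-config.ts can look like is shown below. The default values, the 3000-3999 port range, and the --env tags flag (as used by the Cucumber preprocessor) are illustrative assumptions rather than the exact published implementation.

// parallel-tag-config.ts: tag-filtered, thread-limited parallel Cypress runs.
import { exec } from 'child_process';
import { promisify } from 'util';
import { readFileSync } from 'fs';
import { createHash } from 'crypto';
import { globSync } from 'glob'; // glob v9+; older versions expose glob.sync instead

const execAsync = promisify(exec);

const TAG = process.env.TAGS ?? '@smoke';
const THREADS = Number(process.env.THREADS ?? 2);
const SPEC = process.env.SPEC ?? 'cypress/support/feature/*.feature';

// 1. Collect matching feature files, then keep only the ones containing the tag.
const taggedFiles = globSync(SPEC).filter(file =>
  readFileSync(file, 'utf-8').includes(TAG)
);

if (taggedFiles.length === 0) {
  console.warn(`No feature files found containing tag ${TAG}; nothing to run.`);
  process.exit(0);
}

// 2. Derive a stable, unique port per file so parallel runs don't collide.
const portFor = (file: string): number => {
  const hash = createHash('md5').update(file + TAG).digest('hex');
  return 3000 + (parseInt(hash.slice(0, 3), 16) % 1000);
};

// 3. Run one Cypress instance per file, at most THREADS at a time.
const runFile = async (file: string): Promise<void> => {
  const port = portFor(file);
  const cmd = `npx cypress run --spec "${file}" --port ${port} --env tags="${TAG}"`;
  console.log(`Running ${file} on port ${port}`);
  const { stdout } = await execAsync(cmd);
  console.log(stdout);
};

const runAll = async (): Promise<void> => {
  const queue = [...taggedFiles];
  const workers = Array.from({ length: Math.min(THREADS, queue.length) }, async () => {
    while (queue.length > 0) {
      const file = queue.shift();
      if (file) await runFile(file);
    }
  });
  await Promise.all(workers);
};

runAll().catch(err => {
  console.error('A Cypress run failed:', err.message);
  process.exit(1); // fail fast so CI does not report a false positive
});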

 


Benefits of This Approach:

  • Greatly reduces overall test runtime.
  • Offers flexibility with test selection using tags.
  • Avoids port conflict issues through dynamic assignment.
  • Works well with CI pipelines and large-scale projects.

 

Final Thoughts:

This custom configuration allows you to harness the full power of parallel testing with Cypress in a tag-specific, efficient manner. It’s scalable, highly customizable, and especially suitable for complex projects where targeted test runs are required.

For more information, you can refer to this website: https://testgrid.io/blog/cypress-parallel-testing/

 

Similar Approach for Cypress Testing:

  1. Cypress Grep Plugin – https://github.com/cypress-io/cypress-grep

  2. Nx Dev Tools (Monorepo) – https://nx.dev/technologies/test-tools/cypress/api


 

]]>
https://blogs.perficient.com/2025/07/30/cypress-automation-tag-based-parallel-execution-with-custom-configuration/feed/ 0 385318