Platforms and Technology Articles / Blogs / Perficient

Unifying Hybrid and Multi-Cloud Environments with Azure Arc

1. Introduction to Modern Cloud Architecture

In today’s world, architects generally prefer to keep their compute resources—such as virtual machines and Kubernetes clusters—spread across multiple clouds and on-premises environments. They do this to achieve the best possible resilience through high availability and disaster recovery. Moreover, this approach allows for better cost efficiency and stronger security.

2. The Challenge of Management Complexity

However, this distributed strategy brings additional challenges. Specifically, it increases the complexity of maintaining and managing resources from different consoles, such as the Azure, AWS, and Google Cloud portals. Even basic operations like restarts or updates force administrators to juggle multiple disparate portals, making routine administration complex and cumbersome.

3. How Azure Arc Provides a Solution

Azure Arc solves this problem by providing a single pane of glass to manage and monitor servers regardless of their location. In addition, it simplifies governance by delivering a consistent management platform for both multi-cloud and on-premises resources. Specifically, it provides a centralized way to project existing non-Azure resources directly into Azure Resource Manager (ARM).

4. Understanding Key Capabilities

Currently, Azure Arc allows you to manage several resource types outside of Azure. For instance, it supports servers, Kubernetes clusters, and databases. Furthermore, it offers several specific functionalities:

  • Azure Arc-enabled Servers: Connects physical or virtual Windows and Linux servers to Azure for centralized visibility.

  • Azure Arc-enabled Kubernetes: Additionally, you can onboard any CNCF-conformant Kubernetes cluster to enable GitOps-based management.

  • Azure Arc-enabled SQL Server: This brings external SQL Server instances under Azure governance for advanced security.

5. Architectural Implementation Details

The Azure Arc architecture revolves primarily around Azure Resource Manager. When a resource is onboarded, it receives a unique resource ID and becomes part of Azure’s management plane. Each onboarded machine runs a local agent that communicates with Azure to receive policies and upload logs.
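To make that projection concrete, here is a minimal sketch (assuming the azure-mgmt-hybridcompute Python SDK; the subscription ID and resource group are placeholders) that lists Arc-connected machines along with the ARM resource IDs they receive:

```python
# Minimal sketch: list Azure Arc-enabled servers that have been projected into ARM.
# Assumes `pip install azure-identity azure-mgmt-hybridcompute`; subscription ID and
# resource group below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.hybridcompute import HybridComputeManagementClient

subscription_id = "<subscription-id>"   # placeholder
resource_group = "rg-hybrid-servers"    # placeholder

credential = DefaultAzureCredential()
client = HybridComputeManagementClient(credential, subscription_id)

# Each connected machine is a first-class ARM resource with its own resource ID,
# so it can be targeted by Azure Policy, tags, and RBAC like any native resource.
for machine in client.machines.list_by_resource_group(resource_group):
    print(machine.name, machine.location, machine.id)
```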

6. The Role of the Connected Machine Agent

The agent package contains several logical components bundled together. For instance, the Hybrid Instance Metadata service (HIMDS) manages the connection and the machine’s Azure identity. Moreover, the guest configuration agent assesses whether the machine complies with required policies. In addition, the Extension agent manages VM extensions, including their installation and upgrades.

7. Onboarding and Deployment Methods

Onboarding machines can be accomplished using different methods depending on your scale. For example, you might use interactive scripts for small deployments or service principals for large-scale automation. Specifically, the following options are available:

  • Interactive Deployment: Manually install the agent on a few machines.

  • At-Scale Deployment: Alternatively, connect machines using a service principal.

  • Automated Tooling: Furthermore, you can utilize Group Policy for Windows machines.

8. Strategic Benefits for Governance

Ultimately, Azure Arc provides numerous strategic benefits for modern enterprises. Specifically, organizations can leverage the following:

  • Governance and Compliance: Apply Azure Policy to ensure consistent configurations across all environments.

  • Enhanced Security: Moreover, use Defender for Cloud to detect threats and integrate vulnerability assessments.

  • DevOps Efficiency: Enable GitOps-based deployments for Kubernetes clusters.

9. Important Limitations to Consider

However, there are a few limitations to keep in mind before starting your deployment. First, continuous internet connectivity is required for full functionality. Secondly, some features may not be available for all operating systems. Finally, there are cost implications based on the data services and monitoring tools used.

10. Conclusion and Summary

In conclusion, Azure Arc empowers organizations to standardize and simplify operations across heterogeneous environments. Whether you are managing legacy infrastructure or edge devices, it brings everything under one governance model. Therefore, if you are looking to improve control and agility, Azure Arc is a tool worth exploring.

How to Secure Applications During Modernization on AWS

Why Do We Need to Secure Our Applications?  

Cloud environments are very dynamic and interconnected. A single misconfiguration or exposed API key can lead to:  

  • Data breaches 
  • Compliance violations 
  • Costly downtime 

Attackers often target application-level weaknesses, not just infrastructure gaps. If any application handles sensitive data, financial transactions, or user credentials, security is critical. 

Common Mistakes Made When Building Applications

  • Hardcoding API keys and credentials 
  • Ignoring dependency vulnerabilities 
  • Skipping encryption/decryption for sensitive data 

Essential Security Best Practices

1. Identity and Access Management (IAM)

  • Create dedicated IAM roles for your Lambda functions, EC2 instances, or ECS tasks instead of hardcoding access keys in your application (a small sketch follows below). 
  • Regularly review who has permissions using IAM Access Analyzer. 
  • Avoid using the root account for day-to-day operations, or indeed any operations, as a developer. 

(Screenshots: IAM role creation in the AWS console)
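As a rough illustration of the first point in the list above, the following Python (boto3) sketch creates a dedicated execution role for a Lambda function instead of embedding access keys in application code. The role name and attached policy are example choices, and the post itself works in .NET, so treat this as an equivalent sketch rather than the post’s own code.

```python
# Minimal sketch: create a dedicated IAM role for a Lambda function (no hardcoded keys).
# The role name and attached policy are example choices, not from the original post.
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets the Lambda service assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="order-service-lambda-role",  # example name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Execution role for the order-service Lambda",
)

# Attach only the permissions the function actually needs (least privilege).
iam.attach_role_policy(
    RoleName="order-service-lambda-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)

print(role["Role"]["Arn"])
```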

2. Don’t Store/Share Secrets in Your Code

Your appsettings.json is not the right place for secrets such as API keys or database passwords. 

  • We must use AWS Secrets Manager or Parameter Store to keep secrets safe. 
  • Fetch keys at runtime by using the AWS SDK for .NET or the AWSSDK.Extensions.NETCore.Setup configuration provider (a language-neutral sketch follows below). 

(Screenshots: creating and reading a secret in AWS Secrets Manager)
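The post fetches secrets with the AWS SDK for .NET; as a language-neutral illustration, here is a minimal Python (boto3) equivalent. The secret name and the key/value layout of the secret are placeholders for the example.

```python
# Minimal sketch: read a secret at runtime instead of storing it in config files.
# The secret name and its username/password layout are placeholders.
import json
import boto3

secrets = boto3.client("secretsmanager")

response = secrets.get_secret_value(SecretId="prod/order-service/db")  # placeholder name
secret = json.loads(response["SecretString"])

# Use the values in memory only; never write them back to appsettings.json or logs.
db_user, db_password = secret["username"], secret["password"]
```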

3. Always Encrypt Data 

Encryption is one of the most important safeguards for sensitive data, both in transit and at rest. 

  • Enable HTTPS by default for all your endpoints.  
  • Use AWS Certificate Manager (ACM) to issue and manage SSL/TLS certificates. 
  • In your application, make sure that all traffic is redirected to HTTPS by adding app.UseHttpsRedirection(); 
  • Use AWS KMS to encrypt your S3 buckets, RDS databases, and EBS volumes.
  • If you’re using SQL Server on RDS, enable Transparent Data Encryption (TDE). 

 Encrypt & Decrypt API Key with KMS 

(Screenshots: KMS encryption steps and encrypt/decrypt code)
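As a language-neutral companion to the screenshots above, here is a minimal Python (boto3) sketch of the same encrypt/decrypt round trip, assuming a placeholder KMS key alias of alias/app-secrets:

```python
# Minimal sketch: encrypt and decrypt a small secret (e.g., an API key) with AWS KMS.
# The key alias is a placeholder; grant the application's IAM role kms:Encrypt/Decrypt in practice.
import boto3

kms = boto3.client("kms")

ciphertext = kms.encrypt(
    KeyId="alias/app-secrets",            # placeholder key alias
    Plaintext=b"my-third-party-api-key",  # example value
)["CiphertextBlob"]

# Store only the ciphertext (for example, in a database column); decrypt at runtime when needed.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
print(plaintext.decode())
```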

4. Build a Secure Network Foundation

  • Use VPCs with private subnets for backend services. 
  • Control traffic with Security Groups and Network ACLs (a small sketch follows below). 
  • Use VPC Endpoints to keep traffic within AWS’s private network.  
  • Use AWS WAF to protect your APIs, and enable AWS Shield to guard against DDoS attacks. 

(Screenshots: security group and VPC creation)
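As a small sketch of the first two points above, the following boto3 example creates a security group in an existing VPC and allows inbound HTTPS only from an internal CIDR range. The VPC ID and CIDR are placeholders.

```python
# Minimal sketch: lock down inbound traffic with a security group.
# The VPC ID and CIDR range are placeholders for illustration.
import boto3

ec2 = boto3.client("ec2")

group_id = ec2.create_security_group(
    GroupName="backend-api-sg",
    Description="Allow HTTPS from the internal network only",
    VpcId="vpc-0123456789abcdef0",  # placeholder
)["GroupId"]

ec2.authorize_security_group_ingress(
    GroupId=group_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "internal subnets"}],
    }],
)
```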

5. Keep Your Code and Dependencies Clean

Even the best infrastructure can’t save a vulnerable codebase. 

  • Update your .NET SDK and NuGet packages regularly. 
  • Use Amazon Inspector for runtime and AWS environment security, and tools like Dependabot for Development-time dependency security to find vulnerabilities early. 
  • Add code review analysis tools (like SonarQube) in your CI/CD pipeline. 

(Screenshot: Amazon Inspector)

6. Log Everything and Watch

  • Enable Amazon CloudWatch for central logging and use AWS X-Ray to trace requests through the application. 
  • Turn on CloudTrail to track every API call across your account (a small query sketch follows this list). 
  • Enable GuardDuty for continuous threat detection. 
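To show what the CloudTrail piece can look like in practice, here is a minimal boto3 sketch that pulls recent console sign-in events for review; the event name filter is just one example of what you might audit.

```python
# Minimal sketch: query recent CloudTrail events (here, console sign-ins) for review.
import boto3

cloudtrail = boto3.client("cloudtrail")

events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    MaxResults=10,
)["Events"]

for event in events:
    print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])
```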

 

Deploy Microservices On AKS using GitHub Actions

Deploying microservices in a cloud-native environment requires an efficient container orchestration platform and an automated CI/CD pipeline. Azure Kubernetes Service (AKS) is a managed Kubernetes offering from Azure. GitHub Actions makes it easy to automate your CI/CD processes from the source code repository.


Why Use GitHub Actions with AKS

Using GitHub Actions for AKS deployments provides:

  • Automated and consistent deployments
  • Faster release cycles
  • Reduced manual intervention
  • Easy Integration with GitHub repositories
  • Better visibility into build and deployment status

Architecture Overview

The deployment workflow follows a CI/CD approach:

  • Microservices packaged as Docker images
  • Images pushed to ACR
  • AKS pulls the image from ACR
  • GitHub Actions automates the build and push of Docker images and the deployment of manifests to AKS


Prerequisites

Before proceeding with the implementation, ensure the following prerequisites are in place:

  • An Azure subscription
  • Azure CLI installed and authenticated (az)
  • An existing Azure Kubernetes Service (AKS) cluster
  • kubectl installed and configured for your cluster
  • Azure Container Registry (ACR) associated with the AKS cluster
  • GitHub repository with the microservices code

Repository Structure

Each microservice is maintained in a separate repository with the following structure:  .github/workflows/name.yml

CI/CD Pipeline Stages Overview

  • Source Code Checkout
  • Build Docker Images
  • Push images to ACR
  • Authenticate to AKS
  • Deploy Microservices using kubectl

Configure GitHub Secrets

Go to GitHub – repository – Settings – Secrets and Variables – Actions  

Add the following secrets:

  • ACR_LOGIN_SERVER
  • ACR_USERNAME
  • ACR_PASSWORD
  • KUBECONFIG

Stage 1: Source Code Checkout

The pipeline starts by pulling the latest code from the GitHub repository.

Stage 2: Build Docker Images

For each microservice:

  • A Docker image is built
  • A unique tag (commit ID and version) is assigned

Images are prepared for deployment

Stage 3: Push Images to Azure Container Registry

Once the images are built:

  • GitHub Actions authenticates to ACR
  • Images are pushed securely to the registry
  • After the initial setup, AKS pulls the images directly from ACR

Stage 4: Authenticate to AKS

GitHub Actions connects to the AKS cluster using kubeconfig

Stage 5: Deploy Microservices to AKS

In this stage:

  • Kubernetes manifests are applied
  • Services are exposed via the Load Balancer

Deployment Validation

After deployment:

  • Pods are verified to be in a running state
  • Services are checked for external access
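These checks are usually done with kubectl get pods and kubectl get svc; as an automation-friendly alternative, here is a minimal sketch using the official Kubernetes Python client (an assumption, since the pipeline above uses kubectl) that reports pod phases in a namespace:

```python
# Minimal sketch: verify pods are Running after a deployment.
# Assumes `pip install kubernetes` and a valid kubeconfig (e.g., from `az aks get-credentials`).
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()

namespace = "microservices"  # placeholder namespace
for pod in v1.list_namespaced_pod(namespace).items:
    # Anything not in the Running or Succeeded phase deserves a closer look.
    print(pod.metadata.name, pod.status.phase)
```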

Best Practices

To make the pipeline production-ready:

  • Use commit-based image tagging
  • Separate environments (dev, stage, prod)
  • Use namespace in AKS
  • Store secrets securely using GitHub Secrets

Common Challenges and Solutions

  • Image pull failures: Verify ACR permission
  • Pipeline authentication errors: Validate Azure credentials
  • Pod crashes: Review container logs and resource limits

Benefits of CI/CD with AKS and GitHub Actions

  • Faster deployments
  • Improved reliability
  • Scalable microservices architecture
  • Better developer productivity
  • Reduced operational overhead

Conclusion

Deploying microservices on AKS using GitHub Actions provides a robust, scalable, and automated CI/CD solution. By integrating container builds, registry management, and Kubernetes deployments into a single pipeline, teams can deliver applications faster and more reliably.

CI/CD is not just about automation – it’s about confidence, consistency, and continuous improvement.

 

Why Inter-Plan Collaboration Is the Competitive Edge for Health Insurers

A health insurance model built for yesterday won’t meet the demands of today’s consumers. Expectations for seamless, intuitive experiences are accelerating, while fragmented systems continue to drive up costs, create blind spots, and erode trust.

Addressing these challenges takes more than incremental fixes. The path forward requires breaking down silos and creating synergy across plans, while aligning technology, strategy, and teams to deliver human-centered experiences at scale. This is more than operational; it’s strategic. It’s how health insurers build resilience, move with speed and purpose, and stay ahead of evolving demands.

Reflecting on recent industry conversations, we’re proud to have sponsored LeadersIgnite and the 2025 Inter-Plan Solutions Forum. As Hari Madamalla shared:

“When insurers share insights, build solutions together, and scale what works, they can cut costs, streamline prior authorization and pricing, and deliver the experiences members expect.” – Hari Madamalla, Senior Vice President, Healthcare + Life Sciences

To dig deeper into these challenges, we spoke with healthcare leaders Hari Madamalla, senior vice president, and directors Pavan Madhira and Priyal Patel about how health insurers can create a competitive edge by leveraging digital innovation with inter-plan collaboration.

The Complexity Challenge Health Insurers Can’t Ignore

Health insurance faces strain from every angle: slow authorizations, confusing pricing, fragmented data, and widening care gaps. The reality is, manual fixes won’t solve these challenges. Plans need smarter systems that deliver clarity and speed at scale. AI and automation make it possible to turn data into insight, reduce fragmentation, and meet mandates without adding complexity.

“Healthcare has long struggled with inefficiencies and slow tech adoption—but the AI revolution is changing that. We’re at a pivotal moment, similar to the digital shift of the 1990s, where AI is poised to disrupt outdated processes and drive real transformation.” – Pavan Madhira, Director, Healthcare + Life Sciences

But healthcare organizations face unique constraints, including HIPAA, PHI, and PII regulations that limit the utility of plug-and-play AI solutions. To meet these challenges, we apply our PACE framework—Policies, Advocacy, Controls, and Enablement—to ensure AI is not only innovative but also rooted in trust. This approach ensures AI is deployed with purpose, aligned to business goals, and embedded with safeguards that protect consumers and organizations.

Still, technology alone isn’t enough. Staying relevant means designing human-centered experiences that reduce friction and build trust. Perficient’s award-winning Access to Care research study reveals that friction in the care journey directly impacts consumer loyalty and revenue.

More than 45% of consumers aged 18–64 have used digital-first care instead of their regular provider, and 92% of them believe the quality is equal to—or better.

That’s a signal healthcare leaders can’t afford to ignore. It tells us when experiences fall short, consumers go elsewhere, and they won’t always come back.

For health insurers, that shift creates issues. When members seek care outside your ecosystem, you risk losing visibility into care journeys, creating gaps in data and blind spots in member health management. The result? Higher costs, duplicative services, and missed opportunities for proactive coordination. Fragmented care journeys also undermine efforts to deliver a true 360-degree view of the member. The solution lies in intuitive digital transformation that turns complexity into clarity.

Explore More: Empathy, Resilience, Innovation, and Speed: The Blueprint for Intelligent Healthcare Transformation

Where Inter-Plan Collaboration Creates Real Momentum

When health plans work together, the payoff is significant. Collaboration moves the industry from silos to synergy, enabling human-centered experiences across networks that keep members engaged and revenue intact.

Building resilience is key to that success. Leaders need systems that anticipate member needs and remove barriers before they impact access to care. That means reducing friction in scheduling and follow-up, enabling seamless coordination across networks, and delivering digital experiences that feel as simple and intuitive as consumer platforms like Amazon or Uber. Resilience also means preparing for the unexpected and being able to pivot quickly.

When plans take this approach, the impact is clear:

  • Higher Quality Scores and Star Ratings: Shared strategies for closing gaps and improving provider data can help lift HEDIS scores and Star Ratings, unlocking higher reimbursement and bonus pools.
  • Faster Prior Authorizations: Coordinated rules and automation help reduce delays and meet new regulatory requirements like CMS Interoperability and Prior Authorization Final Rule (CMS-0057-F).
  • True Price Transparency: Consistent, easy-to-understand cost and quality information across plans helps consumers make confident choices and stay in-network.
  • Stronger Member Loyalty: Unified digital experiences across plans help improve satisfaction and engagement.
  • Lower Administrative Overhead: Cleaner member data means fewer errors, less duplication, and lower compliance risk.

“When plans work together, they can better serve their vulnerable populations, reduce disparities, and really drive to value based care. It’s about building trust, sharing responsibility, and innovating with empathy.” – Priyal Patel, Director, Healthcare + Life Sciences

Resilience and speed go hand in hand. Our experts help health insurers deliver both.

This approach supports the Quintuple Aim: better outcomes, lower costs, improved experiences, clinician well-being, and health equity. It also ensures that innovation is not just fast, but focused, ethical, and sustainable.

You May Also Enjoy: Access to Care is Evolving: What Consumer Insights and Behavior Models Reveal

Accelerating Impact With Digital Innovation and Inter-Plan Collaboration

Beyond these outcomes, collaboration paired with digital innovation unlocks even greater opportunities to build a smarter, more connected future of healthcare. It starts with aligning consumer expectations, digital infrastructure, and data governance to strategic business goals.

Here’s how plans can accelerate impact:

  • Real-Time Data Sharing and Interoperability: Shared learning ensures insights aren’t siloed. By pooling knowledge across plans, leaders can identify patterns, anticipate emerging trends, and act faster on what works. Real-time interoperability, like FHIR-enabled solutions, gives plans the visibility needed for accurate risk adjustment and timely quality reporting. AI enhances this by predicting gaps and surfacing actionable insights, helping plans act faster and reduce costs.
  • Managing Coding Intensity in the AI Era: As provider AI tools capture more diagnoses, insurers can see risk scores and costs rise, creating audit risk and financial exposure. This challenge requires proactive oversight. Collaboration helps by establishing shared standards and applying predictive analytics to detect anomalies early, turning a potential cost driver into a managed risk.
  • Prior Authorization Modernization: Prior authorization delays drive up costs and erode member experience. Aligning on streamlined processes and leveraging intelligent automation can help meet mandates like CMS-0057-F, while predicting approval likelihood, flagging exceptions early, and accelerating turnaround times.
  • Joint Innovation Pilots: Co-development of innovation means plans can shape technology together. This approach balances unique needs with shared goals, creating solutions that cut costs, accelerate time to value, and ensure compliance stays front and center.
  • Engaging Member Experience Frameworks: Scaling proven approaches across plans amplifies impact. When plans collaborate on digital experience standards and successful capabilities are replicated, members enjoy seamless interactions across networks. Building these experiences on solid foundations with purpose-driven AI is key to delivering stronger engagement and loyalty at scale.
  • Shared Governance and Policy Alignment: Joint governance establishes accountability, aligns incentives for value-based care, and reduces compliance risk while protecting revenue.

Success in Action: Empowering Healthcare Consumers and Their Care Ecosystems With Interoperable Data

Make Inter-Plan Collaboration Your Strategic Advantage

Ready to move from insight to impact? Our healthcare expertise equips leaders to modernize, personalize, and scale care. We drive resilient, AI-powered transformation to shape the experiences and engagement of healthcare consumers, streamline operations, and improve the cost, quality, and equity of care.

  • Business Transformation: Activate strategy for transformative outcomes and health experiences.
  • Modernization: Maximize technology to drive health innovation, efficiency, and interoperability.
  • Data + Analytics: Power enterprise agility and accelerate healthcare insights.
  • Consumer Experience: Connect, ease, and elevate impactful health journeys.

We have been trusted by the 10 largest health systems and the 10 largest health insurers in the U.S., and Modern Healthcare consistently ranks us as one of the largest healthcare consulting firms.

Monitoring and Logging in Sitecore AI

Why Observability Matters More Than Ever

Moving from traditional Sitecore deployments to Sitecore AI means the infrastructure is abstracted away. That’s fantastic for agility, but it also changes how we troubleshoot. You can’t RDP onto a server and tail a file anymore; your lifeline is observability: clear signals from logs, metrics, and governed automation that tell you what’s happening across the platform and the front‑end.

What’s Different in Sitecore AI?

Logs and diagnostics are centralized. You access them via the Sitecore AI portal and the Sitecore CLI, and they’re organized by environment and by role. Your front‑end application or rendering host (often a Next.js site deployed on Vercel, responsible for headless rendering and the user experience) has its own telemetry, separate from the CMS.

So, your monitoring picture spans three surfaces: Sitecore AI logs for CMS and deployment activity, rendering host telemetry for front‑end performance, and Experience Edge signals for content delivery. Together, they describe the health of the experience, not just the servers.

 

Understanding the Logging Surfaces

In Sitecore AI, logs are grouped into three primary areas that each play a distinct role in diagnosing issues:

Content Management (CM) logs

  • These are your first stop for diagnosing publishing failures, broken workflows, template errors, and serialization mismatches. When a publish fails, CM logs help you separate permissions or workflow problems from data or serialization issues.

Rendering Host logs

  • Think front‑end behavior and performance. If personalization falls back, pages render slowly, or API responses seem sluggish, the rendering host logs surface cache misses, API latency, and rendering errors that directly impact Core Web Vitals and UX.

Deployment logs

  • The “narrative” of your CI/CD run. When a build fails or a promotion doesn’t complete, deployment logs pinpoint CLI command failures, artifact mismatches, or environment configuration issues. They also provide stage-by-stage visibility (provisioning, build, deploy, post‑actions), which speeds triage and supports audits.

Access these logs quickly in the Deploy app’s environment view or programmatically via the Sitecore CLI for listing, viewing, and downloading logs as part of your pipeline artifacts.

Integration Patterns for Enterprise Monitoring

Centralizing is helpful; correlating is essential. The pragmatic pattern I recommend is:

Sitecore AI → Azure Monitor/Application Insights

  • Forward CMS and deployment logs so you can correlate spikes in errors with deployments, content bursts, or traffic changes. KQL lets you slice by environment, role, and severity for root cause analysis.

Rendering Host → APM (Datadog/New Relic)

  • Use front‑end analytics to track TTFB, cache hit ratio, route errors, and API dependency health. Pair this with Vercel’s own analytics for global edge performance.

Experience Edge → Webhook Monitoring

  • Register webhooks so you can track publish‑to‑Edge latency and trigger alerts or redeploys when content propagation slows or fails (a minimal receiver sketch follows this list).

SIEM Integration (today’s reality)

  • For unified audit across Sitecore SaaS, stream supported Common Audit Logs (CAL) via webhooks (Personalize/CDP/Connect) and, for Sitecore AI, pull environment and deployment logs via CLI on a schedule until broader CAL coverage lands.
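To make the webhook idea concrete, here is a minimal, hypothetical receiver written in Python with Flask. The payload fields used (such as publishedAt and itemId) are assumptions for illustration only; check the Experience Edge webhook documentation for the actual schema before relying on any of them.

```python
# Minimal, hypothetical sketch of an Experience Edge webhook receiver.
# Assumes `pip install flask`; the payload fields referenced here are illustrative only.
from datetime import datetime, timezone
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.post("/webhooks/experience-edge")
def handle_edge_event():
    event = request.get_json(force=True)

    # Hypothetical field: time the item was published in the CMS (ISO 8601).
    published_at = event.get("publishedAt")
    if published_at:
        published = datetime.fromisoformat(published_at.replace("Z", "+00:00"))
        latency = (datetime.now(timezone.utc) - published).total_seconds()
        # Forward this measurement to your monitoring plane (Azure Monitor, CloudWatch, a SIEM, etc.).
        print(f"publish-to-Edge latency: {latency:.1f}s for {event.get('itemId')}")

    return jsonify({"status": "received"}), 200

if __name__ == "__main__":
    app.run(port=8080)
```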

Metrics That Matter

In a SaaS world, traditional “server up” checks don’t describe user experience. Focus on metrics that map directly to reliability and business impact:

Deployment success & promotion health

  • Failed builds or promotions block content and features. Tracking rates and mean time to recovery reveals pipeline reliability.

Publish‑to‑Edge latency

  • Authors expect content to reach Experience Edge quickly. Latency here affects real‑time campaigns, previews, and editorial confidence.

Rendering host performance

  • P95/P99 TTFB, cache hit ratio, and error rates impact Core Web Vitals, SEO, and conversion. They also help you spot regressions after releases.

Agent activity & governance

  • With Sitecore AI’s agentic capabilities, monitoring agent runs, approvals, and failures protects compliance and prevents unintended bulk changes.

Governance Signals in Sitecore AI

Sitecore AI introduces Agentic Studio: a governed workspace to design, run, and oversee automation. Work is organized around four building blocks, Agents, Flows, Spaces, and Signals. Practically, that means you can automate complex operations while maintaining human review and auditability.

  • Agents: Handle focused tasks (e.g., content migration, metadata updates).
  • Flows: Orchestrate agents into multi‑step workflows with visibility across stages.
  • Spaces: Provide shared context for teams to collaborate on active runs.

  • Signals: Surface trends and triggers that can start or adjust flows.

Together, these give marketers and developers a safe frame to scale automation without losing control.

How Agent Flows Are Monitored

Monitoring agent flows blends product‑level visibility with enterprise analytics:

Run visibility in Agentic Studio:

  • Each flow run exposes status, participants (human and agent), timestamps, and outcomes. Because flows are orchestrated in a governed workspace, you get “full visibility” into progression from brief to publish/optimization, including approvals where human review is required.

Governance signals and audit trails:

  • Signals can trigger flows and also act as governance inputs (for example, trend alerts requiring approval). Capture audit trails of who initiated a run, which agents executed steps, and what content or configurations changed.

Alerting and dashboards:

  • Mirror key flow events into your monitoring plane: start, paused awaiting approval, failed step, completed. Route these into Azure Monitor or your SIEM so operations sees agentic activity alongside deployments and content events.

Integration approach:

  • Where Common Audit Logs (CAL) are available (Personalize/CDP/Connect), stream events via webhooks. For Sitecore AI and Agentic activity not yet covered by CAL, use scheduled CLI log exports and APIs the platform exposes to assemble a unified view. Normalize event schemas (runId, agentId, flowId, environment, severity) to enable cross‑product correlation.

The outcome: agent automation becomes observable. Teams can answer “what changed, when, by whom, and why” and tie those answers to performance and compliance dashboards.

Final Thoughts

Observability in Sitecore AI isn’t about servers; it’s about experience health and trusted automation. When you combine SaaS‑native logs, front‑end telemetry, Edge events, and agentic governance signals, you gain a single narrative across deployments, content, and automation, the narrative you need to keep teams fast, safe, and accountable.

A Tool For CDOs to Keep Their Cloud Secure: AWS GuardDuty Is the Saw and Perficient Is the Craftsman

In the rapidly expanding realm of cloud computing, Amazon Web Services (AWS) provides the infrastructure for countless businesses to operate and innovate. But with an ever-increasing amount of data, applications, and workloads in the cloud, protecting these assets poses significant security challenges. As a firm’s data, applications, and workloads migrate to the cloud, protecting them from both sophisticated threats and brute-force digital attacks is of paramount importance. This is where Amazon GuardDuty enters as a powerful, vigilant sentinel.

What is Amazon GuardDuty?

At its core, Amazon GuardDuty is a continuous security monitoring service designed to protect your AWS accounts and workloads. The software serves as a 24/7 security guard for your entire AWS environment, not just individual applications, and is constantly scanning for malicious activity and unauthorized behavior.

The software works by analyzing a wide variety of data sources within your firm’s AWS account—including AWS CloudTrail event logs, VPC flow logs, and DNS query logs—using machine learning, threat intelligence feeds, and anomaly detection techniques.

If an external party tries a brute-force login, a compromised instance communicates with a known malicious IP address, or an unusual API call is made, GuardDuty is there to spot it. When a threat is found, it can be configured to trigger automated actions through services like Amazon CloudWatch Events and AWS Lambda, as well as alert human administrators to take action.

When a threat is detected, GuardDuty generates a finding with a severity level (high, medium, or low) and a score. The severity and score both help minimize time spent on more routine exceptions while highlighting significant events to your data security team.

Why is GuardDuty So Important?

In today’s digital landscape, relying solely on traditional, static security measures is not sufficient. Cybercriminals are constantly evolving their tactics, which is why GuardDuty is an essential component of your AWS security strategy:

  1. Proactive, Intelligent Threat Detection

GuardDuty moves beyond simple rule-based systems. Its use of machine learning allows it to detect anomalies that human security administrators might miss, identifying zero-day threats and subtle changes in behavior that indicate a compromise. It continuously learns and adapts to new threats without requiring manual updates from human security administrators.

  2. Near Real-Time Monitoring and Alerting

Speed is critical in incident response. GuardDuty provides findings in near real-time, delivering detailed security alerts directly to the AWS Management Console, Amazon EventBridge, and Amazon Security Hub. This immediate notification allows your firm’s security teams to investigate and remediate potential issues quickly, minimizing potential damage and alerting your firm’s management.

  3. Broad Protection Across AWS Services

GuardDuty doesn’t just watch over your firm’s Elastic Compute Cloud (“EC2”) instances. GuardDuty also protects a wide array of AWS services, including:

  • Simple Storage Service (“S3”) Buckets: Detecting potential data exfiltration or policy changes that expose sensitive data.
  • EKS/Kubernetes: Monitoring for threats to your container workloads.  No more running malware or mining bitcoin in your firm’s containers.
  • Databases (Aurora; RDS – MySQL, PostgreSQL, MariaDB, Oracle, and Microsoft SQL Server; and Redshift): Identifying potential compromise or unauthorized access to data.
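For teams that want to pull findings programmatically rather than work only in the console, a minimal Python (boto3) sketch might look like the following; the severity threshold is an example choice.

```python
# Minimal sketch: list high-severity GuardDuty findings for review or ticketing.
import boto3

guardduty = boto3.client("guardduty")

for detector_id in guardduty.list_detectors()["DetectorIds"]:
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        # Example filter: GuardDuty severities of 7.0 and above are considered high.
        FindingCriteria={"Criterion": {"severity": {"GreaterThanOrEqual": 7}}},
    )["FindingIds"]

    if finding_ids:
        findings = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)
        for finding in findings["Findings"]:
            print(finding["Severity"], finding["Type"], finding["Title"])
```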

Conclusion:

In the cloud, security is a shared responsibility. While AWS manages the security of the cloud infrastructure itself, you are responsible for security in the cloud—protecting your data, accounts, and workloads. Amazon GuardDuty is an indispensable tool in fulfilling that responsibility. It provides an automated, intelligent, and scalable layer of defense that empowers you to stay ahead of malicious actors.

To get started with Amazon GuardDuty, consider contacting Perficient to help enable and configure the service and train your staff. Perficient is an AWS partner and has achieved Premier Tier Services Partner status, the highest tier in the Amazon Web Services (AWS) Partner Network. This elevated status reflects Perficient’s expertise, long-term investment, and commitment to delivering customer solutions on AWS.

Besides the firm’s Partner Status, Perficient has demonstrated significant expertise in areas like cloud migration, modernization, and AI-driven solutions, with a large team of AWS-certified professionals.

In addition to these competencies, Perficient has been designated for specific service deliveries, such as AWS Glue Service Delivery, and also has available Amazon-approved software in the AWS Marketplace.

Our financial services experts continuously monitor the financial services landscape and deliver pragmatic, scalable solutions that meet the required mandate and more. Reach out to Perficient’s Director and Head of Payments Practice Amanda Estiverne-Colas to discover why Perficient has been trusted by 18 of the top 20 banks, 16 of the 20 largest wealth and asset management firms, and 25+ leading payment + card processing companies.

 

From XM Cloud to SitecoreAI: A Developer’s Guide to the Platform Evolution

What developers need to know about the architectural changes that launched on November 10th

Last week’s Sitecore Symposium 2025 was one of those rare industry events that reminded me why this community is so special. I got to reconnect with former colleagues I hadn’t seen in years, finally meet current team members face-to-face who had only been voices on video calls, and form genuine new relationships with peers across the ecosystem. Beyond the professional connections, we spent time with current customers and had fascinating conversations with potential new ones about their challenges and aspirations. And let’s be honest—the epic Universal Studios party that capped off the event didn’t hurt either.

Now that we’re settling back into routine work, it’s time to unpack everything that was announced. The best part? As of today, November 10th, it’s all live. When you log into the platform, you can see and experience everything that was demonstrated on stage.

After a decade of Sitecore development, I’ve learned to separate marketing announcements from actual technical changes. This one’s different: SitecoreAI represents a genuine architectural shift toward AI-first design that changes how we approach development.

Here’s what developers need to know about the platform evolution that launched today.

Architecture Changes That Matter

Cloud-Native Foundation with New Deployment Model

SitecoreAI maintains XM Cloud’s Azure-hosted foundation while introducing four connected environments:

  • Agentic Studio – where marketers and AI collaborate to plan, create, and personalize experiences
  • App Studio – dedicated space for custom application development
  • Sitecore Connect – for integrations
  • Marketplace – for sharing and discovering solutions

If you’re already on XM Cloud, your existing implementations transition without breaking changes. That’s genuinely good news—no major refactoring required. The platform adds enhanced governance with enterprise deployment controls without sacrificing the SaaS agility we’ve come to expect. There’s also a dedicated App Studio environment specifically for custom application development.

The entire platform is API-first, with RESTful APIs for all platform functions, including AI agent interaction. The key difference from traditional on-premises complexity is that you get cloud-native scaling with enterprise-grade governance built right in.

Unified Architecture vs. Integration Complexity

The biggest architectural change is having unified content, customer data, personalization, and AI in a single platform. This fundamentally changes how we think about integrations.

Instead of connecting separate CMS, CDP, personalization, and AI tools, everything operates within one data model. Your external system integrations change from multi-platform orchestration to single API framework connections. There are trade-offs here—you gain architectural simplicity but need to evaluate vendor lock-in versus best-of-breed flexibility for your specific requirements.

The Development Paradigm Shift: AI Agents

The most significant change for developers is the introduction of autonomous AI agents as a platform primitive. They’ve gone ahead and built this functionality right into the platform, so we’re not trying to bolt it on as an addon. This feels like it’s going to be big.

What AI Agents Mean for Developers

AI agents operate within the platform to handle marketing workflows autonomously—content generation, A/B testing, personalization optimization. They’re not replacing custom code; they’re handling repeatable marketing tasks.

As developers, our responsibilities shift to designing the underlying data models that agents consume, creating integration patterns for agent-external system interactions, building governance frameworks that define agent operational boundaries, and handling complex customizations that exceed agent capabilities.

Marketers can configure basic agents without developer involvement, but custom data models, security frameworks, and complex integrations still require development expertise. So our role evolves rather than disappears.

New Skillset Requirements

Working with AI agents requires understanding several new concepts. You need to know how to design secure, compliant boundaries for agent operations and governed AI frameworks. You’ll also need to structure data so agents can operate effectively, understand how agents learn and improve from configuration and usage, and know when to use agents versus traditional custom development.

This combines traditional technical architecture with AI workflow design, a new skillset that bridges development and intelligent automation.

Migration Path from XM Cloud

What “Seamless Transition” Actually Means

For XM Cloud customers, the upgrade path is genuinely straightforward. There are no breaking changes.  Existing customizations, integrations, and content work without modification. AI capabilities layer on top of current functionality, and the transition can happen immediately.  When you log in today it’ll all be there waiting for you, no actions needed.

Legacy Platform Migrations

For developers migrating from older Sitecore implementations or other platforms, SitecoreAI provides SitecoreAI Pathway tooling that claims 70% faster migration timelines. The tooling includes automated content conversion with intelligent mapping of existing content structures, schema translation with automated data model conversion and manual review points, and workflow recreation tools to either replicate existing processes or redesign them with AI agent capabilities.

Migration Planning Approach

Based on what I’ve seen, successful migrations follow a clear pattern. Start with an assessment phase to catalog existing customizations, integrations, and workflows. Then make strategy decisions about whether to replicate each component exactly or reimagine it with AI agents. Use a phased implementation that starts with core functionality and gradually add AI-enhanced workflows. Don’t forget team training to educate developers on agent architecture and governance patterns.

The key architectural question becomes: which processes should remain as traditional custom code versus be reimagined as AI agent workflows?

Integration Strategy Considerations

API Framework and Connectivity

SitecoreAI’s unified architecture changes integration patterns significantly. You get native ecosystem integration with direct connectivity to Sitecore XP, Search, CDP, and Personalize without separate integration layers. Third-party integration happens through a single API framework with webhook support for real-time external system connectivity. Authentication is unified across all platform functions.

Data Flow Changes

The unified customer data model affects how you architect integrations. You now have a single customer profile across content, behavior, and AI operations. Real-time data synchronization happens without ETL complexity, and there’s centralized data governance for AI agent operations.

One important note: existing integrations that rely on separate CDP or personalization APIs may need updates to leverage the unified data model.

What This Means for Your Development Team

Immediate Action Items

If you’re currently on XM Cloud, start by documenting your existing custom components for compatibility assessment. Review your integrations to evaluate which external system connections could benefit from unified architecture. Look for repetitive marketing workflows that could be handled by agents.

If you’re planning a migration, use this as an opportunity to modernize rather than just lift-and-shift. Evaluate whether SitecoreAI Pathway’s claimed time savings match your migration complexity. Factor in the learning curve for AI agent architecture when planning team skills development.

Skills to Develop

You’ll want to focus on AI workflow design and understand how to structure processes for agent automation. Learn about building secure, compliant boundaries for autonomous operations. Get comfortable designing for a single customer data model versus traditional integration patterns. Become proficient working in the five-environment Studio model.

Developer’s Bottom Line

For XM Cloud developers, this is evolutionary, not revolutionary. Your existing skills remain relevant while the platform adds AI agent capabilities that reduce routine customization work.

For legacy Sitecore developers, the migration path provides an opportunity to modernize architecture while gaining AI automation capabilities but requires learning cloud-native development patterns.

The strategic shift is clear: development work shifts from building everything custom to designing frameworks where AI agents can operate effectively. You’re architecting for intelligent automation, not just content management.

The platform launched today. For developers, the key question isn’t whether AI will change digital platforms, it’s whether you want to learn agent-based architecture now or catch up later.  The future is here and I’m for it.


Coming Up: I’ll be writing follow-up posts on AI agent development patterns, integration architecture deep dives, and migration playbooks.

Use Cases on AWS AI Services

In today’s AI-activated world, there are ample AI-related tools that organizations can use to tackle diverse business challenges. In line with this, Amazon has its set of Amazon Web Services for AI and ML to address real-world needs.

This blog provides details on AWS services, but it also shows how AI and ML capabilities can be used to address various business challenges. To illustrate how these services can be leveraged, I have used a few simple, straightforward use cases and mapped the AWS solutions to them.

 

AI Use Cases: Using AWS Services

1. Employee Onboarding Process

Any employee onboarding process has its own challenges. It can be improved through better information discovery, shorter onboarding timelines, more flexibility for the new hire, the option to learn and revisit material multiple times, and enhanced security and personalization of the induction experience.

Using natural language queries, the AWS AI service Amazon Kendra enables new hires to easily find HR manuals, IT instructions, leave policies, and company guidelines without needing to know exact file names or bookmark multiple URLs.

Amazon Kendra uses Semantic Search which understands the user’s intent and contextual meaning. Semantic search relies on Vector embeddings, Vector search, Pattern matching and Natural Language Processing.

Real-time data retrieval through Retrieval-augmented Generation (RAG) in Amazon Kendra empowers employees to access up-to-date content securely and efficiently.

Following are examples of a few prompts a new hire can use to retrieve information:

  • How can I access my email on my laptop and on my phone.
  • How do I contact the IT support.
  • How can I apply for a leave and who do I reach out to for approvals.
  • How do I submit my timesheet.
  • Where can I find the company training portal.
  • …and so on.
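As a rough sketch of what such a query looks like programmatically (assuming a Kendra index already exists; the index ID below is a placeholder), the boto3 call is straightforward:

```python
# Minimal sketch: ask Amazon Kendra a natural language question and print the top results.
# The index ID is a placeholder; the index must already contain the HR/IT documents.
import boto3

kendra = boto3.client("kendra")

response = kendra.query(
    IndexId="00000000-0000-0000-0000-000000000000",  # placeholder index ID
    QueryText="How do I submit my timesheet?",
)

for item in response["ResultItems"][:3]:
    title = item.get("DocumentTitle", {}).get("Text", "")
    excerpt = item.get("DocumentExcerpt", {}).get("Text", "")
    print(title, "-", excerpt)
```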

Data Security

To protect organizational data and ensure compliance with enterprise security standards, Amazon Kendra supports robust data security measures, including encryption in transit and at rest, and seamless integration with AWS Identity and Access Management (IAM).

Role-based access ensures that sensitive information is only visible to authorized personnel.

Thus, in the onboarding process, the HR team can provide the personalized touch, and the AI agent ensures employees have easy, anytime access to the right information throughout their onboarding journey.


2. Healthcare: Unlocking Insights from Unstructured Clinical Data

Healthcare providers always need to extract critical patient information and support timely decision-making. They face the challenge of rapidly analyzing vast amounts of unstructured medical records, such as physician notes, discharge summaries, and clinical reports.

From a data perspective, two key capabilities are required: entity recognition and attribute detection. Medical entities include symptoms, medications, diagnoses, and treatment plans. Similarly, attribute detection includes identifying the dosage, frequency, and severity associated with these entities.

Amazon provides Amazon Comprehend Medical, which uses NLP and ML models to extract such information from the unstructured data available to healthcare organizations.
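A minimal boto3 sketch of entity and attribute detection on a snippet of clinical text (the sample sentence is made up for illustration) looks like this:

```python
# Minimal sketch: detect medical entities and their attributes in unstructured clinical text.
import boto3

comprehend_medical = boto3.client("comprehendmedical")

note = "Patient was prescribed Metformin 500 mg twice daily for type 2 diabetes."  # sample text

result = comprehend_medical.detect_entities_v2(Text=note)

for entity in result["Entities"]:
    attributes = [f'{a["Type"]}={a["Text"]}' for a in entity.get("Attributes", [])]
    print(entity["Category"], entity["Type"], entity["Text"], attributes)
```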

One of the crucial aspects in healthcare is handling security and compliance for patients’ health data. AWS offers Amazon Macie, a security service that employs machine learning and pattern matching to discover, classify, and protect Protected Health Information (PHI) within Amazon S3 buckets. Such a service helps organizations maintain HIPAA compliance through automated data governance.

 

3. Enterprise data insights

Any large enterprise has data spread across various tools such as SharePoint, Salesforce, leave management portals, or accounting applications.

From these data sets, executives can extract great insights, evaluate what-if scenarios, check on some key performance indicators, and utilize all this for decision making.

We can use the AWS AI service Amazon Q Business for this very purpose, using various plugins, connectors to databases, and Retrieval-Augmented Generation for up-to-date information.

The user can query the system in natural language, and Amazon Q performs semantic search to return contextually appropriate information. It also uses knowledge grounding, which helps provide accurate answers rather than relying solely on training data.

To ensure that AI-generated responses adhere strictly to approved enterprise protocols and provide accurate, relevant information, we can define built-in guardrails within Amazon Q, such as Global Controls and Topic blocking.

 

4. Retail company use cases

a) Reading receipts and invoices

The company wants to automate the financial auditing process. In order to achieve this, we can use Amazon Textract to read receipts and invoices, as it uses machine learning algorithms to accurately identify and extract key information like product names, prices, and reviews.
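A minimal boto3 sketch using Textract’s expense analysis API on a locally stored receipt image (the file name is a placeholder) might look like this:

```python
# Minimal sketch: extract summary fields (vendor, total, date, etc.) from a receipt or invoice.
import boto3

textract = boto3.client("textract")

with open("receipt.png", "rb") as f:  # placeholder file
    result = textract.analyze_expense(Document={"Bytes": f.read()})

for doc in result["ExpenseDocuments"]:
    for field in doc["SummaryFields"]:
        label = field.get("Type", {}).get("Text", "")
        value = field.get("ValueDetection", {}).get("Text", "")
        print(f"{label}: {value}")
```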

b) Analyse customer purchasing patterns

The company intends to analyse customer purchasing patterns to predict future sales trends from their large datasets of historical sales data. For these analyses the company wants to build, train, and deploy machine learning models quickly and efficiently.

Amazon SageMaker is the ideal service for such a development.

c) Customer support Bot

The firm receives thousands of customer calls daily. To streamline the process, the firm is looking to create a conversational AI bot that can take text inputs and voice commands.

We can use Amazon Bedrock to create a custom AI application from a dataset of ready to use Foundation models. These models can process large volumes of customer data, generate personalized responses and integrate with other AWS services like Amazon SageMaker for additional processing and analytics.

We can use Amazon Lex to create the bot, and Amazon Polly for text-to-speech.
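As a rough illustration of the conversational pieces, here is a minimal boto3 sketch that sends a text utterance to a Lex V2 bot and voices the reply with Polly. The bot ID, alias ID, and locale are placeholders for a bot that would have to exist already.

```python
# Minimal sketch: send a text utterance to a Lex V2 bot and voice the reply with Polly.
# The bot ID, alias ID, and locale are placeholders; the bot must already be built and deployed.
import boto3

lex = boto3.client("lexv2-runtime")
polly = boto3.client("polly")

reply = lex.recognize_text(
    botId="BOT1234567",        # placeholder
    botAliasId="ALIAS12345",   # placeholder
    localeId="en_US",
    sessionId="customer-42",
    text="Where is my order?",
)

answer = " ".join(m.get("content", "") for m in reply.get("messages", []))
print("Bot says:", answer)

if answer:
    # Convert the bot's answer to speech for voice channels.
    audio = polly.synthesize_speech(Text=answer, OutputFormat="mp3", VoiceId="Joanna")
    with open("reply.mp3", "wb") as f:
        f.write(audio["AudioStream"].read())
```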

d) Image analysis

The company might want to identify and categorize its products based on the images uploaded. To implement this, we can use Amazon S3 and Amazon Rekognition to analyze images as soon as a new product image is uploaded to the storage service.
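A minimal boto3 sketch of label detection on an image already uploaded to S3 follows (the bucket and key are placeholders); in a real pipeline this would typically run inside a Lambda function triggered by the S3 upload event:

```python
# Minimal sketch: detect labels for a newly uploaded product image in S3.
# The bucket and object key are placeholders; wire this to an S3 event trigger in practice.
import boto3

rekognition = boto3.client("rekognition")

labels = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "product-images-bucket", "Name": "uploads/shoe-123.jpg"}},
    MaxLabels=10,
    MinConfidence=80,
)["Labels"]

for label in labels:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```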

 

AWS Services for Compliance & Regulations


In order to manage complex customer requirements and handle large volumes of sensitive data, it becomes essential to adhere to various regulations.

Key AWS services supporting these compliance and governance needs include:

  1. AWS Config
    Continuously monitors and records resource configurations to help assess compliance.
  2. AWS Artifact
    Centralized repository for on-demand access to AWS compliance reports and agreements.
  3. AWS CloudTrail
    Logs and tracks all user activity and API calls within your AWS environment for audit purposes.
  4. AWS Inspector
    Automated security assessment service that identifies vulnerabilities and deviations from best practices.
  5. AWS Audit Manager
    Simplifies audit preparation by automating evidence collection and compliance reporting.
  6. AWS Trusted Advisor
    Provides real-time recommendations to optimize security, performance, and cost efficiency.

 

Security and Privacy risks: Vulnerabilities in LLMs


While dealing with LLMs, there are ways to attack the prompts; however, there are also various safeguards against them. Keeping these attacks in view, I am noting down some vulnerabilities that are useful for understanding the risks around your LLMs.

  1. Prompt Injection: User input intended to manipulate the LLM.
  2. Insecure output handling: Un-validated model output.
  3. Training data poisoning: Malicious data introduced into the training set.
  4. Model denial of service: Disrupting availability by exploiting architecture weaknesses.
  5. Supply chain vulnerabilities: Weaknesses in the software, hardware, or services used to build or deploy the model.
  6. Leakage: Leakage of sensitive data.
  7. Insecure plugins: Flaws in model components or plugins.
  8. Excessive autonomy: Giving the model too much autonomy in decision making.
  9. Over-reliance: Relying too heavily on the model’s capabilities.
  10. Model theft: Unauthorized reuse of copies of the model.

 

Can you relate the above use cases to any of your challenges at hand? Have you been able to use any of the AWS services or other AI platforms to deal with such challenges?

References:

https://aws.amazon.com/ai/services/
https://www.udemy.com/share/10bvuD/

Perficient Honored as Organization of the Year for Cloud Computing

Perficient has been named Cloud Computing Organization of the Year by the 2025 Stratus Awards, presented by the Business Intelligence Group. This prestigious recognition celebrates our leadership in cloud innovation and the incredible work of our entire Cloud team.

Now in its 12th year, the Stratus Awards honor the companies, products, and individuals that are reshaping the digital frontier. This year’s winners are leading the way in cloud innovation across AI, cybersecurity, sustainability, scalability, and service delivery — and we’re proud to be among them.

“Cloud computing is the foundation of today’s most disruptive technologies,” said Russ Fordyce, Chief Recognition Officer of the Business Intelligence Group. “The 2025 Stratus Award winners exemplify how cloud innovation can drive competitive advantage, customer success and global impact.”

This award is a direct reflection of the passion, expertise, and dedication of our Cloud team — a group of talented professionals who consistently deliver transformative solutions for our clients. From strategy and migration to integration and acceleration, their work is driving real business outcomes and helping organizations thrive in an AI-forward world.

We’re honored to receive this recognition and remain committed to pushing the boundaries of what’s possible in the cloud with AI.

Read more about our Cloud Practice.

Perficient Wins Silver w3 Award for AI Utility Integration https://blogs.perficient.com/2025/10/24/perficient-awarded-w3-award-for-ai-integration/ https://blogs.perficient.com/2025/10/24/perficient-awarded-w3-award-for-ai-integration/#respond Fri, 24 Oct 2025 15:49:49 +0000 https://blogs.perficient.com/?p=387677

We’re proud to announce that we’ve been honored with a Silver w3 Award in the Emerging Tech Features – AI Utility Integration category for our work with a top 20 U.S. utility provider. This recognition from the Academy of Interactive and Visual Arts (AIVA) celebrates our commitment to delivering cutting-edge, AI-powered solutions that drive real-world impact in the energy and utilities sector.

“Winning this w3 Award speaks to our pragmatism–striking the right balance between automation capabilities and delivering true business outcomes through purposeful AI adoption,” said Mwandama Mutanuka, Managing Director of Perficient’s Intelligent Automation practice. “Our approach focuses on understanding the true cost of ownership, evaluating our clients’ existing automation tech stack, and building solutions with a strong business case to drive impactful transformation.”

Modernizing Operations with AI

The award-winning solution centered on the implementation of a ServiceNow Virtual Agent to streamline internal service desk operations for a major utility provider serving millions of homes and businesses across the United States. Faced with long wait times and a high volume of repetitive service requests, the client sought a solution that would enhance productivity, reduce costs, and improve employee satisfaction.

Our experts delivered a two-phase strategy that began with deploying an out-of-the-box virtual agent capable of handling low-complexity, high-volume requests. We then customized the solution using ServiceNow’s Conversational Interfaces module, tailoring it to the organization’s unique needs through data-driven topic recommendations and user behavior analysis. The result was an intuitive, AI-powered experience that allowed employees and contractors to self-serve common IT requests, freeing up service desk agents to focus on more complex work and significantly improving operational efficiency.

Driving Adoption Through Strategic Change Management

Adoption is the key to unlocking the full value of any technology investment. That’s why our team partnered closely with the client’s corporate communications team to launch a robust change management program. We created a branded identity for the virtual agent, developed engaging training materials, and hosted town halls to build awareness and excitement across the organization. This holistic approach ensured high engagement and a smooth rollout, setting the foundation for long-term success.

Looking Ahead

The w3 Award is a reflection of our continued dedication to innovation, collaboration, and excellence. As we look to the future, we remain committed to helping enterprises across industries harness the full power of AI to transform their operations. Explore the full success story to learn more about how we’re powering productivity with AI, and visit the w3 Awards Winners Gallery to see our recognition among the best in digital innovation.

For more information on how Perficient can help your business with integrated AI services, contact us today.

See Perficient’s Amarender Peddamalku at the Microsoft 365, Power Platform & Copilot Conference https://blogs.perficient.com/2025/10/23/see-perficients-amarender-peddamalku-at-the-microsoft-365-power-platform-copilot-conference/ https://blogs.perficient.com/2025/10/23/see-perficients-amarender-peddamalku-at-the-microsoft-365-power-platform-copilot-conference/#respond Thu, 23 Oct 2025 17:35:19 +0000 https://blogs.perficient.com/?p=388040

As the year wraps up, so does an incredible run of conferences spotlighting the best in Microsoft 365, Power Platform, and Copilot innovation. We’re thrilled to share that Amarender Peddamalku, Microsoft MVP and Practice Lead for Microsoft Modern Work at Perficient, will be speaking at the Microsoft 365, Power Platform & Copilot Conference in Dallas, November 3–7.

Amarender has been a featured speaker at every TechCon365, DataCon, and PWRCon event this year—and Dallas marks the final stop on this year’s tour. If you’ve missed him before, now’s your chance to catch his insights live!

With over 15 years of experience in Microsoft technologies and a deep focus on Power Platform, SharePoint, and employee experience, Amarender brings practical, hands-on expertise to every session. Here’s where you can find him in Dallas:

Workshops & Sessions

  • Power Automate Bootcamp: From Basics to Brilliance
    Mon, Nov 3 | 9:00 AM – 5:00 PM | Room G6
    A full-day, hands-on workshop for Power Automate beginners.

 

  • Power Automate Multi-Stage Approval Workflows
    Tue, Nov 4 | 9:00 AM – 5:00 PM | Room G2
    Wed, Nov 5 | 3:50 PM – 5:00 PM | Room G6
    Learn how to build dynamic, enterprise-ready approval workflows.

 

  • Ask the Experts
    Wed, Nov 5 | 12:50 PM – 2:00 PM | Expo Hall
    Bring your questions and get real-time answers from Amarender and other experts.

 

  • Build External-Facing Websites Using Power Pages
    Thu, Nov 6 | 1:00 PM – 2:10 PM | Room D
    Discover how to create secure, low-code websites with Power Pages.

 

  • Automate Content Processing Using AI & SharePoint Premium
    Thu, Nov 6 | 4:20 PM – 5:30 PM | Room G6
    Explore how AI and SharePoint Premium (formerly Syntex) can transform content into knowledge.

 

Whether you’re just getting started with Power Platform or looking to scale your automation strategy, Amarender’s sessions will leave you inspired and equipped to take action.

Register now!

Datadog Synthetic Monitoring Integration with Azure DevOps Pipeline for Sitecore https://blogs.perficient.com/2025/10/23/datadog-synthetic-monitoring-integration-with-azure-devops-pipeline-for-sitecore/ https://blogs.perficient.com/2025/10/23/datadog-synthetic-monitoring-integration-with-azure-devops-pipeline-for-sitecore/#respond Thu, 23 Oct 2025 15:35:10 +0000 https://blogs.perficient.com/?p=387828

Datadog Synthetic Monitoring provides automated, simulated user journeys to proactively confirm the health and performance of websites and APIs, helping detect issues before users experience them. Integrating this into our Azure DevOps pipeline ensures that only builds where core site functionality is verified get promoted, reducing the risk of regressions in production. This approach is especially valuable in Sitecore projects, where critical web journeys and API endpoints are essential to user experience.

Why Use This Approach?

  • Immediate feedback: Failing releases are blocked before merging, saving post-release firefighting.
  • Coverage: Synthetic tests simulate real browser actions and API calls over real user flows.
  • Reliability: Automated testing delivers consistent, repeatable validation without manual steps.
  • Visibility: Results are unified within both Datadog and Azure DevOps for full traceability.
  • Scalability: As Sitecore projects grow, synthetic tests can be expanded to cover new endpoints and user scenarios without significant pipeline changes.
  • Environment parity: Tests can be run against staging, UAT, or pre-production environments before live rollout for safer releases.

Prerequisites

  • Active Datadog account with Synthetic Monitoring enabled.
  • Datadog API and Application keys created with the appropriate access scope.
  • Azure DevOps project with a working YAML-based CI/CD pipeline.
  • Secure variable storage in Azure DevOps (e.g., Variable Groups, Secret Variables) for credentials.
  • Stable and accessible endpoint URLs for Sitecore environment(s) under test.

High-Level Integration Process

1. Datadog Synthetic Test Creation

  • Create Browser and/or HTTP Synthetic Tests in Datadog tailored for key Sitecore application flows, such as:
    • Homepage load and rendering
    • Login flow and user dashboard navigation
    • Core API calls (search, content retrieval)
    • Critical commerce or form submissions
  • Use relevant tags (e.g., premerge) for search/query filtering by the CI pipeline.
  • Configure assertions to confirm critical elements:
    • Content correctness
    • HTTP status codes
    • Redirect targets
    • Response time SLAs
  • Validate tests in Datadog’s UI with multiple runs before pipeline integration.

[Screenshot: Datadog Synthetics dashboard]

2. Azure DevOps Pipeline Configuration

The Azure DevOps YAML pipeline is set up to invoke Datadog CI, run all tests matching our tag criteria, and fail the pipeline if any test fails; a simplified sketch of this stage follows the step list below.

Key Pipeline Steps

  • Install Datadog CI binary: Downloads and installs the CLI in the build agent.
  • Run Synthetic Tests: Uses the environment variables and search tags to select synthetic tests (e.g., all browser tests tagged premerge) and runs them directly.
  • JUnit Reporting & Artifacts: The CLI output is saved, and a JUnit-formatted result file is generated for Azure DevOps’ Tests UI. All test outputs are attached as build artifacts.
  • Conditional Fast-forward Merge: The pipeline proceeds to a gated merge to release/production only if all synthetics pass.
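
To make these steps concrete, here is a minimal sketch of the synthetics stage, assuming the tests are tagged premerge and the Datadog keys are stored as secret pipeline variables named DATADOG_API_KEY and DATADOG_APP_KEY. Stage, job, and path names are illustrative, and the datadog-ci flag names should be verified against the CLI version you install.

# Minimal sketch of the Premerge_Datadog_Synthetics stage (names and paths are illustrative)
stages:
- stage: Premerge_Datadog_Synthetics
  displayName: Run Datadog synthetic tests before merge
  jobs:
  - job: RunSynthetics
    pool:
      vmImage: ubuntu-latest
    steps:
    # Install the Datadog CI binary on the build agent
    - script: npm install -g @datadog/datadog-ci
      displayName: Install datadog-ci

    # Run every synthetic test tagged 'premerge'; a non-zero exit code fails this stage
    - script: |
        set -eo pipefail
        datadog-ci synthetics run-tests \
          --search 'tag:premerge' \
          --jUnitReport "$(Build.ArtifactStagingDirectory)/synthetics-junit.xml" \
          | tee "$(Build.ArtifactStagingDirectory)/synthetics-cli.log"
      displayName: Run Datadog synthetic tests
      env:
        DATADOG_API_KEY: $(DATADOG_API_KEY)   # secret pipeline variable
        DATADOG_APP_KEY: $(DATADOG_APP_KEY)   # secret pipeline variable
        DATADOG_SITE: datadoghq.com

    # Publish the JUnit file so results appear in the Azure DevOps Tests tab, even on failure
    - task: PublishTestResults@2
      condition: succeededOrFailed()
      inputs:
        testResultsFormat: JUnit
        testResultsFiles: $(Build.ArtifactStagingDirectory)/synthetics-junit.xml
        testRunTitle: Datadog Synthetics (premerge)

    # Attach the raw CLI log and JUnit file as build artifacts for later review
    - publish: $(Build.ArtifactStagingDirectory)
      artifact: datadog-synthetics
      condition: succeededOrFailed()

Because the tests are picked up by a tag-based search, new Sitecore synthetic tests tagged premerge are included automatically without changing the pipeline definition.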

How Results and Flow Work

When All Tests Pass

  • The pipeline completes the Premerge_Datadog_Synthetics stage successfully.
  • Test summaries (JUnit) and CLI outputs are attached to the pipeline run.
  • Approval-gated merge to the Release branch is unblocked; approvers can verify test results before promotion.

Build artifacts include full logs for further review.

[Screenshot: pipeline run with all Datadog synthetic tests passed]

When Any Test Fails

  • If any synthetic (browser/API) test fails, the CLI exits with a non-zero exit code.
  • The JUnit summary will contain failure info and a link to the log details.
  • The pipeline stage fails (Premerge_Datadog_Synthetics), halting the fast-forward merge.
  • Approvers can review the failure in test results and attached artifacts within Azure DevOps.

Only successful resolution and green reruns allow code promotion.

[Screenshot: pipeline run failed at the Premerge_Datadog_Synthetics stage]
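
To sketch the gate itself, the following stage runs only when the synthetics stage succeeds and performs the fast-forward merge. The stage name, the release branch, and the Azure DevOps environment named release (which carries the manual approval check) are assumptions to adapt to your project.

# Minimal sketch of the approval-gated fast-forward merge
# (the Premerge_Datadog_Synthetics stage from the earlier sketch comes first in this stages list)
stages:
- stage: Fast_Forward_Merge
  displayName: Promote to release
  dependsOn: Premerge_Datadog_Synthetics
  condition: succeeded()             # skipped entirely when any synthetic test fails
  jobs:
  - deployment: MergeToRelease
    environment: release             # manual approval is configured on this environment
    pool:
      vmImage: ubuntu-latest
    strategy:
      runOnce:
        deploy:
          steps:
          - checkout: self
            persistCredentials: true   # keep the OAuth token available for git push
          - script: |
              set -e
              git fetch origin
              git checkout release
              # Fast-forward only: the merge fails if release has diverged
              git merge --ff-only origin/$(Build.SourceBranchName)
              git push origin release
            displayName: Fast-forward merge into release

In addition to the environment approval, the project's build service identity needs Contribute permission on the release branch for the push to succeed.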

Best Practices for Datadog Synthetic Monitoring

  • Run tests in parallel to reduce wait times.
  • Use separate synthetic tests per microservice or major Sitecore area to isolate failures.
  • Monitor test trends in Datadog to detect gradual performance regression over time.
  • Limit sensitive data in synthetic flows by avoiding the storage of actual credentials.
  • Schedule periodic synthetic runs outside CI/CD to catch environment fluctuations unrelated to code changes.

Security Considerations

  • Store Datadog API and Application keys as secret variables in Azure DevOps (see the sketch after this list).
  • Restrict permissions for managing synthetic tests to trusted CI/CD administrators.
  • Avoid embedding credentials or sensitive payloads in test scripts.
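
As an illustration of the first point, the pipeline can pull the keys from a variable group that stores them as secret variables. The group name datadog-synthetics below is an assumption; note that Azure DevOps never exposes secret variables to scripts automatically, so they must be mapped into the environment on each step that needs them.

# Minimal sketch: referencing Datadog keys stored as secrets in a variable group
variables:
- group: datadog-synthetics          # assumed group name; holds DATADOG_API_KEY / DATADOG_APP_KEY as secrets

steps:
- script: datadog-ci synthetics run-tests --search 'tag:premerge'
  displayName: Run synthetics with explicitly mapped secrets
  env:
    DATADOG_API_KEY: $(DATADOG_API_KEY)   # secrets are only visible to the script when mapped per step
    DATADOG_APP_KEY: $(DATADOG_APP_KEY)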

Conclusion

By integrating Datadog Synthetic Monitoring directly into our Azure DevOps CI/CD pipeline, Sitecore teams gain a safety net that blocks faulty builds before they reach production while keeping a detailed audit trail. Combined with careful test design, secure key management, and continuous expansion of coverage, this approach becomes a cornerstone of proactive web application quality assurance.

 
