Cloud Articles / Blogs / Perficient: Expert Digital Insights
https://blogs.perficient.com/category/services/platforms-and-technology/cloud/

Bruno: The Developer-Friendly Alternative to Postman (Fri, 02 Jan 2026)
https://blogs.perficient.com/2026/01/02/bruno-the-developer-friendly-alternative-to-postman/

If you’re knee-deep in building apps, you already know APIs are the backbone of everything. Testing them? That’s where the real magic happens. For years, we’ve relied on tools like Postman and Insomnia to send requests, debug issues, and keep things running smoothly. But lately, there’s a buzz about something new: Bruno. It’s popping up everywhere, and developers are starting to make the switch. Why? Let’s dive in.

What Exactly is Bruno?

Picture this: an open-source, high-performance API client that puts your privacy first. Bruno isn’t some bloated app that shoves your stuff into the cloud; it keeps everything right on your local machine. Your API collections, requests, all of it? Safe and sound where you control it, no cloud drama required.

Bruno is built for developers who want:

  • Simplicity without compromise
  • High performance without unnecessary extras
  • Complete freedom with open-source flexibility

It’s like the minimalist toolbox you’ve been waiting for.

Why is Bruno Suddenly Everywhere?

Bruno solves the pain points that frustrate us with other API tools:

  • Privacy First: No forced cloud uploads, your collections stay local. No hidden syncing; your data stays completely under your control.
  • Fast and Lightweight: Loads quickly and handles requests without lag. Perfect for quick tests on the go.
  • Open-Source Freedom: No fees, no lock-in. Collections are Git-friendly and saved as plain text for easy version control.
  • No Extra Bloat: Focused on what matters, API testing without unnecessary features.

Bottom line: Bruno fits the way we work today, collaboratively, securely, and efficiently. It’s not trying to do everything; it’s just good at API testing.

Key Features

Bruno keeps it real with features that matter. Here are the highlights:

  1. Totally Open-Source

  • No sneaky costs or paywalls.
  • Peek under the hood anytime—the code’s all there.
  • A group of developers contributes on GitHub, making it better every day. Wanna join? Hit up the repo and contribute.
  2. Privacy from the Ground Up

  • Everything lives locally.
  • No accounts, no cloud pushes—your requests don’t leave your laptop.
  • Ideal if you’re handling sensitive APIs and don’t want Big Tool Company snooping.
  • Bonus: Those plain-text files integrate well with Git, so team handoffs are seamless.
  3. Light as a Feather, Fast as Lightning

  • Clean UI, no extra bells and whistles slowing you down.
  • Starts up quickly and zips through responses.
  • Great for solo endpoint tweaks or managing large workflows without slowing your machine down.

Getting Bruno Up and Running

Installing Bruno is simple. It works on Windows, macOS, and Linux. Just choose your platform, and you’re good to go.

Quick Install Guide

Windows

  1. Head to Bruno’s GitHub Releases page.
  2. Grab the latest .exe file.
  3. Run it and follow the prompts.
  4. Boom—find it in your Start Menu.

macOS

  1. Download the .dmg from Releases.
  2. Drag it to Applications.
  3. Fire it up and get testing.

Linux

  1. Snag the .AppImage or .deb from Releases.
  2. For AppImage: chmod +x Bruno.AppImage then ./Bruno.AppImage.
  3. For .deb: sudo dpkg -i bruno.deb and sudo apt-get install -f.

GUI or CLI? Your Call

  • GUI: Feels like Postman but cleaner. Build requests visually on the fly.
  • CLI: For the terminal lovers. Automate tests, integrate with CI/CD, or run whole collections straight from the terminal (see the sketch below).
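
For CI/CD use, the CLI ships as an npm package. Here is a minimal sketch; the package name (@usebruno/cli), the bru command, and the environment name are assumptions based on current documentation, so check Bruno's docs if your version differs:

# Install the CLI (assumed npm package name)
npm install -g @usebruno/cli

# From inside a collection folder, run every request against the "dev" environment
bru run --env dev

# Or run a single request file
bru run get-posts.bru --env dev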

Build Your First Collection in Minutes

Bruno makes organizing APIs feel effortless. Here’s a no-sweat walkthrough.

Step 1: Fire It Up

Launch Bruno. You’ll see a simple welcome screen prompting you to create a new collection.

Step 2: New Collection Time

  1. Hit “New Collection.”
  2. Name it (say, “My API Playground”).
  3. Pick a folder—it’s all plain text, so Git loves it.

Step 3: Add a Request

  1. Inside the collection, click “New Request.”
  2. Pick your method (GET, POST, etc.).
  3. Enter the URL: https://jsonplaceholder.typicode.com/posts.

Step 4: Headers and Body Magic

  • Add the header: Content-Type: application/json.
  • For POSTs, add a body like:

JSON

{
  "title": "Bruno Blog",
  "body": "Testing Bruno API Client",
  "userId": 1
}

Step 5: Hit Send

Click it, and watch the response pop: status, timing, pretty JSON—all right there.

Step 6: Save and Sort

Save the request, create folders for environments or APIs, and use variables to switch setups.

Bruno vs. Postman: Head-to-Head

Postman’s the OG, but Bruno’s the scrappy challenger winning hearts. Let’s compare.

  1. Speed

  • Bruno: Lean and mean—quick loads, light on resources.
  • Postman: Packed with features, but it can feel sluggish on big projects.
  Edge: Bruno

  2. Privacy

  • Bruno: Local only, no cloud creep.
  • Postman: Syncs to their servers—handy for teams, sketchy for secrets.
  Edge: Bruno

  3. Price Tag

  • Bruno: Free forever, open-source vibes.
  • Postman: Free basics, but teams and extras? Pay up.
  Edge: Bruno

 

Feature        Bruno             Postman
Open Source    ✅ Yes            ❌ No
Cloud Sync     ❌ No             ✅ Yes
Performance    ✅ Lightweight    ❌ Heavy
Privacy        ✅ Local Storage  ❌ Cloud-Based
Cost           ✅ Free           ❌ Paid Plans

Level up With Advanced Tricks

Environment Variables

Swap envs easy-peasy:

  • Make files for dev/staging/prod.
  • Use {{baseUrl}} in requests.
  • Example:

{
  "baseUrl": "https://api.dev.example.com",
  "token": "your-dev-token"
}
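
Once an environment file like the one above is selected, reference its values anywhere in a request. For example, a GET request can point at:

{{baseUrl}}/posts

with a header such as:

Authorization: Bearer {{token}}

Bruno substitutes the values from whichever environment (dev, staging, prod) is currently active.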

 

Scripting Smarts

Add pre/post scripts for:

  • Dynamic auth: request.headers["Authorization"] = "Bearer " + env.token;
  • Response checks or automations.

Community & Contribution

It’s community-driven: development happens in the open on GitHub, where anyone can report issues, suggest features, and contribute code.

Conclusion

Bruno isn’t just another API testing tool; it’s designed for developers who want simplicity and control. With local-first privacy, fast performance, open-source flexibility, and built-in Git support, Bruno delivers everything you need without unnecessary complexity.
If you’re tired of heavy, cloud-based clients, it’s time to switch. Download Bruno today and experience the difference.

 

GitLab to GitHub Migration (Mon, 29 Dec 2025)
https://blogs.perficient.com/2025/12/29/gitlab-to-github-migration/

1. Why Modern Teams Choose GitHub

Migrating from GitLab to GitHub represents a strategic shift for many engineering teams. Organizations often move to leverage GitHub’s massive open-source community and superior third-party tool integrations. Moreover, GitHub Actions provides a powerful, modern ecosystem for automating complex developer workflows. Ultimately, this transition simplifies standardization across multiple teams while improving overall project visibility.

2. Prepare Your Migration Strategy

A successful transition requires more than just moving code. You must account for users, CI/CD pipelines, secrets, and governance to avoid data loss. Consequently, a comprehensive plan should cover phases such as:

  • Repository and Metadata Transfer

  • User Access Mapping

  • CI/CD Pipeline Conversion

  • Security and Secret Management

  • Validation and Final Cutover

3. Execute the Repository Transfer

The first step involves migrating your source code, including branches, tags, and full commit history.

  • Choose the Right Migration Tool

For straightforward transfers, the GitHub Importer works well. However, if you manage a large organization, the GitHub Enterprise Importer offers better scale. For maximum control, technical teams often prefer the Git CLI.

Command Line Instructions:

git clone --mirror gitlab_repo_url
cd repo.git
git push --mirror github_repo_url

Manage Large Files and History:

During this phase, audit your repository for large binary files. Specifically, you should use Git LFS (Large File Storage) for any assets that exceed GitHub’s standard limits.
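
As a minimal sketch of that clean-up (the file patterns are just examples; track whatever large asset types your audit turns up):

git lfs install
git lfs track "*.zip"
git lfs track "*.mp4"
git add .gitattributes
git commit -m "Track large binaries with Git LFS"

Note that files already committed stay in history unless you rewrite it with a tool such as git filter-repo before the final push.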

4. Map Users and Recreate Secrets

GitLab and GitHub use distinct identity systems, so you cannot automatically migrate user accounts. Instead, you must map GitLab user emails to GitHub accounts and manually invite them to your new organization.

Secure Your Variables and Secrets:

For security reasons, GitLab prevents the export of secrets. Therefore, you must recreate them in GitHub using the following hierarchy:

  • Repository Secrets: Use these for project-level variables.

  • Organization Secrets: Use these for shared variables across multiple repos.

  • Environment Secrets: Use these to protect variables in specific deployment stages.

5. Migrating Variables and Secrets

Securing your environment requires a clear strategy for moving CI/CD variables and secrets. Specifically, GitLab project variables should move to GitHub Repository Secrets, while group variables should be placed in Organization Secrets. Notably, secrets must be recreated manually or via the GitHub API because they cannot be exported from GitLab for security reasons.
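
If you script the recreation, the GitHub CLI is one option. A rough sketch, assuming the gh CLI is installed and authenticated (secret names and the org/repo values are placeholders):

gh secret set DB_PASSWORD --repo my-org/my-repo --body "example-value"
gh secret set SHARED_NPM_TOKEN --org my-org --visibility all --body "example-value"

Environment-scoped secrets can be set the same way with the --env flag, or through the repository's Settings UI.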

6. Convert GitLab CI to GitHub Actions

Translating your CI/CD pipelines often represents the most challenging part of the migration. While GitLab uses a single .gitlab-ci.yml file, GitHub Actions uses separate workflow files in the .github/workflows/ directory.

Syntax and Workflow Changes:

When converting, map your GitLab “stages” into GitHub “jobs”. Moreover, replace custom GitLab scripts with pre-built actions from the GitHub Marketplace to save time. Finally, ensure your new GitHub runners have the same permissions as your old GitLab runners.
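
As a small illustration of that mapping (job names, the build script, and the runner image are placeholders), a GitLab stage and job in .gitlab-ci.yml:

stages: [build]
build-job:
  stage: build
  script:
    - ./build.sh

becomes a workflow file such as .github/workflows/build.yml:

name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh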

7. Finalize the Metadata and Cutover

Metadata like Issues, Pull Requests (Merge Requests in GitLab), and Wikis require special handling because Git itself does not track them.

The Pre-Cutover Checklist:

Before the official switch, verify the following:

  1. Freeze all GitLab repositories to stop new pushes.

  2. Perform a final sync of code and metadata.

  3. Update webhooks for tools like Slack, Jira, or Jenkins.

  4. Verify that all CI/CD pipelines run successfully.

8. Post-Migration Best Practices

After completing the cutover, archive your old GitLab repositories to prevent accidental updates. Furthermore, enable GitHub’s built-in security features like Dependabot and Secret Scanning to protect your new environment. Finally, provide training sessions to help your team master the new GitHub-centric workflow.


9. Final Cutover and Post-Migration Best Practices

Ultimately, once all repositories are validated and secrets are verified, you can execute the final cutover. Specifically, you should freeze your GitLab repositories and perform a final sync before switching your DNS and webhooks. Finally, once the move is complete, remember to archive your old GitLab repositories and enable advanced security features like Dependabot and secret scanning.

10. Summary and Final Thoughts

In conclusion, a GitLab to GitHub migration is a significant but rewarding effort. By following a structured plan that includes proper validation and team training, organizations can achieve a smooth transition. Therefore, with the right tooling and preparation, you can successfully improve developer productivity and cross-team collaboration.

Unifying Hybrid and Multi-Cloud Environments with Azure Arc (Mon, 22 Dec 2025)
https://blogs.perficient.com/2025/12/22/unifying-hybrid-and-multi-cloud-environments-with-azure-arc/

1. Introduction to Modern Cloud Architecture

In today’s world, architects often spread their compute resources—such as virtual machines and Kubernetes clusters—across multiple clouds and on-premises environments. Specifically, they do this to achieve the best possible resilience through high availability and disaster recovery. Moreover, this approach allows for better cost efficiency and stronger security.

2. The Challenge of Management Complexity

However, this distributed strategy brings additional challenges. Specifically, it increases the complexity of maintaining and managing resources from different consoles, such as Azure, AWS, and Google portals. Consequently, even for basic operations like restarts or updates, administrators often struggle with multiple disparate portals. As a result, basic administration tasks become too complex and cumbersome.

3. How Azure Arc Provides a Solution

Azure Arc solves this problem by providing a single “pane of glass” to manage and monitor servers regardless of their location. In addition, it simplifies governance by delivering a consistent management platform for both multi-cloud and on-premises resources. Specifically, it provides a centralized way to project existing non-Azure resources directly into Azure Resource Manager (ARM).

4. Understanding Key Capabilities

Currently, Azure Arc allows you to manage several resource types outside of Azure. For instance, it supports servers, Kubernetes clusters, and databases. Furthermore, it offers several specific functionalities:

  • Azure Arc-enabled Servers: Connects physical or virtual Windows and Linux servers to Azure for centralized visibility.

  • Azure Arc-enabled Kubernetes: Additionally, you can onboard any CNCF-conformant Kubernetes cluster to enable GitOps-based management.

  • Azure Arc-enabled SQL Server: This brings external SQL Server instances under Azure governance for advanced security.

5. Architectural Implementation Details

The Azure Arc architecture revolves primarily around Azure Resource Manager. When a resource is onboarded, it receives a unique resource ID and becomes part of Azure’s management plane. Each connected machine runs a local agent that communicates with Azure to receive policies and upload logs.

6. The Role of the Connected Machine Agent

The agent package contains several logical components bundled together. For instance, the Hybrid Instance Metadata service (HIMDS) manages the connection and the machine’s Azure identity. Moreover, the guest configuration agent assesses whether the machine complies with required policies. In addition, the Extension agent manages VM extensions, including their installation and upgrades.

7. Onboarding and Deployment Methods

Onboarding machines can be accomplished using different methods depending on your scale. For example, you might use interactive scripts for small deployments or service principals for large-scale automation. Specifically, the following options are available:

  • Interactive Deployment: Manually install the agent on a few machines.

  • At-Scale Deployment: Alternatively, connect machines using a service principal (see the sketch after this list).

  • Automated Tooling: Furthermore, you can utilize Group Policy for Windows machines.
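
As an example of the at-scale option, here is a rough sketch of the Connected Machine agent's connect command run on a target server. All IDs are placeholders, and exact flag names can vary by agent version, so prefer the onboarding script the Azure portal generates for you:

azcmagent connect \
  --service-principal-id "<app-id>" \
  --service-principal-secret "<secret>" \
  --tenant-id "<tenant-id>" \
  --subscription-id "<subscription-id>" \
  --resource-group "arc-servers-rg" \
  --location "eastus"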

8. Strategic Benefits for Governance

Ultimately, Azure Arc provides numerous strategic benefits for modern enterprises. Specifically, organizations can leverage the following:

  • Governance and Compliance: Apply Azure Policy to ensure consistent configurations across all environments.

  • Enhanced Security: Moreover, use Defender for Cloud to detect threats and integrate vulnerability assessments.

  • DevOps Efficiency: Enable GitOps-based deployments for Kubernetes clusters.

9. Important Limitations to Consider

However, there are a few limitations to keep in mind before starting your deployment. First, continuous internet connectivity is required for full functionality. Secondly, some features may not be available for all operating systems. Finally, there are cost implications based on the data services and monitoring tools used.

10. Conclusion and Summary

In conclusion, Azure Arc empowers organizations to standardize and simplify operations across heterogeneous environments. Whether you are managing legacy infrastructure or edge devices, it brings everything under one governance model. Therefore, if you are looking to improve control and agility, Azure Arc is a tool worth exploring.

How to Secure Applications During Modernization on AWS (Fri, 19 Dec 2025)
https://blogs.perficient.com/2025/12/19/how-to-secure-applications-during-modernization-on-aws/

Why Do We Need to Secure Our Applications?  

Cloud environments are very dynamic and interconnected. A single misconfiguration or exposed API key can lead to:  

  • Data breaches 
  • Compliance violations 
  • Costly downtime 

Attackers often target application-level weaknesses, not just infrastructure gaps. If any application handles sensitive data, financial transactions, or user credentials, security is critical. 

Common Mistakes Made When Building Applications

  • Hardcoding API keys and credentials 
  • Ignoring dependency vulnerabilities 
  • Skipping encryption/decryption for sensitive data 

Essential Security Best Practices

1. Identity and Access Management (IAM)

  • Create dedicated IAM roles for your Lambda functions, EC2 instances, or ECS tasks instead of hardcoding access keys in your application (see the sketch after this list).
  • Regularly review who has which permissions using IAM Access Analyzer.
  • Avoid using the root account for day-to-day operations; developers should work through scoped IAM roles or users instead.
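
A minimal sketch of the role-based approach (role and policy names are placeholders; adjust the trust policy to the service you actually use). First, a trust policy, trust.json, that lets Lambda assume the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}

Then create the role and attach a managed policy:

aws iam create-role --role-name my-lambda-role --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name my-lambda-role --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole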

(Screenshots: creating the IAM role in the AWS console.)

2. Don’t Store/Share Secrets in Your Code

Your appsettings.json is not the right place for secrets such as API keys or database passwords.

  • Use AWS Secrets Manager or Parameter Store to keep secrets safe.
  • Fetch secrets at runtime using the AWS SDK for .NET or the AWSSDK.Extensions.NETCore.Setup configuration provider (a CLI sketch follows this list).
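
A rough CLI sketch of that flow; the secret name and value are placeholders, and in the application you would read the same secret through the AWS SDK at startup:

# Store the secret once
aws secretsmanager create-secret --name prod/my-app/db-password --secret-string "example-value"

# Retrieve it at runtime or in a pipeline
aws secretsmanager get-secret-value --secret-id prod/my-app/db-password --query SecretString --output text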

(Screenshots: creating the secret in Secrets Manager and reading it from the application.)

3. Always Encrypt Data 

Encrypting sensitive data, both in transit and at rest, is one of the most important best practices:

  • Enable HTTPS by default for all your endpoints.  
  • Use AWS Certificate Manager (ACM) to issue and manage SSL/TLS certificates. 
  • In your application, make sure that all traffic is redirected to HTTPS by adding app.UseHttpsRedirection(); 
  • Use AWS KMS to encrypt your S3 buckets, RDS databases, and EBS volumes.
  • If you’re using SQL Server on RDS, enable Transparent Data Encryption (TDE). 

 Encrypt & Decrypt API Key with KMS 

(Screenshots: encrypting and decrypting the API key with KMS.)
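
A rough CLI sketch of encrypting and decrypting an API key with a customer managed key (the key alias and file names are placeholders):

# Encrypt: returns base64-encoded ciphertext
aws kms encrypt \
  --key-id alias/my-app-key \
  --plaintext fileb://api-key.txt \
  --query CiphertextBlob --output text > api-key.enc.b64

# Decrypt: decode the stored ciphertext, then decode the returned base64 plaintext
base64 --decode api-key.enc.b64 > api-key.enc
aws kms decrypt \
  --ciphertext-blob fileb://api-key.enc \
  --query Plaintext --output text | base64 --decode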

4. Build a Secure Network Foundation

  • Use VPCs with private subnets for backend services.
  • Control traffic with Security Groups and Network ACLs (see the sketch after this list).
  • Use VPC Endpoints to keep traffic within AWS’s private network.
  • Use AWS WAF to protect your APIs, and enable AWS Shield to guard against DDoS attacks.
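
For example, a narrowly scoped ingress rule can be added from the CLI (the security group ID and CIDR range are placeholders):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 203.0.113.0/24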

(Screenshots: security group and VPC configuration.)

5. Keep Your Code and Dependencies Clean

Even the best infrastructure can’t save a vulnerable codebase. 

  • Update your .NET SDK and NuGet packages regularly. 
  • Use Amazon Inspector for runtime and AWS environment security, and tools like Dependabot for development-time dependency security, to find vulnerabilities early.
  • Add static code analysis tools (like SonarQube) to your CI/CD pipeline.

(Screenshot: Amazon Inspector findings.)

6. Log Everything and Watch

  • Enable Amazon CloudWatch for centralized logging and use AWS X-Ray to trace requests through the application.
  • Turn on CloudTrail to track every API call across your account.
  • Enable GuardDuty for continuous threat detection (a CLI sketch follows this list).
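
A rough sketch of turning on the account-level pieces from the CLI; the trail name and bucket are placeholders, and CloudWatch and X-Ray are typically enabled per application or through the SDK:

aws cloudtrail create-trail --name org-audit-trail --s3-bucket-name my-cloudtrail-logs
aws cloudtrail start-logging --name org-audit-trail
aws guardduty create-detector --enable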

 

Deploy Microservices on AKS Using GitHub Actions (Thu, 18 Dec 2025)
https://blogs.perficient.com/2025/12/17/deploy-microservices-on-aks-using-github-actions/

Deploying microservices in a cloud-native environment requires an efficient container orchestration platform and an automated CI/CD pipeline. Azure Kubernetes Service (AKS) is Azure’s managed Kubernetes offering, and GitHub Actions makes it easy to automate your CI/CD processes directly from the source code repository.


Why Use GitHub Actions with AKS

Using GitHub Actions for AKS deployments provides:

  • Automated and consistent deployments
  • Faster release cycles
  • Reduced manual intervention
  • Easy Integration with GitHub repositories
  • Better visibility into build and deployment status

Architecture Overview

The deployment workflow follows a CI/CD approach:

  • Microservices packaged as Docker images
  • Images pushed to ACR
  • AKS pulls the image from ACR
  • GitHub Actions automates:
      • Build & Push Docker Images
      • Deploy manifests to AKS


Prerequisites

Before proceeding with the implementation, ensure the following prerequisites are in place:

  • An Azure subscription
  • Azure CLI (az) installed and authenticated
  • An existing Azure Kubernetes Service (AKS) cluster
  • kubectl installed and configured for your cluster
  • Azure Container Registry (ACR) associated with the AKS cluster
  • GitHub repository with the microservices code

Repository Structure

Each microservice is maintained in a separate repository with the following structure:  .github/workflows/name.yml

CI/CD Pipeline Stages Overview

  • Source Code Checkout
  • Build Docker Images
  • Push images to ACR
  • Authenticate to AKS
  • Deploy Microservices using kubectl

Configure GitHub Secrets

Go to your GitHub repository, then Settings > Secrets and variables > Actions.

Add the following secrets:

  • ACR_LOGIN_SERVER
  • ACR_USERNAME
  • ACR_PASSWORD
  • KUBECONFIG

Stage 1: Source Code Checkout

The pipeline starts by pulling the latest code from the GitHub repository.

Stage 2: Build Docker Images

For each microservice:

  • A Docker image is built
  • A unique tag (commit ID and version) is assigned

Images are prepared for deployment

Stage 3: Push Images to Azure Container Registry

Once the images are built:

  • GitHub Actions authenticates to ACR
  • Images are pushed securely to the registry
  • AKS then pulls the images directly from ACR during deployment

Stage 4: Authenticate to AKS

GitHub Actions connects to the AKS cluster using kubeconfig

Stage 5: Deploy Microservices to AKS

In this stage:

  • Kubernetes manifests are applied
  • Services are exposed via the Load Balancer
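
Putting the stages together, a minimal workflow sketch might look like the following. The service name "orders-service", the Dockerfile location, and the deployment/container names are assumptions, and the sketch assumes the KUBECONFIG secret holds the raw kubeconfig contents:

name: deploy-orders-service
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to ACR
        run: echo "${{ secrets.ACR_PASSWORD }}" | docker login "${{ secrets.ACR_LOGIN_SERVER }}" -u "${{ secrets.ACR_USERNAME }}" --password-stdin
      - name: Build and push image
        run: |
          IMAGE="${{ secrets.ACR_LOGIN_SERVER }}/orders-service:${{ github.sha }}"
          docker build -t "$IMAGE" .
          docker push "$IMAGE"
      - name: Deploy to AKS
        run: |
          mkdir -p ~/.kube
          echo "${{ secrets.KUBECONFIG }}" > ~/.kube/config
          kubectl set image deployment/orders-service orders-service="${{ secrets.ACR_LOGIN_SERVER }}/orders-service:${{ github.sha }}"
          kubectl rollout status deployment/orders-service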

Deployment Validation

After deployment:

  • Pods are verified to be in a running state
  • Check the service for external access

Best Practices

To make the pipeline production-ready:

  • Use commit-based image tagging
  • Separate environments (dev, stage, prod)
  • Use namespaces in AKS
  • Store secrets securely using GitHub Secrets

Common Challenges and Solutions

  • Image pull failures: Verify ACR permission
  • Pipeline authentication errors: Validate Azure credentials
  • Pod crashes: Review container logs and resource limits

Benefits of CI/CD with AKS and GitHub Actions

  • Faster deployments
  • Improved reliability
  • Scalable microservices architecture
  • Better developer productivity
  • Reduced operational overhead

Conclusion

Deploying microservices on AKS using GitHub Actions provides a robust, scalable, and automated CI/CD solution. By integrating container builds, registry management, and Kubernetes deployments into a single pipeline, teams can deliver applications faster and more reliably.

CI/CD is not just about automation – it’s about confidence, consistency, and continuous improvement.

 

Monitoring and Logging in Sitecore AI (Mon, 24 Nov 2025)
https://blogs.perficient.com/2025/11/24/monitoring-and-logging-in-sitecore-ai/

Why Observability Matters More Than Ever

Moving from traditional Sitecore deployments to Sitecore AI means the infrastructure is abstracted away. That’s fantastic for agility, but it also changes how we troubleshoot. You can’t RDP onto a server and tail a log file anymore; your lifeline is observability: clear signals from logs, metrics, and governed automation that tell you what’s happening across the platform and the front‑end.

What’s Different in Sitecore AI?

Logs and diagnostics are centralized. You access them via the Sitecore AI portal and the Sitecore CLI, and they’re organized by environment and by role. Your front‑end application or rendering host (often a Next.js site deployed on Vercel, responsible for headless rendering and the user experience) has its own telemetry, separate from the CMS.

So, your monitoring picture spans three surfaces: Sitecore AI logs for CMS and deployment activity, rendering host telemetry for front‑end performance, and Experience Edge signals for content delivery. Together, they describe the health of the experience, not just the servers.

 

Understanding the Logging Surfaces

In Sitecore AI, logs are grouped into three primary areas that each play a distinct role in diagnosing issues:

Content Management (CM) logs

  • These are your first stop for diagnosing publishing failures, broken workflows, template errors, and serialization mismatches. When a publish fails, CM logs help you separate permissions or workflow problems from data or serialization issues.

Rendering Host logs

  • Think front‑end behavior and performance. If personalization falls back, pages render slowly, or API responses seem sluggish, the rendering host logs surface cache misses, API latency, and rendering errors that directly impact Core Web Vitals and UX.

Deployment logs

  • The “narrative” of your CI/CD run. When a build fails or a promotion doesn’t complete, deployment logs pinpoint CLI command failures, artifact mismatches, or environment configuration issues. They also provide stage-by-stage visibility (provisioning, build, deploy, post‑actions), which speeds triage and supports audits.

Access these logs quickly in the Deploy app’s environment view or programmatically via the Sitecore CLI for listing, viewing, and downloading logs as part of your pipeline artifacts.

Integration Patterns for Enterprise Monitoring

Centralizing is helpful; correlating is essential. The pragmatic pattern I recommend is:

Sitecore AI → Azure Monitor/Application Insights

  • Forward CMS and deployment logs so you can correlate spikes in errors with deployments, content bursts, or traffic changes. KQL lets you slice by environment, role, and severity for root cause analysis.

Rendering Host → APM (Datadog/New Relic)

  • Use front‑end analytics to track TTFB, cache hit ratio, route errors, and API dependency health. Pair this with Vercel’s own analytics for global edge performance.

Experience Edge → Webhook Monitoring

  • Register webhooks so you can track publish‑to‑Edge latency and trigger alerts or redeploys when content propagation slows or fails.

SIEM Integration (today’s reality)

  • For unified audit across Sitecore SaaS, stream supported Common Audit Logs (CAL) via webhooks (Personalize/CDP/Connect) and, for Sitecore AI, pull environment and deployment logs via CLI on a schedule until broader CAL coverage lands.

Metrics That Matter

In a SaaS world, traditional “server up” checks don’t describe user experience. Focus on metrics that map directly to reliability and business impact:

Deployment success & promotion health

  • Failed builds or promotions block content and features. Tracking rates and mean time to recovery reveals pipeline reliability.

Publish‑to‑Edge latency

  • Authors expect content to reach Experience Edge quickly. Latency here affects real‑time campaigns, previews, and editorial confidence.

Rendering host performance

  • P95/P99 TTFB, cache hit ratio, and error rates impact Core Web Vitals, SEO, and conversion. They also help you spot regressions after releases.

Agent activity & governance

  • With Sitecore AI’s agentic capabilities, monitoring agent runs, approvals, and failures protects compliance and prevents unintended bulk changes.

Governance Signals in Sitecore AI

Sitecore AI introduces Agentic Studio: a governed workspace to design, run, and oversee automation. Work is organized around four building blocks: Agents, Flows, Spaces, and Signals. Practically, that means you can automate complex operations while maintaining human review and auditability.

  • Agents: Handle focused tasks (e.g., content migration, metadata updates).
  • Flows: Orchestrate agents into multi‑step workflows with visibility across stages.
  • Spaces: Provide shared context for teams to collaborate on active runs.

Signals surface trends and triggers that can start or adjust flows. Together, these give marketers and developers a safe frame to scale automation without losing control.

How Agent Flows Are Monitored

Monitoring agent flows blends product‑level visibility with enterprise analytics:

Run visibility in Agentic Studio:

  • Each flow run exposes status, participants (human and agent), timestamps, and outcomes. Because flows are orchestrated in a governed workspace, you get “full visibility” into progression from brief to publish/optimization, including approvals where human review is required.

Governance signals and audit trails:

  • Signals can trigger flows and also act as governance inputs (for example, trend alerts requiring approval). Capture audit trails of who initiated a run, which agents executed steps, and what content or configurations changed.

Alerting and dashboards:

  • Mirror key flow events into your monitoring plane: start, paused awaiting approval, failed step, completed. Route these into Azure Monitor or your SIEM so operations sees agentic activity alongside deployments and content events.

Integration approach:

  • Where Common Audit Logs (CAL) are available (Personalize/CDP/Connect), stream events via webhooks. For Sitecore AI and Agentic activity not yet covered by CAL, use scheduled CLI log exports and APIs the platform exposes to assemble a unified view. Normalize event schemas (runId, agentId, flowId, environment, severity) to enable cross‑product correlation.

The outcome: agent automation becomes observable. Teams can answer “what changed, when, by whom, and why” and tie those answers to performance and compliance dashboards.

Final Thoughts

Observability in Sitecore AI isn’t about servers; it’s about experience health and trusted automation. When you combine SaaS‑native logs, front‑end telemetry, Edge events, and agentic governance signals, you gain a single narrative across deployments, content, and automation, the narrative you need to keep teams fast, safe, and accountable.

A Tool for CDOs to Keep Their Cloud Secure: AWS GuardDuty Is the Saw and Perficient Is the Craftsman (Tue, 18 Nov 2025)
https://blogs.perficient.com/2025/11/18/a-tool-for-cdos-to-keep-their-cloud-secure-aws-guardduty-is-the-saw-and-perficient-is-the-craftsman/

In the rapidly expanding realm of cloud computing, Amazon Web Services (AWS) provides the infrastructure for countless businesses to operate and innovate. But with an ever-increasing amount of data, applications, and workloads in the cloud, protecting those assets poses significant security challenges. As a firm’s data, applications, and workloads migrate to the cloud, protecting them from both sophisticated threats and brute-force digital attacks is of paramount importance. This is where Amazon GuardDuty enters as a powerful, vigilant sentinel.

What is Amazon GuardDuty?

At its core, Amazon GuardDuty is a continuous security monitoring service designed to protect your AWS accounts and workloads. The software serves as a 24/7 security guard for your entire AWS environment, not just individual applications, and is constantly scanning for malicious activity and unauthorized behavior.

The software works by analyzing a wide variety of data sources within your firm’s AWS account—including AWS CloudTrail event logs, VPC flow logs, and DNS query logs—using machine learning, threat intelligence feeds, and anomaly detection techniques.

If an external party attempts a brute-force login, a compromised instance communicates with a known malicious IP address, or an unusual API call is made, GuardDuty is there to spot it. When a threat is found, it can be configured to trigger automated actions through services like Amazon CloudWatch Events (now Amazon EventBridge) and AWS Lambda, as well as alert human administrators to take action.
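
For instance, an EventBridge rule can match only higher-severity findings and route them to a Lambda function or a notification topic. A sketch of the event pattern (the severity threshold is a choice, not a requirement):

{
  "source": ["aws.guardduty"],
  "detail-type": ["GuardDuty Finding"],
  "detail": {
    "severity": [{ "numeric": [">=", 7] }]
  }
}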

When a threat is detected, GuardDuty generates a finding with a severity level (high, medium, or low) and a score. The severity and score both help minimize time spent on more routine exceptions while highlighting significant events to your data security team.

Why is GuardDuty So Important?

In today’s digital landscape, relying solely on traditional, static security measures is not sufficient. Cybercriminals are constantly evolving their tactics, which is why GuardDuty is an essential component of your AWS security strategy:

  1. Proactive, Intelligent Threat Detection

GuardDuty moves beyond simple rule-based systems. Its use of machine learning allows it to detect anomalies that human security administrators might miss, identifying zero-day threats and subtle changes in behavior that indicate a compromise. It continuously learns and adapts to new threats without requiring manual updates from human security administrators.

  2. Near Real-Time Monitoring and Alerting

Speed is critical in incident response. GuardDuty provides findings in near real-time, delivering detailed security alerts directly to the AWS Management Console, Amazon EventBridge, and Amazon Security Hub. This immediate notification allows your firm’s security teams to investigate and remediate potential issues quickly, minimizing potential damage and alerting your firm’s management.

  3. Broad Protection Across AWS Services

GuardDuty doesn’t just watch over your firm’s Elastic Compute Cloud (“EC2”) instances. GuardDuty also protects a wide array of AWS services, including:

  • Simple Storage Service (“S3”) Buckets: Detecting potential data exfiltration or policy changes that expose sensitive data.
  • EKS/Kubernetes: Monitoring for threats to your container workloads.  No more running malware or mining bitcoin in your firm’s containers.
  • Databases (Aurora; RDS – MySQL, PostgreSQL, MariaDB, Oracle, and Microsoft SQL Server; and Redshift): Identifying potential compromise or unauthorized access to data.

Conclusion:

In the cloud, security is a shared responsibility. While AWS manages the security of the cloud infrastructure itself, you are responsible for security in the cloud—protecting your data, accounts, and workloads. Amazon GuardDuty is an indispensable tool in fulfilling that responsibility. It provides an automated, intelligent, and scalable layer of defense that empowers you to stay ahead of malicious actors.

To get started with Amazon GuardDuty, consider contacting Perficient to help enable and configure the service and train your staff. Perficient is an AWS partner and has achieved Premier Tier Services Partner status, the highest tier in the Amazon Web Services (AWS) Partner Network. This elevated status reflects Perficient’s expertise, long-term investment, and commitment to delivering customer solutions on AWS.

Besides the firm’s Partner Status, Perficient has demonstrated significant expertise in areas like cloud migration, modernization, and AI-driven solutions, with a large team of AWS-certified professionals.

In addition to these competencies, Perficient has been designated for specific service deliveries, such as AWS Glue Service Delivery, and also has available Amazon-approved software in the AWS Marketplace.

Our financial services experts continuously monitor the financial services landscape and deliver pragmatic, scalable solutions that meet the required mandate and more. Reach out to Perficient’s Director and Head of Payments Practice Amanda Estiverne-Colas to discover why Perficient has been trusted by 18 of the top 20 banks, 16 of the 20 largest wealth and asset management firms, and 25+ leading payment + card processing companies.

 

From XM Cloud to SitecoreAI: A Developer’s Guide to the Platform Evolution (Mon, 10 Nov 2025)
https://blogs.perficient.com/2025/11/10/from-xm-cloud-to-sitecoreai-a-developers-guide-to-the-platform-evolution/

What developers need to know about the architectural changes that launched on November 10th

Last week’s Sitecore Symposium 2025 was one of those rare industry events that reminded me why this community is so special. I got to reconnect with former colleagues I hadn’t seen in years, finally meet current team members face-to-face who had only been voices on video calls, and form genuine new relationships with peers across the ecosystem. Beyond the professional connections, we spent time with current customers and had fascinating conversations with potential new ones about their challenges and aspirations. And let’s be honest—the epic Universal Studios party that capped off the event didn’t hurt either.

Now that we’re settling back into routine work, it’s time to unpack everything that was announced. The best part? As of today, November 10th, it’s all live. When you log into the platform, you can see and experience everything that was demonstrated on stage.

After a decade of Sitecore development, I’ve learned to separate marketing announcements from actual technical changes. This one’s different: SitecoreAI represents a genuine architectural shift toward AI-first design that changes how we approach development.

Here’s what developers need to know about the platform evolution that launched today.

Architecture Changes That Matter

Cloud-Native Foundation with New Deployment Model

SitecoreAI maintains XM Cloud’s Azure-hosted foundation while introducing four connected environments:

  • Agentic Studio – where marketers and AI collaborate to plan, create, and personalize experiences
  • App Studio – dedicated space for custom application development
  • Sitecore Connect – for integrations
  • Marketplace – for sharing and discovering solutions

If you’re already on XM Cloud, your existing implementations transition without breaking changes. That’s genuinely good news—no major refactoring required. The platform adds enhanced governance with enterprise deployment controls without sacrificing the SaaS agility we’ve come to expect. There’s also a dedicated App Studio environment specifically for custom application development.

The entire platform is API-first, with RESTful APIs for all platform functions, including AI agent interaction. The key difference from traditional on-premises complexity is that you get cloud-native scaling with enterprise-grade governance built right in.

Unified Architecture vs. Integration Complexity

The biggest architectural change is having unified content, customer data, personalization, and AI in a single platform. This fundamentally changes how we think about integrations.

Instead of connecting separate CMS, CDP, personalization, and AI tools, everything operates within one data model. Your external system integrations change from multi-platform orchestration to single API framework connections. There are trade-offs here—you gain architectural simplicity but need to evaluate vendor lock-in versus best-of-breed flexibility for your specific requirements.

The Development Paradigm Shift: AI Agents

The most significant change for developers is the introduction of autonomous AI agents as a platform primitive. They’ve gone ahead and built this functionality right into the platform, so we’re not trying to bolt it on as an addon. This feels like it’s going to be big.

What AI Agents Mean for Developers

AI agents operate within the platform to handle marketing workflows autonomously—content generation, A/B testing, personalization optimization. They’re not replacing custom code; they’re handling repeatable marketing tasks.

As developers, our responsibilities shift to designing the underlying data models that agents consume, creating integration patterns for agent-external system interactions, building governance frameworks that define agent operational boundaries, and handling complex customizations that exceed agent capabilities.

Marketers can configure basic agents without developer involvement, but custom data models, security frameworks, and complex integrations still require development expertise. So our role evolves rather than disappears.

New Skillset Requirements

Working with AI agents requires understanding several new concepts. You need to know how to design secure, compliant boundaries for agent operations and governed AI frameworks. You’ll also need to structure data so agents can operate effectively, understand how agents learn and improve from configuration and usage, and know when to use agents versus traditional custom development.

This combines traditional technical architecture with AI workflow design: a new skill set that bridges development and intelligent automation.

Migration Path from XM Cloud

What “Seamless Transition” Actually Means

For XM Cloud customers, the upgrade path is genuinely straightforward. There are no breaking changes.  Existing customizations, integrations, and content work without modification. AI capabilities layer on top of current functionality, and the transition can happen immediately.  When you log in today it’ll all be there waiting for you, no actions needed.

Legacy Platform Migrations

For developers migrating from older Sitecore implementations or other platforms, SitecoreAI provides SitecoreAI Pathway tooling that claims 70% faster migration timelines. The tooling includes automated content conversion with intelligent mapping of existing content structures, schema translation with automated data model conversion and manual review points, and workflow recreation tools to either replicate existing processes or redesign them with AI agent capabilities.

Migration Planning Approach

Based on what I’ve seen, successful migrations follow a clear pattern. Start with an assessment phase to catalog existing customizations, integrations, and workflows. Then make strategy decisions about whether to replicate each component exactly or reimagine it with AI agents. Use a phased implementation that starts with core functionality and gradually add AI-enhanced workflows. Don’t forget team training to educate developers on agent architecture and governance patterns.

The key architectural question becomes: which processes should remain as traditional custom code versus be reimagined as AI agent workflows?

Integration Strategy Considerations

API Framework and Connectivity

SitecoreAI’s unified architecture changes integration patterns significantly. You get native ecosystem integration with direct connectivity to Sitecore XP, Search, CDP, and Personalize without separate integration layers. Third-party integration happens through a single API framework with webhook support for real-time external system connectivity. Authentication is unified across all platform functions.

Data Flow Changes

The unified customer data model affects how you architect integrations. You now have a single customer profile across content, behavior, and AI operations. Real-time data synchronization happens without ETL complexity, and there’s centralized data governance for AI agent operations.

One important note: existing integrations that rely on separate CDP or personalization APIs may need updates to leverage the unified data model.

What This Means for Your Development Team

Immediate Action Items

If you’re currently on XM Cloud, start by documenting your existing custom components for compatibility assessment. Review your integrations to evaluate which external system connections could benefit from unified architecture. Look for repetitive marketing workflows that could be handled by agents.

If you’re planning a migration, use this as an opportunity to modernize rather than just lift-and-shift. Evaluate whether SitecoreAI Pathway’s claimed time savings match your migration complexity. Factor in the learning curve for AI agent architecture when planning team skills development.

Skills to Develop

You’ll want to focus on AI workflow design and understand how to structure processes for agent automation. Learn about building secure, compliant boundaries for autonomous operations. Get comfortable designing for a single customer data model versus traditional integration patterns. Become proficient working across the new Studio environments.

Developer’s Bottom Line

For XM Cloud developers, this is evolutionary, not revolutionary. Your existing skills remain relevant while the platform adds AI agent capabilities that reduce routine customization work.

For legacy Sitecore developers, the migration path provides an opportunity to modernize architecture while gaining AI automation capabilities but requires learning cloud-native development patterns.

The strategic shift is clear: development work shifts from building everything custom to designing frameworks where AI agents can operate effectively. You’re architecting for intelligent automation, not just content management.

The platform launched today. For developers, the key question isn’t whether AI will change digital platforms; it’s whether you want to learn agent-based architecture now or catch up later. The future is here, and I’m for it.


Coming Up: I’ll be writing follow-up posts on AI agent development patterns, integration architecture deep dives, and migration playbooks.

Use Cases on AWS AI Services (Sun, 09 Nov 2025)
https://blogs.perficient.com/2025/11/09/amazon-web-services-ai/

In today’s AI-driven world, there are plenty of AI-related tools that organizations can use to tackle diverse business challenges. In line with this, Amazon has its own set of AWS services for AI and ML to address real-world needs.

This blog provides details on AWS services, but it also shows more broadly how AI and ML capabilities can be used to address various business challenges. To illustrate how these services can be leveraged, I have taken a few simple, straightforward use cases and mapped AWS solutions to them.

 

AI Use Cases: Using AWS Services

1. Employee Onboarding Process

Any employee onboarding process has its own challenges. It can be improved through better information discovery, shorter onboarding timelines, more flexibility for the new hire, the option to revisit learning material multiple times, and a more secure, personalized induction experience.

Using natural language queries, the AWS AI service Amazon Kendra enables new hires to easily find HR manuals, IT instructions, leave policies, and company guidelines without needing to know exact file names or bookmark multiple URLs.

Amazon Kendra uses semantic search, which understands the user’s intent and contextual meaning. Semantic search relies on vector embeddings, vector search, pattern matching, and natural language processing.

Real-time data retrieval through Retrieval-augmented Generation (RAG) in Amazon Kendra empowers employees to access up-to-date content securely and efficiently.

Here are a few example prompts a new hire can use to retrieve information:

  • How can I access my email on my laptop and on my phone?
  • How do I contact IT support?
  • How can I apply for leave, and who do I reach out to for approvals?
  • How do I submit my timesheet?
  • Where can I find the company training portal?
  • ...and so on.
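
Behind a chat or portal front end, these questions can be passed straight to the Kendra Query API. A rough CLI sketch (the index ID is a placeholder):

aws kendra query \
  --index-id "<kendra-index-id>" \
  --query-text "How do I submit my timesheet?"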

Data Security

To protect organizational data and ensure compliance with enterprise security standards, Amazon Kendra supports robust data security measures, including encryption in transit and at rest, and seamless integration with AWS Identity and Access Management (IAM).

Role-based access ensures that sensitive information is only visible to authorized personnel.

Thus, in the onboarding process, the HR team can provide the personal touch, while the AI agent ensures employees have easy, anytime access to the right information throughout their onboarding journey.


2. Healthcare: Unlocking Insights from Unstructured Clinical Data

Healthcare providers always need to extract critical patient information and support timely decision-making. They face the challenge of rapidly analyzing vast amounts of unstructured medical records, such as physician notes, discharge summaries, and clinical reports.

From a data perspective, two key capabilities are required: entity recognition and attribute detection. Medical entities include symptoms, medications, diagnoses, and treatment plans; attribute detection identifies the dosage, frequency, and severity associated with those entities.

Amazon provides Amazon Comprehend Medical for this, which uses NLP and ML models to extract such information from the unstructured data held by healthcare organizations.
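
A rough sketch of what that extraction looks like from the CLI (the clinical text is an invented example):

aws comprehendmedical detect-entities-v2 \
  --text "Patient was prescribed 20 mg of lisinopril, to be taken once daily."

The response lists the detected entities (here, the medication) along with attributes such as dosage and frequency, each with a confidence score.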

One of the crucial aspects in healthcare is handling security and compliance for patients’ health data. AWS offers Amazon Macie, a security service that employs machine learning and pattern matching to discover, classify, and protect Protected Health Information (PHI) within Amazon S3 buckets. Such a service helps organizations maintain HIPAA compliance through automated data governance.

 

3. Enterprise data insights

Any large enterprise has data spread across various tools like SharePoint, Salesforce, Leave management portals or some accounting applications.

From these data sets, executives can extract great insights, evaluate what-if scenarios, check on some key performance indicators, and utilize all this for decision making.

We can use the AWS AI service Amazon Q Business for this very purpose, using its plugins, database connectors, and Retrieval-Augmented Generation for up-to-date information.

The user can query the system in natural language, and Amazon Q performs semantic search to return contextually appropriate information. It also uses knowledge grounding, which helps it provide accurate answers rather than relying solely on its training data.

To ensure that AI-generated responses adhere strictly to approved enterprise protocols and provide accurate, relevant information, we can configure built-in guardrails within Amazon Q, such as global controls and topic blocking.

 

4. Retail company use cases

a) Reading receipts and invoices

The company wants to automate its financial auditing process. To achieve this, we can use Amazon Textract to read receipts and invoices, as it uses machine learning algorithms to accurately identify and extract key information like product names, prices, and reviews.

b) Analyse customer purchasing patterns

The company intends to analyse customer purchasing patterns to predict future sales trends from their large datasets of historical sales data. For these analyses the company wants to build, train, and deploy machine learning models quickly and efficiently.

Amazon SageMaker is the ideal service for such a development.

c) Customer support Bot

The firm receives thousands of customer calls daily. To streamline the process, the firm is looking to create a conversational AI bot that can handle both text input and voice commands.

We can use Amazon Bedrock to create a custom AI application from a dataset of ready to use Foundation models. These models can process large volumes of customer data, generate personalized responses and integrate with other AWS services like Amazon SageMaker for additional processing and analytics.

We can use Amazon Lex to create the bot, and Amazon Polly for text to speech purposes.

d) Image analyses

The company might want to identify and categorize their products based on the images uploaded. To implement this, we can use Amazon S3 and Amazon Rekognition to analyze images as soon as the new product image is uploaded into the storage service.

 

AWS Services for Compliance & Regulations

(Image: AWS services for compliance.)

To manage complex customer requirements and handle large volumes of sensitive data, it becomes essential to adhere to various regulations.

Key AWS services supporting these compliance and governance needs include:

  1. AWS Config
    Continuously monitors and records resource configurations to help assess compliance.
  2. AWS Artifact
    Centralized repository for on-demand access to AWS compliance reports and agreements.
  3. AWS CloudTrail
    Logs and tracks all user activity and API calls within your AWS environment for audit purposes.
  4. Amazon Inspector
    Automated security assessment service that identifies vulnerabilities and deviations from best practices.
  5. AWS Audit Manager
    Simplifies audit preparation by automating evidence collection and compliance reporting.
  6. AWS Trusted Advisor
    Provides real-time recommendations to optimize security, performance, and cost efficiency.

 

Security and Privacy Risks: Vulnerabilities in LLMs

(Image: common vulnerabilities in LLMs.)

When working with LLMs, there are various ways prompts and models can be attacked, and there are also various safeguards against those attacks. With these attacks in mind, the table below lists some vulnerabilities that are useful for understanding the risks around your LLMs.

S.No  Vulnerability                 Description
1     Prompt injection              User input intended to manipulate the LLM.
2     Insecure output handling      Model output consumed without validation.
3     Training data poisoning       Malicious data introduced into the training set.
4     Model denial of service       Disrupting availability by exploiting architectural weaknesses.
5     Supply chain vulnerabilities  Weaknesses in the software, hardware, or services used to build or deploy the model.
6     Sensitive data leakage        Leakage of sensitive data.
7     Insecure plugins              Flaws in model components or plugins.
8     Excessive autonomy            Giving the model too much autonomy in decision making.
9     Over-reliance                 Relying too heavily on the model’s capabilities.
10    Model theft                   Unauthorized copying and re-use of the model.

 

Can you relate the above use cases to any of the challenges you have at hand? Have you been able to use any of the AWS services or other AI platforms to deal with such challenges?

References:

https://aws.amazon.com/ai/services/
https://www.udemy.com/share/10bvuD/

Perficient Honored as Organization of the Year for Cloud Computing (Tue, 28 Oct 2025)
https://blogs.perficient.com/2025/10/28/perficient-honored-as-stratus-organization-of-the-year-for-cloud-computing/

Perficient has been named Cloud Computing Organization of the Year by the 2025 Stratus Awards, presented by the Business Intelligence Group. This prestigious recognition celebrates our leadership in cloud innovation and the incredible work of our entire Cloud team.

Now in its 12th year, the Stratus Awards honor the companies, products, and individuals that are reshaping the digital frontier. This year’s winners are leading the way in cloud innovation across AI, cybersecurity, sustainability, scalability, and service delivery — and we’re proud to be among them.

“Cloud computing is the foundation of today’s most disruptive technologies,” said Russ Fordyce, Chief Recognition Officer of the Business Intelligence Group. “The 2025 Stratus Award winners exemplify how cloud innovation can drive competitive advantage, customer success and global impact.”

This award is a direct reflection of the passion, expertise, and dedication of our Cloud team — a group of talented professionals who consistently deliver transformative solutions for our clients. From strategy and migration to integration and acceleration, their work is driving real business outcomes and helping organizations thrive in an AI-forward world.

We’re honored to receive this recognition and remain committed to pushing the boundaries of what’s possible in the cloud with AI.

Read more about our Cloud Practice.

]]>
https://blogs.perficient.com/2025/10/28/perficient-honored-as-stratus-organization-of-the-year-for-cloud-computing/feed/ 1 388091
Datadog Synthetic Monitoring Integration with Azure DevOps Pipeline for Sitecore https://blogs.perficient.com/2025/10/23/datadog-synthetic-monitoring-integration-with-azure-devops-pipeline-for-sitecore/ https://blogs.perficient.com/2025/10/23/datadog-synthetic-monitoring-integration-with-azure-devops-pipeline-for-sitecore/#respond Thu, 23 Oct 2025 15:35:10 +0000 https://blogs.perficient.com/?p=387828

Datadog Synthetic Monitoring provides automated, simulated user journeys to proactively confirm the health and performance of websites and APIs, helping detect issues before users experience them. Integrating this into our Azure DevOps pipeline ensures that only builds where core site functionality is verified get promoted, reducing the risk of regressions in production. This approach is especially valuable in Sitecore projects, where critical web journeys and API endpoints are essential to user experience.

Why Use This Approach?

  • Immediate feedback: Failing releases are blocked before merging, saving post-release firefighting.
  • Coverage: Synthetic tests simulate real browser actions and API calls over real user flows.
  • Reliability: Automated testing delivers consistent, repeatable validation without manual steps.
  • Visibility: Results are unified within both Datadog and Azure DevOps for full traceability.
  • Scalability: As Sitecore projects grow, synthetic tests can be expanded to cover new endpoints and user scenarios without significant pipeline changes.
  • Environment parity: Tests can be run against staging, UAT, or pre-production environments before the live rollouts for safer releases.

Prerequisites

  • Active Datadog account with Synthetic Monitoring enabled.
  • Datadog API and Application keys created with the appropriate access scope.
  • Azure DevOps project with a working YAML-based CI/CD pipeline.
  • Secure variable storage in Azure DevOps (e.g., Variable Groups, Secret Variables) for credentials.
  • Stable and accessible endpoint URLs for Sitecore environment(s) under test.

High-Level Integration Process

1. Datadog Synthetic Test Creation

  • Create Browser and/or HTTP Synthetic Tests in Datadog tailored for key Sitecore application flows, such as:
    • Homepage load and rendering
    • Login flow and user dashboard navigation
    • Core API calls (search, content retrieval)
    • Critical commerce or form submissions
  • Use relevant tags (e.g., premerge) for search/query filtering by the CI pipeline.
  • Configure assertions to confirm critical elements:
    • Content correctness
    • HTTP status codes
    • Redirect targets
    • Response time SLAs
  • Validate tests in Datadog’s UI with multiple runs before pipeline integration.


2. Azure DevOps Pipeline Configuration

The Azure DevOps YAML pipeline is set up to invoke Datadog CI, run all tests matching our tag criteria, and fail the pipeline if any test fails.

Key Pipeline Steps

  • Install Datadog CI binary: Downloads and installs the CLI in the build agent.
  • Run Synthetic Tests: Uses the environment variables and search tags to pick synthetic tests (e.g., all tests matching type:browser and tag:premerge) and runs them directly; a minimal API-based sketch follows this list.
  • JUnit Reporting & Artifacts: The CLI output is saved, and a JUnit-formatted result file is generated for Azure DevOps’ Tests UI. All test outputs are attached as build artifacts.
  • Conditional Fast-forward Merge: The pipeline proceeds to a gated merge to release/production only if all synthetics pass.
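
For reference, the trigger step can also be reproduced outside the CLI by calling Datadog's Synthetics CI trigger endpoint directly. The Python sketch below is illustrative only, not the pipeline's actual implementation: the test public ID is a placeholder, and in the real pipeline datadog-ci selects tests by tag and handles polling and JUnit output.

import os
import requests

DD_SITE = os.environ.get("DD_SITE", "datadoghq.com")
TRIGGER_URL = f"https://api.{DD_SITE}/api/v1/synthetics/tests/trigger/ci"

def trigger_synthetics(public_ids):
    """Trigger Datadog Synthetic tests by public ID and return the batch response."""
    response = requests.post(
        TRIGGER_URL,
        headers={
            "DD-API-KEY": os.environ["DATADOG_API_KEY"],
            "DD-APPLICATION-KEY": os.environ["DATADOG_APP_KEY"],
            "Content-Type": "application/json",
        },
        json={"tests": [{"public_id": pid} for pid in public_ids]},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# "abc-def-ghi" is a placeholder public ID used only for illustration.
print(trigger_synthetics(["abc-def-ghi"]))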

How Results and Flow Work

When All Tests Pass

  • The pipeline completes the Premerge_Datadog_Synthetics stage successfully.
  • Test summaries (JUnit) and CLI outputs are attached to the pipeline run.
  • Approval-gated merge to the Release branch is unblocked; approvers can verify test results before promotion.

Build artifacts include full logs for further review.


When Any Test Fails

  • If any synthetic (browser/API) test fails, the CLI exits with a non-zero exit code.
  • The JUnit summary will contain failure info and a link to the log details.
  • The pipeline stage fails (Premerge_Datadog_Synthetics), halting the fast-forward merge.
  • Approvers can review the failure in test results and attached artifacts within Azure DevOps.

Only successful resolution and green reruns allow code promotion.


Best Practices for Datadog Synthetic Monitoring

  • Run tests in parallel to reduce wait times.
  • Use separate synthetic tests per microservice or major Sitecore area to isolate failures.
  • Monitor test trends in Datadog to detect gradual performance regression over time.
  • Limit sensitive data in synthetic flows by avoiding the storage of actual credentials.
  • Schedule periodic synthetic runs outside CI/CD to catch environment fluctuations unrelated to code changes.

Security Considerations

  • Store Datadog keys as secret variables in Azure DevOps.
  • Restrict permission for synthetic test management to trusted CI/CD admins.
  • Avoid embedding credentials or sensitive payloads in test scripts.

Conclusion

By integrating Datadog Synthetic Monitoring directly into an Azure DevOps CI/CD pipeline, Sitecore teams gain a safety net that blocks faulty builds before they hit production, while keeping a detailed audit trail. Combined with careful test design, secure key management, and continuous expansion of coverage, this approach becomes a cornerstone of proactive web application quality assurance.

 

]]>
https://blogs.perficient.com/2025/10/23/datadog-synthetic-monitoring-integration-with-azure-devops-pipeline-for-sitecore/feed/ 0 387828
Terraform Code Generator Using Ollama and CodeGemma https://blogs.perficient.com/2025/09/25/terraform-code-generator-using-ollama-and-codegemma/ https://blogs.perficient.com/2025/09/25/terraform-code-generator-using-ollama-and-codegemma/#comments Thu, 25 Sep 2025 10:34:37 +0000 https://blogs.perficient.com/?p=387185

In modern cloud infrastructure development, writing Terraform code manually can be time-consuming and error-prone—especially for teams that frequently deploy modular and scalable environments. There’s a growing need for tools that:

  • Allow natural language input to describe infrastructure requirements.
  • Automatically generate clean, modular Terraform code.
  • Integrate with cloud authentication mechanisms.
  • Save and organize code into execution-ready files.

This tool bridges the gap between human-readable infrastructure descriptions and machine-executable Terraform scripts, making infrastructure-as-code more accessible and efficient. To build it, we utilize CodeGemma, a lightweight AI model optimized for coding tasks, which runs locally via Ollama.


In this blog, we explore how to build a Terraform code generator web app using:

  • Flask for the web interface
  • Ollama’s CodeGemma model for AI-powered code generation
  • Azure CLI authentication using service principal credentials
  • Modular Terraform file creation based on user queries

This tool empowers developers to describe infrastructure needs in natural language and receive clean, modular Terraform code ready for deployment.

Technologies Used

CodeGemma

CodeGemma is a family of lightweight, open-source models optimized for coding tasks. It supports code generation from natural language.

Running CodeGemma locally via Ollama means:

  • No cloud dependency: You don’t need to send data to external APIs.
  • Faster response times: Ideal for iterative development.
  • Privacy and control: Your infrastructure queries and generated code stay on your machine.
  • Offline capability: Ideal for use in restricted or secure environments.
  • Zero cost: Since the model runs locally, there’s no usage fee or subscription required—unlike cloud-based AI services.

Flask

We chose Flask as the web framework for this project because of its:

  • Simplicity and flexibility: Flask is a lightweight and easy-to-set-up framework, making it ideal for quick prototyping.

Initial Setup

  • Install Python.
winget install Python.Python.3
  • Install Ollama, then pull and run the CodeGemma model.
ollama pull codegemma:7b
ollama run codegemma:7b
  • Install the Ollama Python library so you can call CodeGemma from Python (a quick smoke test follows below).
pip install ollama
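
Before building the Flask app, you can sanity-check the local model from Python with a couple of lines (this assumes codegemma:7b has already been pulled):

from ollama import generate

# Quick smoke test against the locally running CodeGemma model.
response = generate(model="codegemma:7b", prompt="Write a Terraform azurerm provider block.")
print(response.get("response", ""))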

Folder Structure


 

Code

from flask import Flask, jsonify, request, render_template_string
from ollama import generate
import subprocess
import re
import os

app = Flask(__name__)
# Azure credentials
CLIENT_ID = "Enter your credentials here."
CLIENT_SECRET = "Enter your credentials here."
TENANT_ID = "Enter your credentials here."

auth_status = {"status": "not_authenticated", "details": ""}
input_fields_html = ""
def authenticate_with_azure():
    try:
        result = subprocess.run(
            ["cmd.exe", "/c", "C:\\Program Files\\Microsoft SDKs\\Azure\\CLI2\\wbin\\az.cmd",
             "login", "--service-principal", "-u", CLIENT_ID, "-p", CLIENT_SECRET, "--tenant", TENANT_ID],
            capture_output=True, text=True, check=True
        )
        auth_status["status"] = "success"
        auth_status["details"] = result.stdout
    except subprocess.CalledProcessError as e:
        auth_status["status"] = "failed"
        auth_status["details"] = e.stderr
    except Exception as ex:
        auth_status["status"] = "terminated"
        auth_status["details"] = str(ex)

@app.route('/', methods=['GET', 'POST'])
def home():
    terraform_code = ""
    user_query = ""
    input_fields_html = ""

    if request.method == 'POST':
        user_query = request.form.get('query', '')

        base_prompt = (
            "Generate modular Terraform code using best practices. "
            "Create separate files for main.tf, vm.tf, vars.tf, terraform.tfvars, subnet.tf, kubernetes_cluster etc. "
            "Ensure the code is clean and execution-ready. "
            "Use markdown headers like ## Main.tf: followed by code blocks."
        )

        full_prompt = base_prompt + "\n" + user_query
        # Generate Terraform code with the locally running CodeGemma model.
        try:
            response_cleaned = generate(model='codegemma:7b', prompt=full_prompt)
            terraform_code = response_cleaned.get('response', '').strip()
        except Exception as e:
            terraform_code = f"# Error generating code: {str(e)}"

        # Prepend the azurerm provider block so the generated code is deployable as-is.
        provider_block = f"""
          provider "azurerm" {{
          features {{}}
          subscription_id = "Enter your credentials here."
          client_id       = "{CLIENT_ID}"
          client_secret   = "{CLIENT_SECRET}"
          tenant_id       = "{TENANT_ID}"
        }}"""
        terraform_code = provider_block + "\n\n" + terraform_code

        # Keep a copy of the full response in a single main.tf as a backup.
        with open('main.tf', 'w', encoding='utf-8') as f:
            f.write(terraform_code)


        # Create output directory
        output_dir = r"C:\Users\riya.achkarpohre\Desktop\AI\test7\terraform_output"
        os.makedirs(output_dir, exist_ok=True)

        # Define output paths
        paths = {
            "main.tf": os.path.join(output_dir, "Main.tf"),
            "vm.tf": os.path.join(output_dir, "VM.tf"),
            "subnet.tf": os.path.join(output_dir, "Subnet.tf"),
            "vpc.tf": os.path.join(output_dir, "VPC.tf"),
            "vars.tf": os.path.join(output_dir, "Vars.tf"),
            "terraform.tfvars": os.path.join(output_dir, "Terraform.tfvars"),
            "kubernetes_cluster.tf": os.path.join(output_dir, "kubernetes_cluster.tf")
        }

        # Split the response on markdown headers such as "## Main.tf:" followed by a code fence.
        # Capturing the full filename (including its extension) also covers terraform.tfvars.
        sections = re.split(r'##\s*([\w.]+?)\s*:\s*\n+```(?:terraform)?\n', terraform_code)

        # sections = ['', 'Main.tf', '<code>', 'VM.tf', '<code>', ...]
        for i in range(1, len(sections), 2):
            filename = sections[i].strip().lower()
            code_block = sections[i + 1].strip()

            # Remove closing backticks if present
            code_block = re.sub(r'```$', '', code_block)

            # Save to file if path is defined
            if filename in paths:
                with open(paths[filename], 'w', encoding='utf-8') as f:
                    f.write(code_block)
                    print(f"\n--- Written: {filename} ---")
                    print(code_block)
            else:
                print(f"\n--- Skipped unknown file: {filename} ---")

        return render_template_string(f"""
        <html>
        <head><title>Terraform Generator</title></head>
        <body>
            <form method="post">
                <center>
                    <label>Enter your query:</label><br>
                    <textarea name="query" rows="6" cols="80" placeholder="Describe your infrastructure requirement here..."></textarea><br><br>
                    <input type="submit" value="Generate Terraform">
                </center>
            </form>
            <hr>
            <h2>Generated Terraform Code:</h2>
            <pre>{terraform_code}</pre>
            <h2>Enter values for the required variables:</h2>
            <h2>Authentication Status:</h2>
            <pre>Status: {auth_status['status']}\n{auth_status['details']}</pre>
        </body>
        </html>
        """)

    # Initial GET request
    return render_template_string('''
    <html>
    <head><title>Terraform Generator</title></head>
    <body>
        <form method="post">
            <center>
                <label>Enter your query:</label><br>
                <textarea name="query" rows="6" cols="80" placeholder="Describe your infrastructure requirement here..."></textarea><br><br>
                <input type="submit" value="Generate Terraform">
            </center>
        </form>
    </body>
    </html>
    ''')

authenticate_with_azure()
@app.route('/authenticate', methods=['POST'])
def authenticate():
    authenticate_with_azure()
    return jsonify(auth_status)

if __name__ == '__main__':
    app.run(debug=True)

Open Visual Studio, create a new file named file.py, and paste the code into it. Then, open the terminal and run the script by typing:

python file.py

Flask Development Server


Code Structure Explanation

  • Azure Authentication
    • The app uses the Azure CLI (az.cmd) via Python’s subprocess.run() to authenticate with Azure using a service principal. This ensures secure access to Azure resources before generating Terraform code.
  • User Query Handling
    • When a user submits a query through the web form, it is captured using:
user_query = request.form.get('query', '')
  • Prompt Construction
    • The query is appended to a base prompt that instructs CodeGemma to generate modular Terraform code using best practices. This prompt includes instructions to split the code into files, such as main.tf, vm.tf, subnet.tf, etc.
  • Code Generation via CodeGemma
    • The prompt is sent to the CodeGemma:7b model using:
response_cleaned = generate(model='codegemma:7b', prompt=full_prompt)
  • Saving the Full Response
    • The entire generated Terraform code is first saved to a main.tf file as a backup.
  • Output Directory Setup
    • A specific output directory is created using os.makedirs() to store the split .tf files:
output_dir = r"C:\Users\riya.achkarpohre\Desktop\AI\test7\terraform_output"
  • File Path Mapping
    • A dictionary maps expected filenames (such as main.tf and vm.tf) to their respective output paths. This ensures each section of the generated code is saved correctly.
  • Code Splitting Logic
    • The response is split using a regex-based approach, based on markdown headers like ## main.tf: followed by Terraform code blocks. This helps isolate each module.
  • Conditional File Writing
    • For each split section, the code checks if the filename exists in the predefined path dictionary:
      • If defined, the code block is written to the corresponding file.
      • If not defined, the section is skipped and logged as  “unknown file”.
  • Web Output Rendering
    • The generated code and authentication status are displayed on the webpage using render_template_string().

Terminal


The Power of AI in Infrastructure Automation

This project demonstrates how combining AI models, such as CodeGemma, with simple tools like Flask and Terraform can revolutionize the way we approach cloud infrastructure provisioning. By allowing developers to describe their infrastructure in natural language and instantly receive clean, modular Terraform code, we eliminate the need for repetitive manual scripting and reduce the chances of human error.

Running CodeGemma locally via Ollama ensures:

  • Full control over data
  • Zero cost for code generation
  • Fast and private execution
  • Seamless integration with existing workflows

The use of Azure CLI authentication adds a layer of real-world applicability, making the generated code deployable in enterprise environments.

Whether you’re a cloud engineer, DevOps practitioner, or technical consultant, this tool empowers you to move faster, prototype smarter, and deploy infrastructure with confidence.

As AI continues to evolve, tools like this will become essential in bridging the gap between human intent and machine execution, making infrastructure-as-code not only powerful but also intuitive.

]]>
https://blogs.perficient.com/2025/09/25/terraform-code-generator-using-ollama-and-codegemma/feed/ 3 387185