Sitecore Articles / Blogs / Perficient – Expert Digital Insights
https://blogs.perficient.com/tag/sitecore/

Monitoring and Logging in Sitecore AI (Mon, 24 Nov 2025 21:04:34 +0000)
https://blogs.perficient.com/2025/11/24/monitoring-and-logging-in-sitecore-ai/

Why Observability Matters More Than Ever

Moving from traditional Sitecore deployments to Sitecore AI means the infrastructure is abstracted away. That’s fantastic for agility, but it also changes how we troubleshoot. You can’t RDP onto a server and tail a log file anymore; your lifeline is observability: clear signals from logs, metrics, and governed automation that tell you what’s happening across the platform and the front‑end.

What’s Different in Sitecore AI?

Logs and diagnostics are centralized. You access them via the Sitecore AI portal and the Sitecore CLI, and they’re organized by environment and by role. Your front‑end application or rendering host – often a Next.js site deployed on Vercel, responsible for headless rendering and the user experience – has its own telemetry, separate from the CMS.

So, your monitoring picture spans three surfaces: Sitecore AI logs for CMS and deployment activity, rendering host telemetry for front‑end performance, and Experience Edge signals for content delivery. Together, they describe the health of the experience, not just the servers.

 

Understanding the Logging Surfaces

In Sitecore AI, logs are grouped into three primary areas that each play a distinct role in diagnosing issues:

Content Management (CM) logs

  • These are your first stop for diagnosing publishing failures, broken workflows, template errors, and serialization mismatches. When a publish fails, CM logs help you separate permissions or workflow problems from data or serialization issues.

Rendering Host logs

  • Think front‑end behavior and performance. If personalization falls back, pages render slowly, or API responses seem sluggish, the rendering host logs surface cache misses, API latency, and rendering errors that directly impact Core Web Vitals and UX.

Deployment logs

  • The “narrative” of your CI/CD run. When a build fails or a promotion doesn’t complete, deployment logs pinpoint CLI command failures, artifact mismatches, or environment configuration issues. They also provide stage-by-stage visibility (provisioning, build, deploy, post‑actions), which speeds triage and supports audits.

Access these logs quickly in the Deploy app’s environment view or programmatically via the Sitecore CLI for listing, viewing, and downloading logs as part of your pipeline artifacts.

Integration Patterns for Enterprise Monitoring

Centralizing is helpful; correlating is essential. The pragmatic pattern I recommend is:

Sitecore AI → Azure Monitor/Application Insights

  • Forward CMS and deployment logs so you can correlate spikes in errors with deployments, content bursts, or traffic changes. KQL lets you slice by environment, role, and severity for root cause analysis.

Rendering Host → APM (Datadog/New Relic)

  • Use front‑end analytics to track TTFB, cache hit ratio, route errors, and API dependency health. Pair this with Vercel’s own analytics for global edge performance.

Experience Edge → Webhook Monitoring

  • Register webhooks so you can track publish‑to‑Edge latency and trigger alerts or redeploys when content propagation slows or fails.
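As a rough illustration of the webhook-driven check described above, publish‑to‑Edge latency can be derived from two timestamps per event. The payload shape here is an assumption for the sketch, not the documented Experience Edge webhook schema:

```typescript
// Hypothetical webhook payload shape – field names are illustrative
// assumptions, not the actual Experience Edge schema.
interface EdgeWebhookEvent {
  entityId: string;
  publishedAt: string; // ISO timestamp when the publish was initiated
  receivedAt: string;  // ISO timestamp when the webhook arrived
}

// Publish-to-Edge latency in milliseconds.
function edgeLatencyMs(event: EdgeWebhookEvent): number {
  return Date.parse(event.receivedAt) - Date.parse(event.publishedAt);
}

// Flag slow propagation against an alerting threshold (default: 60 seconds).
function isPropagationSlow(event: EdgeWebhookEvent, thresholdMs = 60_000): boolean {
  return edgeLatencyMs(event) > thresholdMs;
}
```

A webhook receiver could call `isPropagationSlow` per event and route breaches to the same alerting plane as deployment failures.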

SIEM Integration (today’s reality)

  • For unified audit across Sitecore SaaS, stream supported Common Audit Logs (CAL) via webhooks (Personalize/CDP/Connect) and, for Sitecore AI, pull environment and deployment logs via CLI on a schedule until broader CAL coverage lands.

Metrics That Matter

In a SaaS world, traditional “server up” checks don’t describe user experience. Focus on metrics that map directly to reliability and business impact:

Deployment success & promotion health

  • Failed builds or promotions block content and features. Tracking rates and mean time to recovery reveals pipeline reliability.
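The two rates mentioned above are easy to compute from run records. A minimal sketch, assuming a simple record shape (not a Sitecore AI API):

```typescript
// Illustrative deployment-run record; the shape is an assumption.
interface DeployRun {
  startedAt: number; // epoch ms
  endedAt: number;   // epoch ms
  succeeded: boolean;
}

// Fraction of runs that succeeded.
function successRate(runs: DeployRun[]): number {
  if (runs.length === 0) return 1;
  return runs.filter((r) => r.succeeded).length / runs.length;
}

// Mean time to recovery: average gap between a failed run's completion
// and the completion of the next successful run.
function meanTimeToRecoveryMs(runs: DeployRun[]): number {
  const sorted = [...runs].sort((a, b) => a.startedAt - b.startedAt);
  const gaps: number[] = [];
  for (let i = 0; i < sorted.length; i++) {
    if (!sorted[i].succeeded) {
      const fix = sorted.slice(i + 1).find((r) => r.succeeded);
      if (fix) gaps.push(fix.endedAt - sorted[i].endedAt);
    }
  }
  return gaps.length ? gaps.reduce((a, b) => a + b, 0) / gaps.length : 0;
}
```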

Publish‑to‑Edge latency

  • Authors expect content to reach Experience Edge quickly. Latency here affects real‑time campaigns, previews, and editorial confidence.

Rendering host performance

  • P95/P99 TTFB, cache hit ratio, and error rates impact Core Web Vitals, SEO, and conversion. They also help you spot regressions after releases.
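For reference, P95/P99 and cache hit ratio reduce to small calculations over raw samples. A sketch using the nearest-rank percentile method (one of several common definitions):

```typescript
// Nearest-rank percentile over a set of TTFB samples (milliseconds).
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error('no samples');
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank: ceil(p/100 * N), converted to a 0-based index.
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Cache hit ratio from hit/miss counters.
function cacheHitRatio(hits: number, misses: number): number {
  const total = hits + misses;
  return total === 0 ? 0 : hits / total;
}
```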

Agent activity & governance

  • With Sitecore AI’s agentic capabilities, monitoring agent runs, approvals, and failures protects compliance and prevents unintended bulk changes.

Governance Signals in Sitecore AI

Sitecore AI introduces Agentic Studio: a governed workspace to design, run, and oversee automation. Work is organized around four building blocks: Agents, Flows, Spaces, and Signals. Practically, that means you can automate complex operations while maintaining human review and auditability.

  • Agents: Handle focused tasks (e.g., content migration, metadata updates).
  • Flows: Orchestrate agents into multi‑step workflows with visibility across stages.
  • Spaces: Provide shared context for teams to collaborate on active runs.

Signals surface trends and triggers that can start or adjust flows. Together, these give marketers and developers a safe frame to scale automation without losing control.

How Agent Flows Are Monitored

Monitoring agent flows blends product‑level visibility with enterprise analytics:

Run visibility in Agentic Studio:

  • Each flow run exposes status, participants (human and agent), timestamps, and outcomes. Because flows are orchestrated in a governed workspace, you get “full visibility” into progression from brief to publish/optimization, including approvals where human review is required.

Governance signals and audit trails:

  • Signals can trigger flows and also act as governance inputs (for example, trend alerts requiring approval). Capture audit trails of who initiated a run, which agents executed steps, and what content or configurations changed.

Alerting and dashboards:

  • Mirror key flow events into your monitoring plane: start, paused awaiting approval, failed step, completed. Route these into Azure Monitor or your SIEM so operations sees agentic activity alongside deployments and content events.

Integration approach:

  • Where Common Audit Logs (CAL) are available (Personalize/CDP/Connect), stream events via webhooks. For Sitecore AI and Agentic activity not yet covered by CAL, use scheduled CLI log exports and APIs the platform exposes to assemble a unified view. Normalize event schemas (runId, agentId, flowId, environment, severity) to enable cross‑product correlation.
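The normalization step above can be sketched as a small mapping function. The unified fields follow the runId/agentId/flowId convention suggested in the text; the raw source field names (`run_id`, `env`, `level`, etc.) are assumptions for illustration:

```typescript
type Severity = 'info' | 'warn' | 'error';

// Target schema for cross-product correlation, per the convention above.
interface UnifiedEvent {
  runId: string;
  agentId?: string;
  flowId?: string;
  environment: string;
  severity: Severity;
  timestamp: string; // ISO 8601
  message: string;
}

// Map a raw event (hypothetical field names) into the unified schema,
// defaulting unknown severities to 'info'.
function normalizeCalEvent(raw: Record<string, unknown>): UnifiedEvent {
  const level = String(raw['level'] ?? 'info');
  return {
    runId: String(raw['run_id'] ?? 'unknown'),
    agentId: raw['agent_id'] ? String(raw['agent_id']) : undefined,
    flowId: raw['flow_id'] ? String(raw['flow_id']) : undefined,
    environment: String(raw['env'] ?? 'unknown'),
    severity: (['info', 'warn', 'error'].includes(level) ? level : 'info') as Severity,
    timestamp: String(raw['time'] ?? new Date().toISOString()),
    message: String(raw['message'] ?? ''),
  };
}
```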

The outcome: agent automation becomes observable. Teams can answer “what changed, when, by whom, and why” and tie those answers to performance and compliance dashboards.

Final Thoughts

Observability in Sitecore AI isn’t about servers; it’s about experience health and trusted automation. When you combine SaaS‑native logs, front‑end telemetry, Edge events, and agentic governance signals, you gain a single narrative across deployments, content, and automation – the narrative you need to keep teams fast, safe, and accountable.

Migrating Redirects in Sitecore to Vercel Edge Config: A Practical Path (Mon, 24 Nov 2025 05:51:56 +0000)
https://blogs.perficient.com/2025/11/23/migrating-redirects-in-sitecore-to-vercel-edge-config-a-practical-path/

In my previous post, Simplifying Redirect Management in Sitecore XM Cloud with Next.js and Vercel Edge Config, I explored how Vercel Edge Config can completely transform how we manage redirects in Sitecore XM Cloud. Traditionally, redirects have lived inside Sitecore – often stored as content items or within custom redirect modules – which works well until scale, speed, and operational agility become priorities.

That’s where Vercel Edge Config steps in. By managing redirects at the edge, we can push this logic closer to users, reduce load on Sitecore instances, and make updates instantly available without redeployments. The result is faster performance, easier maintenance, and a cleaner separation of content from infrastructure logic.

In this short follow-up, I will walk you through a step-by-step migration path – from auditing your current redirects to validating, deploying, and maintaining them on Vercel Edge Config. Along the way, I will share practical tips, lessons learned, and common pitfalls to watch out for during the migration process.

Audit Existing Redirects

Before you begin the migration, take time to analyze and clean up your existing redirect setup. In many legacy websites that have been live for years, redirects often accumulate from multiple releases, content restructures, or rebranding efforts. Over time, they become scattered across modules or spreadsheets, and many of them may no longer be relevant.
This is your chance to comb through and make your redirect set current – remove obsolete mappings, consolidate duplicates, and simplify the structure before moving them to Vercel Edge Config. A clean starting point will make your new setup easier to maintain and more reliable in the long run.

Here is a good checklist to follow during the audit:
  • Export all existing redirects from Sitecore or any external sources where they might be managed.
  • Identify and remove obsolete redirects, especially those pointing to pages that no longer exist or have already been redirected elsewhere.
  • Combine duplicate or overlapping entries to ensure a single source of truth for each URL.
  • Validate destination URLs – make sure they’re live and resolve correctly.
  • Categorize by purpose – for example, marketing redirects, content migration redirects, or structural redirects.
  • If you want to store them separately, you can even use different Edge Config stores for each category. This approach can make management easier and reduce the risk of accidental overrides – I have demonstrated this setup in my previous blog.
  • Keep it simple – since we’re dealing with static one-to-one redirects, focus on maintaining clean mappings that are easy to review and maintain.

Define a Flexible JSON Schema

Once you have audited and cleaned up your redirects, the next step is to decide how they will be structured and stored in Edge Config. Unlike Sitecore, where redirects might be stored as content items or within a module, Edge Config uses a key-value data model, which makes JSON the most natural format for managing redirects efficiently.

The goal here is to define a clear and reusable JSON schema that represents your redirects consistently – simple enough to maintain manually, yet flexible enough to scale across multiple environments or stores.

Here’s the schema I used in my implementation:

{
  "/old-page": { 
    "destination": "/new-page", 
    "permanent": true 
  },
  "/legacy-section": { 
    "destination": "/resources", 
    "permanent": false 
  }
}
In this structure:
  • Each key (for example, “/old-page”) is the source path that should be redirected.
  • Each value contains two properties:
    • destination – the target path where the request should redirect.
    • permanent – a boolean flag (true or false) that determines whether the redirect should use a 308 (permanent) or 307 (temporary) status code.

Automate the Export

Once you’ve finalized your redirect list and defined your JSON structure, the next step is to automate the conversion process – so you can easily transform your audited data into the format that Vercel Edge Config expects.

In my implementation, I created a C# console application that automates this step. The tool takes a simple CSV file as input and converts it into the JSON format used by Edge Config.

The CSV file includes three columns: source, destination, permanent. The application reads this CSV and generates a JSON file in the format mentioned in the above section. You can find the complete source code and instructions for this utility on my GitHub repository here: ConvertCsvToJson
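The author's utility is a C# console application (linked above); purely as an illustration of the same transformation, here is a minimal TypeScript sketch that parses the three-column CSV and builds the redirect map, assuming a simple comma-separated file with a `source,destination,permanent` header row and no quoted fields:

```typescript
interface RedirectEntry {
  destination: string;
  permanent: boolean;
}

// Convert "source,destination,permanent" CSV rows into the Edge Config
// JSON shape used in the schema section above. Sources are lowercased
// for consistent matching in the middleware.
function csvToRedirectJson(csv: string): Record<string, RedirectEntry> {
  const out: Record<string, RedirectEntry> = {};
  const lines = csv.trim().split(/\r?\n/);
  for (const line of lines.slice(1)) { // skip the header row
    if (!line.trim()) continue;
    const [source, destination, permanent] = line.split(',').map((c) => c.trim());
    out[source.toLowerCase()] = {
      destination,
      permanent: permanent.toLowerCase() === 'true',
    };
  }
  return out;
}
```

In a Node script, the result of `csvToRedirectJson` would simply be serialized with `JSON.stringify` and uploaded as the store's `redirects` key.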

This approach is both simple and scalable:
  • You can collect and audit redirects collaboratively in a CSV format, which non-developers can easily work with.
  • Once finalized, simply run the console application to convert the CSV into JSON and upload it to Vercel Edge Config.
  • If you have multiple redirect categories or stores, you can generate separate JSON files for each using different input CSVs.
Tip: If you are working with a large set of redirects, this process ensures consistency, eliminates manual JSON editing errors, and provides an auditable version of your data before it’s deployed.
By automating this step, you save significant time and reduce the risk of human error – ensuring your Edge Config store always stays synchronized with your latest validated redirect list.

Validate & Test

Before you roll out your new redirect setup, it’s important to thoroughly validate and test the data and the middleware behavior. This stage ensures your redirects work exactly as expected once they’re moved to Vercel Edge Config.

A solid validation process will help you catch issues early – like typos in paths, invalid destinations, or accidental redirect loops – while maintaining confidence in your migration.

  • Validate that your JSON is correctly formatted, follows your destination + permanent schema, starts with /, and contains no duplicates.
  • Test redirects locally using the JSON generated from your console app to ensure redirects fire correctly, status codes behave as expected, and unmatched URLs load normally.
  • Check for redirect loops or chains so no route redirects back to itself or creates multiple hops.
  • Upload to a preview/test environment and repeat the tests to confirm the middleware works the same with the actual Edge Config store.
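The loop and chain checks in the list above can be automated before upload. A sketch of a pre-upload validator over the redirect JSON (message wording is illustrative):

```typescript
interface Redirect {
  destination: string;
  permanent: boolean;
}

// Validate a redirect map: path format, self-redirects, chains, and loops.
// Returns a list of human-readable problems; empty means the map is clean.
function validateRedirects(map: Record<string, Redirect>): string[] {
  const errors: string[] = [];
  for (const [source, rule] of Object.entries(map)) {
    if (!source.startsWith('/')) {
      errors.push(`source must start with "/": ${source}`);
    }
    if (source === rule.destination) {
      errors.push(`self-redirect: ${source}`);
      continue;
    }
    // Walk onward hops: a revisited path is a loop; any onward hop is a
    // chain that should be flattened to its final target.
    const seen = new Set<string>([source]);
    let hop = rule.destination;
    while (map[hop] !== undefined) {
      if (seen.has(hop)) {
        errors.push(`redirect loop: ${source} -> ... -> ${hop}`);
        break;
      }
      errors.push(`chained redirect (flatten to final target): ${source} -> ${hop}`);
      seen.add(hop);
      hop = map[hop].destination;
    }
  }
  return errors;
}
```

Running this against each generated JSON file (per store) catches typos and loops before they reach a preview environment.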

Gradual Rollout

Once your redirects have been validated locally and in your preview environment, the next step is to roll them out safely and incrementally. The advantage of using Vercel Edge Config is that updates propagate globally within seconds – but that’s exactly why taking a controlled, phased approach is important.

After validating your redirects, roll them out gradually to avoid unexpected issues in production. Begin by deploying your Next.js middleware and Edge Config integration to a preview/test environment. This helps confirm that the application is fetching from the correct store and that updates in Edge Config appear instantly without redeployments.

Once everything looks stable, publish your redirect JSON to the production Edge Config store. Changes propagate globally within seconds, but it’s still good practice to test a few key URLs immediately. If you have logging or analytics set up (such as Analytics or custom logs), monitor request patterns for any unusual spikes, new 404s, or unexpected redirect hits.

If you’re using multiple Edge Config stores, roll them out one at a time to keep things isolated and easier to debug.
And always keep a simple rollback plan – because Edge Config creates a backup each time the JSON is updated, you can always roll back to the previous version with no redeploy required.

Monitor & Maintain

Once your redirects are live in Vercel Edge Config, it’s important to keep an eye on how they behave over time. Redirects aren’t a “set and forget” feature, especially on sites that evolve frequently.

Use logging, analytics, or Vercel’s built-in monitoring to watch for patterns like unexpected 404s, high redirect activity, or missed routes. These signals can help you identify gaps in your redirect set or highlight URLs that need cleanup.

Review and update your redirect JSON regularly. Legacy redirects may become irrelevant as site structures change, so a quick quarterly cleanup helps keep things lean. And since your JSON is version-controlled, maintaining and rolling back changes stays simple and predictable.

If you use multiple Edge Config stores, make sure the separation stays intentional. Periodically check that each store contains only the redirects meant for it – this avoids duplication and keeps your redirect logic easy to understand.

Consistent monitoring ensures your redirect strategy remains accurate, fast, and aligned with your site’s current structure.

 

Migrating redirects from Sitecore to Vercel Edge Config isn’t just a technical shift – it’s an opportunity to simplify how your site handles routing, clean up years of legacy entries, and move this logic to a place that’s faster, cleaner, and easier to maintain. With a thoughtful audit, a clear JSON structure, and an automated export process, the migration becomes surprisingly smooth.

As you move forward, keep an eye on the small details: avoid accidental loops, stay consistent with your paths, and use the permanent flag intentionally. A few mindful checks during rollout and a bit of monitoring afterward go a long way in keeping your redirect setup predictable and high-performing.

Ultimately, this approach not only modernizes how redirects are handled in an XM Cloud setup – it also gives you a structured, version-controlled system that’s flexible for future changes and scalable as your site evolves. It’s a clean foundation you can build on confidently.

Simplifying Redirect Management in Sitecore XM Cloud with Next.js and Vercel Edge Config (Fri, 31 Oct 2025 18:19:55 +0000)
https://blogs.perficient.com/2025/10/31/simplifying-redirects-in-sitecore-xm-cloud-using-vercel-edge-config/

As organizations continue their journey toward composable and headless architectures, the way we manage even simple things like redirects evolves too. Redirects are essential for SEO and user experience, but managing them within a CMS often introduces unnecessary complexity. In this blog, I will share how we streamlined redirect management for a Sitecore XM Cloud + Next.js implementation using Vercel Edge Config  – a modern, edge-based approach that improves performance, scalability, and ease of maintenance.

Why Move Redirects Out of Sitecore?

Traditionally, redirects were managed within Sitecore through redirect items stored in the Content Tree. While functional, this approach introduced challenges such as scattered items and added routing overhead. With Sitecore XM Cloud and Next.js, we now have the opportunity to offload this logic to the frontend layer – closer to where routing happens. By using Vercel Edge Config, redirects are processed at the edge, improving site performance and allowing instant updates without redeployments.

By leveraging Vercel Edge Config and Next.js Middleware, redirects are evaluated before the request reaches the application’s routing or backend systems. This approach ensures:

  1. Redirects are processed before routing to Sitecore.
  2. Updates are instant and do not require deployments.
  3. Configuration is centralized and easily maintainable.

The New Approach: Redirects at the Edge

In the new setup:

  1. Redirect rules are stored in Vercel Edge Config in JSON format.
  2. Next.js middleware runs at the edge layer before routing.
  3. Middleware fetches redirect rules and checks for matches.
  4. Matching requests are redirected immediately – bypassing Sitecore.
  5. Non-matching requests continue to the standard rendering process.

Technical Details and Implementation

Edge Config Setup in Vercel

Redirect rules are stored in Vercel Edge Config, a globally distributed key-value store that allows real-time configuration access at the edge. In Vercel, each project can be linked to one or more Edge Config stores.

You can create Edge Config stores at the project level as well as at the account level. In this post, we will create the store at the account level so that it can be shared across all projects within the account.

Steps:

  1. Open the Vercel Dashboard.
  2. Go to Storage -> Edge Config.
  3. Create a new store (for example: redirects-store).
  4. Add a key named redirects with redirect data in JSON format.
    Example JSON structure:

    {
      "redirects": {
        "/old-page": {
          "destination": "/new-page",
          "permanent": true
        },
        "/old-page/item-1": {
          "destination": "/new-page/item-1",
          "permanent": false
        }
      }
    }
  5. To connect your store to a project, navigate to the Projects tab and click the Connect Project button.

  6. Select the project from the dropdown and click Connect.

  7. Vercel automatically generates a unique Edge Config Connection String for your project, which is stored as an environment variable in your project. This connection string securely links your Next.js app to the Edge Config store. You can choose to edit the environment variable name and token name from the Advanced Options while connecting a project.

  8. Note that an environment variable named EDGE_CONFIG is added by default (if you do not update the name of the env. variable as mentioned in step #7). This environment variable is automatically available inside the Edge Runtime and is used by the Edge Config SDK.

Implementing Redirect Logic in Next.js Middleware

  1. Install the Vercel Edge Config SDK to fetch data from the Edge Config store:
    npm install @vercel/edge-config

    The SDK provides low-latency, read-only access to configuration data replicated across Vercel’s global edge network. Import the SDK and use it within your middleware to fetch redirect data efficiently.

  2. Middleware Configuration: All redirect logic is handled in the middleware.ts file located at the root of the Next.js application. This setup ensures that every incoming request is intercepted, evaluated against the defined redirect rules, and redirected if necessary – before the request proceeds through the rest of the lifecycle.

    Code when using a single store and the default env. variable EDGE_CONFIG:
    import { NextResponse } from 'next/server';
    import type { NextFetchEvent, NextRequest } from 'next/server';
    import { get } from '@vercel/edge-config';
    // Base middleware chain (lib/middleware, as in the multi-store example)
    // to run when no redirect matches. Note: naming this exported function
    // "middleware" would make the fallthrough calls below recurse infinitely.
    import middleware from 'lib/middleware';
    
    export default async function (req: NextRequest, ev: NextFetchEvent) {
      try {
        const pathname = req.nextUrl.pathname;
    
        // Normalize the pathname to ensure consistent matching
        const normalizedPathname = pathname.replace(/\/$/, '').toLowerCase();
    
        // Fetch redirects from Vercel Edge Config using the EDGE_CONFIG connection
        const redirects = await get('redirects');
    
        const redirectEntries = typeof redirects === 'string' ? JSON.parse(redirects) : redirects;
    
        // Match redirect rule
        const redirect = redirectEntries[normalizedPathname];
    
        if (redirect) {
          const statusCode = redirect.permanent ? 308 : 307;
          let destinationUrl = redirect.destination;
          //avoid cyclic redirects
          if (normalizedPathname !== destinationUrl) {
            // Handle relative URLs
            if (!/^https?:\/\//.test(redirect.destination)) {
              const baseUrl = `${req.nextUrl.protocol}//${req.nextUrl.host}`;
              destinationUrl = new URL(redirect.destination, baseUrl).toString();
            }
            return NextResponse.redirect(destinationUrl, statusCode);
          }
        }
    
        return middleware(req, ev);
      } catch (error) {
        console.error('Error in middleware:', error);
        return middleware(req, ev);
      }
    }
    
    export const config = {
      /*
       * Match all paths except for:
       * 1. /api routes
       * 2. /_next (Next.js internals)
       * 3. /sitecore/api (Sitecore API routes)
       * 4. /- (Sitecore media)
       * 5. /healthz (Health check)
       * 6. all root files inside /public
       */
      matcher: ['/', '/((?!api/|_next/|healthz|sitecore/api/|-/|favicon.ico|sc_logo.svg|throw/).*)'],
    };

    Code when using multiple stores and custom environment variables. In this example, there are two Edge Config stores, each linked to its own environment variable: EDGE_CONFIG_CONSTANT_REDIRECTS and EDGE_CONFIG_AUTHORABLE_REDIRECTS. The code first checks for a redirect in the first store, and if not found, it checks the second. An Edge Config Client is required to retrieve values from each store.

    import { NextRequest, NextFetchEvent } from 'next/server';
    import { NextResponse } from 'next/server';
    import middleware from 'lib/middleware';
    import { createClient } from '@vercel/edge-config';
    
    export default async function (req: NextRequest, ev: NextFetchEvent) {
      try {
        const pathname = req.nextUrl.pathname;
    
        // Normalize the pathname to ensure consistent matching
        const normalizedPathname = pathname.replace(/\/$/, '').toLowerCase();
    
        // Fetch Redirects from Store1
        const store1RedirectsClient = createClient(process.env.EDGE_CONFIG_CONSTANT_REDIRECTS);
        const store1Redirects = await store1RedirectsClient.get('redirects');
    
        //Fetch Redirects from Store2
        const store2RedirectsClient = createClient(process.env.EDGE_CONFIG_AUTHORABLE_REDIRECTS);
        const store2Redirects = await store2RedirectsClient.get('redirects');
    
        let redirect;
    
        if (store1Redirects) {
          const redirectEntries =
            typeof store1Redirects === 'string'
              ? JSON.parse(store1Redirects)
              : store1Redirects;
    
          redirect = redirectEntries[normalizedPathname];
        }
    
        // If redirect is not present in permanent redirects, lookup in the authorable redirects store.
        if (!redirect) {
          if (store2Redirects) {
            const store2RedirectEntries =
              typeof store2Redirects === 'string'
                ? JSON.parse(store2Redirects)
                : store2Redirects;
    
            redirect = store2RedirectEntries[normalizedPathname];
          }
        }
    
        if (redirect) {
          const statusCode = redirect.permanent ? 308 : 307;
          let destinationUrl = redirect.destination;
    
          if (normalizedPathname !== destinationUrl) {
            // Handle relative URLs
            if (!/^https?:\/\//.test(redirect.destination)) {
              const baseUrl = `${req.nextUrl.protocol}//${req.nextUrl.host}`;
              destinationUrl = new URL(redirect.destination, baseUrl).toString();
            }
            return NextResponse.redirect(destinationUrl, statusCode);
          }
        }
    
        return middleware(req, ev);
      } catch (error) {
        console.error('Error in middleware:', error);
        return middleware(req, ev);
      }
    }
    
    export const config = {
      /*
       * Match all paths except for:
       * 1. /api routes
       * 2. /_next (Next.js internals)
       * 3. /sitecore/api (Sitecore API routes)
       * 4. /- (Sitecore media)
       * 5. /healthz (Health check)
       * 6. all root files inside /public
       */
      matcher: [
        '/',
        '/((?!api/|_next/|healthz|sitecore/api/|-/|favicon.ico|sc_logo.svg|throw/).*)',
      ],
    };

Summary

With this setup:

  • The Edge Config store is linked to your Vercel project via environment variables.
  • Redirect data is fetched instantly at the Edge Runtime through the SDK.
  • Each project can maintain its own independent redirect configuration.
  • All updates reflect immediately – no redeployment required.

Points to Remember:

  • Avoid overlapping or cyclic redirects.
  • Keep all redirects lowercase and consistent.
  • The Edge Config connection string acts as a secure token – it should never be exposed in the client or source control.
  • Always validate JSON structure before saving in Edge Config.
  • A backup is created on every write, maintaining a version history that can be accessed from the Backups tab of the Edge Config store.
  • Sitecore-managed redirects remain supported when necessary for business or content-driven use cases.

Managing redirects at the edge has made our Sitecore XM Cloud implementations cleaner, faster, and easier to maintain. By shifting this responsibility to Next.js Middleware and Vercel Edge Config, we have created a more composable and future-ready approach that aligns perfectly with modern digital architectures.

At Perficient, we continue to adopt and share solutions that simplify development while improving site performance and scalability. If you are working on XM Cloud or planning a headless migration, this edge-based redirect approach is a great way to start modernizing your stack.

Planning Sitecore Migration: Things to consider (Fri, 29 Aug 2025 10:49:31 +0000)
https://blogs.perficient.com/2025/08/29/planning-sitecore-migration-things-to-consider/

Migrating a website or upgrading to a new Sitecore platform is more than a technical lift — it’s a business transformation and an opportunity to align your site and platform with your business goals and take full advantage of Sitecore’s capabilities. A good migration protects functionality, reduces risk, and creates an opportunity to improve user experience, operational efficiency, and measurable business outcomes.

Before jumping to the newest version or the most hyped architecture, pause and assess. Start with a thorough discovery: review current architecture, understand what kind of migration is required, and decide what can realistically be reused versus what should be refactored or rebuilt, along with suitable topology and Sitecore products.

This blog expands the key considerations before committing to a Sitecore-specific migration, translating them into detailed, actionable architecture decisions and migration patterns that guide impactful implementation.

 

1) Clarifying client requirements

Before starting any Sitecore migration or implementation, it’s crucial to clarify the client’s requirements thoroughly. This ensures the solution aligns with actual business needs, not just technical requests, and helps avoid rework or misaligned outcomes.

Scope goes beyond just features: Don’t settle for “migrate this” as the requirement. Ask deeper questions to shape the right migration strategy:

  • Business goals: Is the aim a redesign, conversion uplift, version upgrade, multi-region rollout, or compliance?
  • Functional scope: Are we redesigning the entire site or specific flows like checkout/login, or making back office changes?
  • Non-functional needs: What are the performance SLAs, uptime expectations, compliance (e.g.: PCI/GDPR), and accessibility standards?
  • Timeline: Is a phased rollout preferred, or a big-bang launch?

Requirements can vary widely, from full redesigns using Sitecore MVC or headless (JSS/Next.js), to performance tuning (caching, CDN, media optimization), security enhancements (role-based access, secure publishing), or integrating new business flows into Sitecore workflows.

Sometimes, the client may not fully know what’s needed; it’s up to us to assess the current setup and recommend improvements. Don’t assume the ask equals the need – a full rewrite isn’t always the best path. A focused pilot or proof of value can deliver better outcomes and helps validate the direction before scaling.

 

2) Architecture of the client’s system

Migration complexity varies significantly based on what the client is currently using. You need to evaluate the current system, how it is used, and what can be reused.

Key Considerations

  • If the client is already on Sitecore, the version matters. Older versions may require reworking the content model, templates, and custom code to align with modern Sitecore architecture (e.g.: SXA, JSS).
  • If the client is not on Sitecore, evaluate their current system, infrastructure, and architecture. Identify what can be reused—such as existing servers (in the case of on-prem), services, or integrations—to reduce effort.
  • Legacy systems often include deprecated APIs, outdated connectors, or unsupported modules, which increase technical risk and require reengineering.
  • Historical content, such as outdated media, excessive versioning, or unused templates, can bloat the migration. It’s important to assess what should be migrated, cleaned, or archived.
  • Map out all customizations, third-party integrations, and deprecated modules to estimate the true scope, effort, and risk involved.
  • Understanding the current system’s age, architecture, and dependencies is essential for planning a realistic and efficient migration path.

 

3) Media Strategy

When planning a Sitecore migration or upgrade, media handling is easy to overlook but can lead to major performance issues post-launch. It is critical for user experience, scalability, and operational efficiency, so it needs attention early in the planning phase. Digital Asset Management (DAM) determines how assets are stored, delivered, and governed.

Key Considerations

  • Inventory: Assess media size, formats, CDN references, metadata, and duplicates. Identify unused assets, and plan to adopt modern formats (e.g., WebP).
  • Storage Decisions: Analyze and decide whether assets stay in the Sitecore Media Library, move to Content Hub, or use other cloud storage (Azure Blob, S3).
  • Reference Updates: Plan for content reference updates to avoid broken links.
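As a concrete aid for the inventory step, a small script can group an exported media list by format to surface conversion candidates and size hotspots. This is an illustrative sketch only: the `MediaRecord` shape is an assumption for this example, not a Sitecore API, and in practice the data would come from an Experience Edge GraphQL query or a PowerShell export.

```typescript
// Illustrative sketch: summarize a media inventory export by file format.
// MediaRecord is an assumed shape for this example, not a Sitecore API.
type MediaRecord = { path: string; extension: string; sizeBytes: number };

// Group records by (lowercased) extension with counts and total bytes.
export function summarizeByFormat(records: MediaRecord[]) {
  const summary = new Map<string, { count: number; totalBytes: number }>();
  for (const r of records) {
    const ext = r.extension.toLowerCase();
    const entry = summary.get(ext) ?? { count: 0, totalBytes: 0 };
    entry.count += 1;
    entry.totalBytes += r.sizeBytes;
    summary.set(ext, entry);
  }
  return summary;
}

// Legacy raster formats that are usually worth converting to WebP/AVIF.
export function conversionCandidates(records: MediaRecord[]): string[] {
  const legacy = new Set(['jpg', 'jpeg', 'png', 'bmp']);
  return records
    .filter((r) => legacy.has(r.extension.toLowerCase()))
    .map((r) => r.path);
}
```

A summary like this makes it easy to decide what to migrate, convert, or archive before the move.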

 

4) Analytics, personalization, A/B testing, and forms

These features often carry stateful data and behavioral dependencies that can easily break during migration if not planned for. Ignoring them can lead to data loss and degraded user experience.

Key Considerations

  • Analytics: Check whether xDB, Google Analytics, or other trackers are in use. Decide how historical analytics data will be preserved, validated, and integrated into the new environment.
  • Personalization: Confirm use of Sitecore rules, xConnect collections, or an external personalization engine. Plan to migrate segments, conditions, and audience definitions accurately.
  • A/B Testing & Experiments: Draft a plan to export experiment definitions and results, if any are present.
  • Forms: Identify which forms collect data and how they integrate with CRM or marketing automation.

These considerations play an important role in choosing a Sitecore topology: if analytics is used heavily, XP may be a suitable option, and form submission consent flows are handled differently across topologies.

 

5) Search Strategy

Search is critical for user experience, and a migration is the right time to reassess whether your current search approach still makes sense.

Key Considerations

  • Understand how users interact with the site. Is search a primary navigation tool or a secondary feature? Does it significantly impact conversion or engagement?
  • Identify the current search engine, if any, and assess its features: are advanced capabilities like AI recommendations, synonyms, or personalization being used effectively?
  • If the current engine is underutilized, note that maintaining it may add unnecessary cost and complexity. If search is business-critical, ensure feature parity or enhancement in the new architecture.
  • Future Alignment:  Based on requirements, determine whether the roadmap supports:
    • Sitecore Search (SaaS) for composable and cloud-first strategies.
    • Solr for on-prem or PaaS environments.
    • Third-party engines for enterprise-wide search needs.

 

6) Integrations, APIs & Data Flows

Integrations are often the hidden complexity in Sitecore migrations. They connect critical business systems, and any disruption can lead to post-go-live incidents. For small, simple content-based sites with no integrations, migrations tend to be quick and straightforward. However, for more complex environments, it’s essential to analyze all layers of the architecture to understand where and how data flows.

Key Considerations

  • Integration Inventory: List all synchronous and asynchronous integrations, including APIs, webhooks, and data pipelines. Some integrations may rely on deprecated endpoints or legacy SDKs that need refactoring.
  • Criticality & Dependencies: Identify mission-critical integrations (e.g.: CRM, ERP, payment gateways).
  • Batch & Scheduled Jobs: Audit long-running processes, scheduled exports, and batch jobs. Migration may require re-scheduling or re-platforming these jobs.
  • Security & Compliance: Validate API authentication, token lifecycles, and data encryption. Moving to SaaS or composable may require new security patterns.

 

7) Identify which Sitecore offerings are in use — and to what extent

Before migration, it’s essential to document the current Sitecore ecosystem and evaluate what the future state should look like. This determines whether the path is a straight upgrade or a transition to a composable stack.

Key Considerations

  • Current Topology: Is the solution running on XP or XM? Assess whether XP features (xDB, personalization) will still be needed if moving to composable.
  • Content Hub: Check if DAM or CMP is in use. If not, consider whether DAM is required for centralized asset management, brand consistency, and omnichannel delivery.
  • Sitecore Personalize & CDP: Assess if personalization is currently rule-based or if advanced testing and segmentation are required.
  • OrderCloud: Determine whether commerce capabilities exist today or are planned for the near future.

 

Target Topologies

One of the most critical decisions is choosing the target architecture. This choice impacts infrastructure, licensing, compliance, authoring experience, and long-term scalability. It’s not just a technical decision—it’s a business decision that shapes your future operating model.

Key Considerations

  • Business Needs & Compliance: Does your organization require on-prem hosting for regulatory reasons, or can you move to SaaS for agility?
  • Authoring Experience: Will content authors need Experience Editor, or is a headless-first approach acceptable?
  • Operational Overhead: How much infrastructure management can the team handle post-migration?
  • Integration Landscape: Are there tight integrations with legacy systems that require full control over infrastructure?

Architecture Options & Assumptions

  • XM (on-prem/PaaS)
    • Best For: CMS-only needs, multilingual content, custom integrations
    • Pros: Visual authoring via Experience Editor; hosting control
    • Cons: Limited marketing features
    • Assumptions: Teams want hosting flexibility and basic CMS capabilities, but analytics is not needed
  • Classic XP (on-prem/PaaS)
    • Best For: Advanced personalization, xDB, marketing automation
    • Pros: Full control; deep analytics; advanced marketing personalization
    • Cons: Complex infrastructure, high resource demand
    • Assumptions: Marketing features are critical; an infra-heavy setup is acceptable
  • XM Cloud (SaaS)
    • Best For: Agility, fast time-to-market, composable DXP
    • Pros: Reduced overhead; automatic updates; headless-ready
    • Cons: Limited low-level customization
    • Assumptions: SaaS regions meet compliance; easy upgrades are needed

 

Along with topology, it’s important to consider the hosting and front-end delivery platform. Let’s look at the available hosting options with their pros and cons:

  • On-Prem (XM/XP): You build and manage exactly the type of machines you want.
    • Pros: Maximum control, full compliance for regulated industries, and ability to integrate with legacy systems.
    • Cons: High infrastructure cost, slower innovation, manual upgrades, and difficulty scaling.
    • Best For: Organizations with strict data residency, air-gapped environments, or regulatory mandates.
    • Future roadmap may require migration to cloud, so plan for portability.
  • PaaS (Azure App Services, Managed Cloud – XM/XP)
    • Pros: Minimal up-front costs and you do not need to be concerned about the maintenance of the underlying machine.
    • Cons: Limited choice of computing options and functionality.
    • Best For: Organizations expecting to scale vertically and horizontally, often and quickly
  • IaaS (Infrastructure as a service – XM/XP)
    • This is similar to on-premises, but with VMs you can tailor servers to meet your exact requirements.
  • SaaS (XM Cloud)
    • Pros: Zero infrastructure overhead, automatic upgrades, global scalability.
    • Cons: Limited deep customization at infra level.
    • Best For: Organizations aiming for composable DXP and agility.
    • Fully managed by Sitecore (SaaS).

For development, you have several options, for example .NET MVC, .NET Core, Next.js, and React. Depending on the suggested topology, front-end delivery can be hybrid or headless:

.NET MVC → For traditional, web-only applications.
Headless → For multi-channel, composable, SaaS-first strategy.
.NET Core Rendering → For hybrid modernization with .NET.

 

8) Security, Compliance & Data Residency

Security is non-negotiable during any Sitecore migration or upgrade. These factors influence architecture, hosting choices and operational processes.

Key Considerations

  • Authentication & Access: Validate SSO, SAML/OAuth configurations, API security, and secrets management. Assume that identity providers or token lifecycles may need reconfiguration in the new environment.
  • Compliance Requirements: Confirm obligations like PCI, HIPAA, GDPR, accessibility, and regional privacy laws. These will impact data storage and encryption, and, with AI now part of the picture, even development workflows.
  • Security Testing: Plan for automated vulnerability scans (decide which tools you will use) and manual penetration testing as part of discovery and pre-go-live validation.

 

9) Performance

A migration is the perfect opportunity to identify and fix structural performance bottlenecks, but only if you know your starting point. Without a baseline, it’s impossible to measure improvement or detect regressions.

Key Considerations

  • Baseline Metrics: Capture current performance indicators like TTFB (Time to First Byte), LCP (Largest Contentful Paint), CLS (Cumulative Layout Shift), throughput, and error rates. These metrics will guide post-migration validation and SLA commitments.
  • Caching & Delivery: Document existing caching strategies, CDN usage, and image delivery methods. Current caching patterns may need reconfiguration in the new architecture.
  • Load & Stress Testing: Define peak traffic scenarios and plan load testing tools with Concurrent Users and Requests per Second.
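Baseline numbers are only comparable if they are summarized the same way before and after migration. The helper below is an illustrative sketch (not tied to any specific monitoring tool) that computes a percentile, such as the p75 commonly used in Core Web Vitals reporting, over raw metric samples:

```typescript
// Illustrative sketch: compute a percentile (e.g. p75) over raw metric
// samples such as TTFB or LCP values collected during baselining.
export function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error('no samples');
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank method: smallest value covering p% of the samples.
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Example baseline summary (p75 per metric) for post-migration comparison.
export function baseline(metrics: Record<string, number[]>) {
  const out: Record<string, number> = {};
  for (const [name, samples] of Object.entries(metrics)) {
    out[name] = percentile(samples, 75);
  }
  return out;
}
```

Capturing a baseline like `baseline({ ttfbMs: [...], lcpMs: [...] })` before migration gives you a concrete target for SLA validation afterward.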

 

10) Migration Strategies

Choosing the right migration strategy is critical to balance risk, cost, and business continuity. There’s no one-size-fits-all approach—the choice depends on timeline, technical debt, and operational constraints.

Common Approaches

    • Lift & Shift
      Move the existing solution as-is with minimal changes.
      Best for low-risk migrations where speed is the priority and the current solution is stable with manageable technical debt.
      However, with this approach existing issues and inefficiencies carry over unchanged, which can be harmful.

 

    • Phased (Module-by-Module)
      Migrate critical areas first (e.g.: product pages, checkout) and roll out iteratively.
      This suits large, complex sites where risk must be minimized and business continuity is critical.
      With this approach, timelines are longer and dual maintenance is required during the transition.

 

    • Rewrite & Cutover
      Rebuild the solution from scratch and switch over at once.
      Choose this when the current system doesn’t align with the future architecture, or when the business wants a clean slate for modernization.

 

 

The right option depends on several factors: Can the business tolerate downtime or dual maintenance? What are the timelines and budget? Is the current solution worth preserving, or is a rewrite inevitable? Does the strategy align with future goals?

 

Final Thoughts

Migrating to Sitecore is a strategic move that can unlock powerful capabilities for content management, personalization, and scalability. However, success lies in the preparation. By carefully evaluating your current architecture, integration needs, and team readiness, you can avoid common pitfalls and ensure a smoother transition. Taking the time to plan thoroughly today will save time, cost, and effort tomorrow, setting the stage for a future-proof digital experience platform.

 

5 Reasons Companies Are Choosing Sitecore SaaS https://blogs.perficient.com/2025/08/27/5-reasons-companies-are-choosing-sitecore-saas/ https://blogs.perficient.com/2025/08/27/5-reasons-companies-are-choosing-sitecore-saas/#respond Wed, 27 Aug 2025 14:24:10 +0000 https://blogs.perficient.com/?p=386630

The move to SaaS is one of the biggest shifts happening in digital experience. It’s not just about technology; it’s about making platforms simpler, faster, and more adaptable to the pace of customer expectations.

Sitecore has leaned in with a clear vision: “It’s SaaS. It’s Simple. It’s Sitecore.”

Here are five reasons why more organizations are turning to Sitecore SaaS to power their digital experience strategies:

1. Simplicity: A Modern Foundation

Sitecore SaaS solutions like XM Cloud remove the burden of managing infrastructure and upgrades.

  • No more complex version upgrades, updates happen automatically.
  • Reduced reliance on IT for day-to-day maintenance.
  • A leaner, more cost-effective foundation for marketing teams.

By simplifying operations, companies can focus on what matters most: delivering exceptional digital experiences.

2. Speed-to-Value: Launch Faster

Traditional DXPs can take months (or more) to implement and optimize. Sitecore SaaS is designed for speed:

  • Faster deployments with prebuilt components.
  • Seamless integrations with other SaaS and cloud tools.
  • Empowerment for marketers to build and launch campaigns without heavy dev cycles.

Organizations adopting Sitecore SaaS are moving from planning to execution faster than ever.

3. Scalability: Grow Without Rebuilds

As customer expectations grow, so does the need to scale digital experiences quickly. Sitecore SaaS allows companies to:

  • Spin up new sites, regions, or languages without starting from scratch.
  • Adjust to spikes in demand without disruption.
  • Add capabilities as the business evolves — without heavy upfront investment.

This scalability ensures brands can adapt as fast as their audiences do.

4. Continuous Innovation: Always Current

One of the most frustrating parts of traditional platforms is the upgrade cycle. Sitecore SaaS solves this with:

  • Automatic access to the latest innovations — no disruptive “big bang” upgrades.
  • Built-in adoption of emerging technologies like AI and machine learning.
  • A platform that’s always modern, not years behind.

With Sitecore SaaS, companies get a future-proof DXP that evolves with them.

5. Composability Without the Complexity

Composable DXPs promise flexibility, but without the right foundation they can feel overwhelming. Sitecore SaaS makes composability practical:

  • Start with XM Cloud as a core CMS foundation.
  • Add personalization, commerce, or search when ready.
  • Use APIs to integrate best-of-breed tools, without losing control.

This approach ensures organizations adopt what they need, when they need it without the complexity of managing multiple disconnected systems.

Why it Matters

Companies aren’t moving to Sitecore SaaS just to keep up with technology. They’re moving because it makes their organizations more agile, efficient, and competitive. SaaS with Sitecore means simpler operations, faster launches, continuous innovation, and a platform that grows alongside your business.

Deconstructing the Request Lifecycle in Sitecore Headless – Part 2: SSG and ISR Modes in Next.js https://blogs.perficient.com/2025/08/20/deconstructing-the-request-lifecycle-in-sitecore-headless-part-2-ssg-and-isr-modes-in-next-js/ https://blogs.perficient.com/2025/08/20/deconstructing-the-request-lifecycle-in-sitecore-headless-part-2-ssg-and-isr-modes-in-next-js/#comments Wed, 20 Aug 2025 07:43:59 +0000 https://blogs.perficient.com/?p=385891

In my previous post, we explored the request lifecycle in a Sitecore headless application using Next.js, focusing on how Server Side Rendering (SSR) works in tandem with Sitecore’s Layout Service and the Next.js middleware layer. But that’s only one part of the story.

This follow-up post dives into Static Site Generation (SSG) and Incremental Static Regeneration (ISR) – two powerful rendering modes offered by Next.js that can significantly boost performance and scalability when used appropriately in headless Sitecore applications.

Why SSG and ISR Matter

In Sitecore XM Cloud-based headless implementations, choosing the right rendering strategy is crucial for balancing performance, scalability, and content freshness. Static Site Generation (SSG) pre-renders pages at build time, producing static HTML that can be instantly served via a CDN. This significantly reduces time-to-first-byte (TTFB), minimizes server load, and is ideal for stable content like landing pages, blogs, and listing pages.

Incremental Static Regeneration (ISR) builds on SSG by allowing pages to be regenerated in the background after deployment, based on a configurable revalidation interval. This means you can serve static content with the performance benefits of SSG, while still reflecting updates without triggering a full site rebuild.

These strategies are especially effective in Sitecore environments where:

  • Most pages are relatively static and don’t require real-time personalization.
  • Content updates are frequent but don’t demand immediate global propagation.
  • Selective regeneration is acceptable, enabling efficient publishing workflows.

For Sitecore headless implementations, understanding when and how to use these strategies is key to delivering scalable, performant experiences without compromising on content freshness.

SSG with Sitecore JSS: The Request Lifecycle

In a Static Site Generation (SSG) setup, the request lifecycle transitions from being runtime-driven (like SSR) to being build-time-driven. This fundamentally alters how Sitecore, Next.js, and the JSS application work together to produce HTML. Here’s how the lifecycle unfolds in the context of Sitecore JSS with Next.js:

1. Build-Time Route Generation with getStaticPaths

At build time, Next.js executes the getStaticPaths function to determine which routes (i.e., Sitecore content pages) should be statically pre-rendered. This typically involves calling Sitecore’s Sitemap Service or querying layout paths via GraphQL or REST.
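A minimal sketch of this step is shown below. The JSS SDK ships its own sitemap services, so treat the stubbed entries and the exact fetching mechanism here as assumptions for illustration; only the shape of the returned paths is dictated by Next.js:

```typescript
// Sketch of build-time route generation. SitemapEntry and the stubbed data
// are assumptions; in a real JSS app the paths typically come from the SDK's
// sitemap service or an Experience Edge GraphQL query.
type SitemapEntry = { path: string };

// Pure helper: convert Sitecore item paths into Next.js catch-all route params.
export function toStaticPaths(entries: SitemapEntry[]) {
  return entries.map((e) => ({
    params: { path: e.path.split('/').filter(Boolean) },
  }));
}

export async function getStaticPaths() {
  // Stubbed for illustration; replace with a real sitemap fetch.
  const entries: SitemapEntry[] = [{ path: '/' }, { path: '/products/widgets' }];
  return {
    paths: toStaticPaths(entries),
    // 'blocking' lets routes not listed here render on first request
    // instead of returning a 404.
    fallback: 'blocking' as const,
  };
}
```

The `fallback` setting matters in Sitecore contexts: editors create new pages after the build, and `'blocking'` lets those pages render on demand.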

2. Layout Data Fetching with getStaticProps

For every path returned by getStaticPaths, Next.js runs getStaticProps to fetch the corresponding layout data from Sitecore. This data is fetched via the Sitecore Layout Service REST endpoint or the Experience Edge GraphQL endpoint.

At this stage:

  • Sitecore’s middleware is not executed.
  • There is no personalization, since requests are not user-specific.
  • The component factory in the JSS app maps layout JSON to UI components and renders them to static HTML.

3. Static HTML Generation

Next.js compiles the entire page into an HTML file using:

  • The Layout JSON from Sitecore.
  • Mapped UI components from the JSS component factory.
  • Placeholder content populated during build

This results in fully static HTML output that represents the Sitecore page as it existed at build time.

4. Deployment & Delivery via CDN

Once built, these static HTML files are deployed to a CDN or static hosting platform (e.g., Vercel, Netlify), enabling:

  • Sub-second load times as no runtime rendering is required.
  • Massively scalable delivery.

5. Runtime Request Handling

When a user requests a statically generated page:

  • CDN Cache Hit: The CDN serves the pre-built HTML directly from cache
  • No Server Processing: No server-side computation occurs
  • Client-Side Hydration: React hydrates the static HTML, making it interactive
  • Instant Load: Users experience near-instantaneous page loads

Incremental Static Regeneration (ISR): The Best of Both Worlds

While SSG provides excellent performance, it has a critical limitation: content becomes stale immediately after build. ISR addresses this by enabling selective page regeneration in the background, maintaining static performance while ensuring content freshness.

ISR Request Lifecycle in Sitecore JSS Applications

1. Initial Request (Cached Response)

When a user requests an ISR-enabled page:

import type { GetStaticProps } from 'next';

export const getStaticProps: GetStaticProps = async (context) => {
  // fetchLayoutData is the app's own helper for calling the Layout Service
  // or Experience Edge (implementation not shown here).
  const layoutData = await fetchLayoutData(context.params?.path);

  return {
    props: { layoutData },
    revalidate: 3600, // Regenerate in the background at most once per hour
  };
};
  • Next.js checks if a static version exists and is within the revalidation window
  • If valid, the cached static HTML is served immediately
  • If the cache is stale (beyond revalidation time), Next.js triggers background regeneration
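Conceptually, the decision Next.js makes here can be modeled with a small staleness check. This is an illustrative model of ISR's stale-while-revalidate behavior, not Next.js internals:

```typescript
// Illustrative model of the ISR decision: the cached page is always served
// immediately, and background regeneration is additionally triggered once
// the revalidation window has passed.
export function isrDecision(
  generatedAtMs: number,
  revalidateSeconds: number,
  nowMs: number
): { serve: 'cached'; regenerate: boolean } {
  const ageSeconds = (nowMs - generatedAtMs) / 1000;
  return { serve: 'cached', regenerate: ageSeconds >= revalidateSeconds };
}
```

Note that the user who triggers regeneration still receives the stale page; only subsequent visitors see the refreshed content.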

2. Background Regeneration Process

When regeneration is triggered:

  1. Next.js makes a fresh API call to Sitecore’s Layout Service or GraphQL endpoint
  2. Sitecore resolves the current item, applies any layout changes, and returns updated JSON
  3. The JSS component factory processes the new layout data
  4. The newly rendered HTML replaces the cached version
  5. Updated content propagates across the CDN network

3. Subsequent Requests

After regeneration completes:

  • New requests serve the updated static content
  • The cycle repeats based on the revalidation interval
  • Users always receive static performance, even during regeneration

Best Practices for Sitecore SSG/ISR Implementation

When implementing SSG and ISR in Sitecore headless applications, align your rendering strategy with content characteristics:

  • Use SSG for truly static pages like landing pages.
  • Use ISR for semi-dynamic content such as blogs and product catalogs, with appropriate revalidation intervals (5 minutes for news, 30 minutes for blogs, 1 hour for products).
  • Continue using SSR for personalized experiences.
  • Focus on selective pre-rendering: build only high-traffic, SEO-critical, and core user-journey pages at build time, and use fallback strategies for less critical content.
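The interval guidance above can be captured in a small lookup used by getStaticProps. The content-type names and values here simply mirror the examples in this section:

```typescript
// Revalidation intervals (in seconds) mirroring the guidance above:
// 5 minutes for news, 30 minutes for blogs, 1 hour for product pages.
const REVALIDATE_SECONDS: Record<string, number> = {
  news: 5 * 60,
  blog: 30 * 60,
  product: 60 * 60,
};

// Fall back to a conservative one-hour default for unknown content types.
export function revalidateFor(contentType: string): number {
  return REVALIDATE_SECONDS[contentType] ?? 60 * 60;
}
```

In getStaticProps this becomes `revalidate: revalidateFor(route.templateName)`, keeping the policy in one place instead of scattered per page.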

Conclusion: Choosing the Right Strategy

The choice between SSR, SSG, and ISR isn’t binary – modern Sitecore headless applications often employ a hybrid approach:

  • SSG for truly static content that rarely changes
  • ISR for content that updates periodically but doesn’t require real-time freshness
  • SSR for personalized experiences and rapidly changing content

By understanding the request lifecycle for each rendering strategy, you can architect Sitecore headless solutions that deliver exceptional performance while maintaining content flexibility. The key is aligning your technical approach with your content strategy and user experience requirements. So, choose your rendering strategy wisely!

Deconstructing the Request Lifecycle in Sitecore Headless (with a JSS + Next.js Deep Dive) https://blogs.perficient.com/2025/07/31/deconstructing-the-request-lifecycle-in-sitecore-headless-with-a-jss-next-js-deep-dive/ https://blogs.perficient.com/2025/07/31/deconstructing-the-request-lifecycle-in-sitecore-headless-with-a-jss-next-js-deep-dive/#respond Thu, 31 Jul 2025 17:48:34 +0000 https://blogs.perficient.com/?p=385650

In the era of traditional Sitecore MVC, the rendering lifecycle was tightly coupled to the Sitecore server. HTML generation, content retrieval, and presentation logic were all orchestrated within a single monolithic application. With the advent of headless architectures built using Sitecore JSS and platforms like XM Cloud, this paradigm has significantly shifted. Rendering responsibilities now move to decoupled frontend applications, enabling greater flexibility, scalability, and performance.

The responsibility for rendering has been decoupled and offloaded to a dedicated front-end application (e.g., React, Next.js, Vue.js), transforming Sitecore into a highly optimized content and layout delivery platform via robust APIs. For developers building Sitecore headless applications, a profound understanding of how a request traverses from the browser, through the front-end rendering host, interacts with Sitecore, and ultimately returns a rendered page, is paramount. This intricate knowledge forms the bedrock for effective debugging and advanced performance optimization.

This blog post meticulously breaks down:

  • The generalized request processing flow in Sitecore headless applications.
  • The specific instantiation of this flow within JSS applications built leveraging the Next.js framework.
  • Debugging tips.

Sitecore XM Cloud and other headless Sitecore setups embody the principle of separation of concerns, decoupling content management from presentation logic. Rather than Sitecore generating the final HTML markup, your front-end rendering application (React/Next.js) dynamically fetches content and layout data via API endpoints and orchestrates the rendering process, whether client-side or server-side. Comprehending this architectural decoupling is critical for engineering performant, scalable, flexible, and personalized digital experiences.

The General Request Flow in Sitecore Headless Architectures

Irrespective of the specific front-end rendering host, the foundational request processing flow in Sitecore headless applications remains consistent:

  1. Client Request Initiation: A user initiates a request by navigating to a specific URL (e.g., https://www.example.com/about) in their web browser. This request is directed towards your front-end rendering host.
  2. Front-end Rendering Host Interception: The front-end rendering host (e.g. a Next.js application deployed on Vercel, or Netlify) receives the incoming HTTP request.
  3. Data Fetching from Sitecore: The rendering host, acting as a data orchestrator, makes an API call to Sitecore to retrieve the necessary page layout and content data. This can occur via two primary mechanisms:
    • Sitecore Layout Service : A traditional RESTful endpoint that delivers a comprehensive JSON representation of the page’s layout, components, and associated field values. This service is part of the Sitecore Headless Services module.
    • Sitecore Experience Edge GraphQL API: A more flexible and performant GraphQL endpoint that allows for precise data querying. This is the preferred mechanism for XM Cloud-native applications, providing a single endpoint for diverse data retrieval.  Critical parameters passed in this request typically include route (the requested URL path), sc_lang (the desired content language), sc_site (the target Sitecore site definition), and potentially additional context parameters for personalization or A/B testing.
  4. Sitecore Route and Context Resolution: Upon receiving the data request, the following server-side operations are performed:
    • Item Resolution: It resolves the incoming route parameter to a specific Sitecore content item within the content tree, based on defined route configurations (e.g., sitecore/content/MyTenant/MySite/About).
    • Context Establishment: It establishes the current request context, including the site, language, user session, and personalized page variant.
    • Layout Computation: Based on the resolved item and evaluated personalization, Sitecore computes the final page layout, including the arrangement of renderings within placeholders and the specific data sources for each component.
  5. Sitecore Response Generation: A structured JSON payload is returned to the rendering host. This payload typically includes:
    • Layout Metadata: Information about the overall page structure, placeholder definitions, and associated rendering components.
    • Component Data: For each component on the page, its type (e.g., “Hero”, “RichText”), its associated data source item ID (if applicable), and all serialized field values (e.g., Title, Body, Image).
  6. Front-end Rendering: The rendering host receives the JSON payload and, using its component factory (a mapping between Sitecore component names and UI component implementations), dynamically constructs the HTML for the requested page.
    • Component Mapping: Each JSON-defined component type is mapped to its corresponding React/Next.js UI component.
    • Data Binding: The serialized field values from the JSON are passed as props to the UI components.
    • Placeholder Resolution: The rendering host iterates through the placeholder definitions in the JSON, rendering child components into their designated placeholder regions.
    • Client-side Hydration: For server-rendered applications (SSR/SSG), the initial HTML is sent to the browser, where React then “hydrates” it, attaching event listeners and making the page interactive.
    • Post-render Actions: Any client-side personalization or analytics integration (e.g., Sitecore Personalize Engage SDK) may occur after the initial page render.
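To make step 3 concrete, here is a sketch of how a rendering host might build a Layout Service request. The `/sitecore/api/layout/render/jss` route is the conventional Headless Services REST endpoint, but the host name and API-key handling are placeholder assumptions for your environment (the JSS SDK normally wraps this call for you):

```typescript
// Sketch: building a Sitecore Layout Service request URL with the context
// parameters described above (item route, sc_lang, sc_site, sc_apikey).
export function buildLayoutServiceUrl(
  host: string,
  item: string,
  opts: { language?: string; site?: string; apiKey?: string } = {}
): string {
  const url = new URL('/sitecore/api/layout/render/jss', host);
  url.searchParams.set('item', item);
  url.searchParams.set('sc_lang', opts.language ?? 'en');
  if (opts.site) url.searchParams.set('sc_site', opts.site);
  if (opts.apiKey) url.searchParams.set('sc_apikey', opts.apiKey);
  return url.toString();
}

// Usage inside a data-fetching function (hypothetical host and site names):
//   const res = await fetch(
//     buildLayoutServiceUrl('https://cm.example.com', '/about', { site: 'mysite' })
//   );
//   const { sitecore } = await res.json(); // layout + route data
```

The JSON under `sitecore.route` is what the component factory consumes in step 6.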

Key takeaway: In a headless setup, Sitecore acts as the intelligent provider of content and layout data via APIs, while the front-end application takes full ownership of rendering the final HTML and handling all user interface interactions.

Deep Dive: Request Lifecycle in JSS + Next.js Applications

The general headless flow finds its specific implementation within a JSS application leveraging the Next.js framework, benefiting from Next.js’s powerful data fetching and rendering capabilities. We’ll focus specifically on Server-Side Rendering (SSR) here, while a separate post will cover Static Site Generation (SSG) and Incremental Static Regeneration (ISR).

1. User Request Initiation

A user navigates to a specific route, such as /products, initiating an HTTP GET request directed to your deployed Next.js application, which acts as the unified rendering endpoint.

2. Next.js Middleware and Sitecore Add-on Integration (Edge-Based Execution)

If implemented, the middleware.ts file in your Next.js application executes at the Edge (close to the user) before the request even reaches your application’s pages. This provides an opportune moment for early request manipulation and context enrichment:

  • Authentication & Authorization: Redirecting unauthorized users or validating session tokens.
  • Request Rewrites & Redirects: URL transformations based on dynamic conditions.
  • Header Manipulation: Injecting custom headers or modifying existing ones.
  • Contextual Data Injection: Reading user-specific cookies, geolocation data, and potentially passing this context to downstream services via HTTP headers.
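The authentication check above can be sketched as follows. To keep the example self-contained, the routing logic is factored into a pure function; in a real middleware.ts you would translate its result into NextResponse.redirect / NextResponse.next calls. The /account protected prefix and the session-cookie name are illustrative assumptions:

```typescript
// Pure routing decision used by an Edge middleware sketch. The '/account'
// prefix and 'session' cookie name are illustrative assumptions.
type MiddlewareDecision =
  | { action: 'next' }
  | { action: 'redirect'; to: string };

export function decideRequest(pathname: string, hasSession: boolean): MiddlewareDecision {
  // Authentication check: send anonymous users away from protected routes.
  if (pathname.startsWith('/account') && !hasSession) {
    return { action: 'redirect', to: '/login' };
  }
  return { action: 'next' };
}

// In middleware.ts this would be wired up roughly as:
//   export function middleware(req: NextRequest) {
//     const d = decideRequest(req.nextUrl.pathname, req.cookies.has('session'));
//     return d.action === 'redirect'
//       ? NextResponse.redirect(new URL(d.to, req.url))
//       : NextResponse.next();
//   }
```

Keeping the decision logic pure also makes it easy to unit test without spinning up the Edge runtime.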

This middleware layer is where the Sitecore XM Cloud JSS Next.js add-ons particularly shine, streamlining complex Sitecore-specific functionalities.

2.1 Sitecore JSS Next.js Add-ons: Extending Middleware Capabilities

Sitecore provides specialized add-ons for JSS Next.js applications that are designed to integrate seamlessly with Next.js middleware, enhancing data fetching and other critical functionalities at the Edge. These add-ons abstract away much of the boilerplate code, allowing developers to focus on business logic.

Key add-ons that are relevant to the request lifecycle and compatible with XM Cloud include:

  • SXA (nextjs-sxa):
    • Includes example components and the setup for Headless SXA projects.
  • Next.js Multisite Add-on (nextjs-multisite):
    • Enables a single Next.js rendering host to serve multiple Sitecore sites.
    • Leverages middleware to resolve the correct Sitecore site (sc_site parameter) based on the incoming request’s hostname, path, or other routing rules. This ensures the correct site context is passed to the Layout Service or GraphQL calls.
    • Often uses a GraphQLSiteInfoService to fetch site definitions from Sitecore Experience Edge at build time or runtime.
  • Next.js Personalize Add-on (nextjs-personalize):
    • Integrates with Sitecore Personalize (formerly Boxever/CDP) for advanced client-side personalization and experimentation.
    • Its core component, the PersonalizeMiddleware, is designed to run at the Edge.
    • The PersonalizeMiddleware makes a call to Sitecore Experience Edge to fetch personalization information (e.g., page variants).
    • It then interacts with the Sitecore CDP endpoint using the request context to determine the appropriate page variant for the current visitor.
    • Crucially, if a personalized variant is identified, the middleware can perform a rewrite of the request path (e.g., to /_variantId_<variantId>/<original-path>). This personalized rewrite path is then read by the Next.js app to manipulate the layout and feed into client-side Page View events. This allows client-side logic to render the specific personalized content.
    • Also includes a Page Props Factory plugin to simplify data retrieval for personalized content.
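The rewrite convention used by the PersonalizeMiddleware can be illustrated with two small helpers. The path format mirrors the /_variantId_&lt;variantId&gt;/&lt;original-path&gt; rewrite described above; the function names are hypothetical, not part of the JSS SDK:

```typescript
// Build the personalized rewrite path the middleware produces.
function toPersonalizedRewrite(path: string, variantId: string): string {
  return `/_variantId_${variantId}${path}`;
}

// Inverse: recover the variant and original path from a rewritten URL,
// as the Next.js app must do when rendering and firing Page View events.
function parsePersonalizedRewrite(
  path: string
): { variantId?: string; originalPath: string } {
  const match = path.match(/^\/_variantId_([^/]+)(\/.*)$/);
  if (!match) {
    // Not a personalized rewrite; serve the default variant.
    return { originalPath: path };
  }
  return { variantId: match[1], originalPath: match[2] };
}
```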

These add-ons are included in your application when you create a Next.js project with the JSS initializer script; you can include multiple add-ons in a single application. In addition to the add-ons above, you also get a redirects.ts middleware plugin that enables redirects defined in Sitecore.

3. Next.js Server-Side Rendering (SSR) via getServerSideProps

JSS Next.js apps commonly employ a catch-all route (e.g., pages/[[...path]].tsx) to dynamically handle arbitrary Sitecore content paths. The getServerSideProps function, executed on the rendering host server for each request, is the primary mechanism for fetching the Sitecore layout data.

While Sitecore add-ons in middleware can pre-fetch data, getServerSideProps remains a critical point, especially if you’re not fully relying on middleware for all data, or if you need to merge data from multiple sources. The layoutData fetched here will already reflect any server-side personalization applied by Sitecore based on context passed from middleware.
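As a sketch of what the data fetch ultimately needs, here is a pure helper that turns the catch-all route segments into a Layout Service request URL. The endpoint shape follows the Layout Service REST convention, but treat the exact parameters as an assumption; in real code, prefer the JSS SDK's layout service classes:

```typescript
// Sketch: map [[...path]] catch-all segments to a Layout Service request.
// Host, site, and key values are placeholders for illustration.
function buildLayoutServiceUrl(
  host: string,            // CM or Experience Edge endpoint
  pathSegments: string[],  // from the [[...path]] catch-all route
  site: string,
  language: string,
  apiKey: string
): string {
  // An empty segment list resolves to the site's home route.
  const itemPath = '/' + pathSegments.join('/');
  const params = new URLSearchParams({
    item: itemPath,
    sc_site: site,
    sc_lang: language,
    sc_apikey: apiKey,
  });
  return `${host}/sitecore/api/layout/render/jss?${params.toString()}`;
}
```

getServerSideProps would await this request and pass the resulting layoutData into the page props.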

4. Sitecore Layout Service / Experience Edge GraphQL Processing

Upon receiving the data fetch request from the Next.js application, Sitecore’s backend performs a series of crucial operations as mentioned earlier – resolves the route, evaluates the context (such as language, site, and device), and assembles the appropriate renderings based on the presentation details. It then serializes this information – comprising route-level fields, component content, placeholder hierarchy, and any server-side personalization – into a structured JSON or GraphQL response. This response is returned to the rendering host, enabling the front-end application to construct the final HTML output using the data provided by Sitecore.

5. Rendering with the JSS Component Factory

Upon receiving the layoutData JSON, the JSS Next.js application initiates the client-side (or server-side during SSR) rendering process using its component factory. This factory is a crucial mapping mechanism that links Sitecore component names (as defined in the componentName field in the Layout Service JSON) to their corresponding React UI component implementations.
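A minimal sketch of the factory idea, with simplified types (JSS's real factory maps names to React components; the component names and render functions here are hypothetical):

```typescript
// Sketch of a component factory: mapping Layout Service componentName
// values to render functions. Types are simplified for illustration.
type RenderFn = (fields: Record<string, unknown>) => string;

const componentFactory = new Map<string, RenderFn>([
  ['Hero', (fields) => `<section class="hero">${fields['title']}</section>`],
  ['RichText', (fields) => `<div>${fields['text']}</div>`],
]);

function resolveComponent(componentName: string): RenderFn {
  // Fall back to a placeholder renderer for unknown components,
  // mirroring JSS's "missing component" behavior.
  return (
    componentFactory.get(componentName) ??
    (() => `<!-- Unknown component: ${componentName} -->`)
  );
}
```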

6. HTML Response to Browser

Next.js completes the server-side rendering process, transforming the React component tree into a fully formed HTML string. This HTML, along with any necessary CSS and JavaScript assets, is then sent as the HTTP response to the user’s browser. If personalization rules were applied by Sitecore, the returned HTML will reflect the specific component variants or content delivered for that particular user.

7. Client-Side Hydration

Once the browser receives the HTML, React takes over on the client-side, “hydrating” the static HTML by attaching event listeners and making the page interactive. This ensures a seamless transition from a server-rendered page to a fully client-side interactive single-page application (SPA).

Debugging Tips for Sitecore JSS Applications

When working with Sitecore JSS applications in headless setups, debugging becomes a crucial part of development and troubleshooting when components fail to render as expected, personalization rules seem to misfire, or data appears incorrect.

1. Enable Debug Logs in the JSS App

JSS uses a logging mechanism based on debug, an npm debugging module. The module provides a debug() function, which works like an enhanced version of console.log(). Unlike console.log(), you don’t need to remove or comment out debug() statements in production; you can simply toggle them on or off using environment variables whenever needed.

To activate detailed logs for specific parts of your JSS app, set the DEBUG environment variable.

  1. To output all available debug logs, set the DEBUG environment variable to sitecore-jss:*. The asterisk (*) acts as a wildcard: DEBUG=sitecore-jss:*
  2. To display log messages from specific categories only, such as those related to the Layout Service, set the variable like so: DEBUG=sitecore-jss:layout
  3. To exclude logs of specific categories, use the - prefix: DEBUG=sitecore-jss:*,-sitecore-jss:layout
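To make the pattern semantics concrete, here is a simplified re-implementation of how the debug module matches a namespace against a DEBUG value (wildcards via *, exclusions via the - prefix). This is for illustration only; the real module handles more edge cases:

```typescript
// Simplified sketch of the debug module's namespace matching.
function isDebugEnabled(namespace: string, debugEnv: string): boolean {
  // Escape regex metacharacters, then treat '*' as a wildcard.
  const escapeRegex = (s: string) => s.replace(/[.+?^${}()|[\]\\]/g, '\\$&');
  const toRegex = (glob: string) =>
    new RegExp('^' + glob.split('*').map(escapeRegex).join('.*') + '$');

  const patterns = debugEnv.split(',').map((p) => p.trim()).filter(Boolean);
  const skips = patterns
    .filter((p) => p.startsWith('-'))
    .map((p) => toRegex(p.slice(1)));
  const names = patterns.filter((p) => !p.startsWith('-')).map(toRegex);

  // Exclusions win over inclusions.
  if (skips.some((re) => re.test(namespace))) return false;
  return names.some((re) => re.test(namespace));
}
```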

Please refer to this document to get details of all the available debug logging.

2. Use Browser DevTools to Inspect Logs

If your app runs client-side and the debug package is configured, JSS logs will appear in the browser console.

To enable this manually in the browser, set this in the browser console:

localStorage.debug = 'sitecore-jss:*';

Then refresh the page. You’ll start seeing logs for:

  • Layout Service requests
  • Component-level rendering
  • Data fetching and personalization events

3. Leveraging Server-Side Logging within Next.js

Next.js’s server-side data fetching functions (getServerSideProps, getStaticProps) provide excellent points for detailed logging. Add console.log statements within these functions to capture details of the incoming request and the data it returns. When deployed to platforms like Vercel or Netlify, these console.log statements appear in your serverless function logs (e.g., Vercel Function Logs).

Additionally, when deploying your Sitecore headless application on Vercel, you can leverage Vercel’s built-in request logging and observability features. These tools allow you to track incoming requests, inspect headers, view response times, and monitor serverless function executions. This visibility can be especially helpful when debugging issues related to routing, personalization, or data fetching from the Layout Service or other backend APIs.

Wrapping It All Up: Why This Matters

Understanding how requests are processed in Sitecore Headless applications – especially when using JSS with Next.js – gives developers a strong foundation for building high-performing and maintainable solutions. By grasping the complete request lifecycle, from incoming requests to Layout Service responses and component rendering, you gain the clarity needed to architect more efficient and scalable applications. Coupled with effective debugging techniques and observability tools, this knowledge enables you to identify bottlenecks, troubleshoot issues faster, and deliver seamless user experiences. With Sitecore’s architecture already embracing composable and headless paradigms, understanding these fundamentals is essential for developers looking to build modern, future-ready digital experiences.

Sitecore Forms: Spam Prevention Tips https://blogs.perficient.com/2025/07/30/sitecore-forms-spam-prevention-tips/ https://blogs.perficient.com/2025/07/30/sitecore-forms-spam-prevention-tips/#respond Wed, 30 Jul 2025 05:19:06 +0000 https://blogs.perficient.com/?p=384959

Forms are an essential part of any website, enabling visitors to get in touch, subscribe, or request information. But when spam bots flood your Sitecore forms with junk submissions, it can quickly become a problem: polluting your data, wasting resources, and making genuine leads harder to manage. In this post, we’ll explore practical and effective tips to keep your Sitecore forms protected from spam, helping you maintain clean data and a smoother user experience.

Why Spam Submissions Are a Problem in Sitecore Forms

Spam submissions aren’t just annoying; they can seriously impact your website’s performance and your team’s productivity. When bots flood your forms with fake data, it becomes harder to identify genuine leads, analyze user feedback, or maintain clean records. Over time, this leads to wasted time sifting through irrelevant submissions and can even affect your marketing and sales efforts.

Common reasons why spam happens in Sitecore forms include:

  • Bots targeting publicly accessible forms.
  • Lack of proper validation or verification on form fields.
  • Forms without anti-spam mechanisms like CAPTCHA or honeypots.

Understanding these causes helps us implement the right solutions to keep our Sitecore forms secure and reliable.

To prevent spam form submissions in Sitecore, multiple approaches can be implemented. These include utilizing Sitecore’s native robot detection features, integrating CAPTCHAs to check for human users, applying anti-forgery tokens to secure forms against automated attacks, and adding honeypot fields to catch and block bots. Combining these techniques helps ensure robust protection against spam.

Here’s a breakdown of the techniques:

1. Sitecore’s Built-in Robot Detection

  • Sitecore Forms offers a built-in “Robot detection” feature that can be enabled or disabled.
  • This feature is found within the Form elements pane, on the Settings tab.
  • Enabling “Robot detection” in Sitecore automatically identifies and blocks automated form submissions, reducing spam and ensuring valid entries.


2. Implementing CAPTCHA

  • Google reCAPTCHA is a common and effective method to prevent spam.
  • CAPTCHA challenges require users to complete a task (e.g., selecting images, solving puzzles) that is easy for humans but difficult for bots.
  • Sitecore supports the integration of reCAPTCHA into forms to help prevent automated spam submissions.
  • To implement, you’ll need to obtain API keys (Site Key and Secret Key) from Google and configure them within your Sitecore instance.

3. Using Honeypot Fields

  • Honeypots are hidden fields on the form that are invisible to human users but are filled out by bots.
  • Submissions containing data in the honeypot field are blocked, as this is a clear sign of bot activity and helps prevent spam.
  • This technique involves adding an extra input field (e.g., type="text") to the form and hiding it with CSS.
  • The server-side code then checks if this field is populated during form submission.
  • Read a full implementation guide here.
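A minimal sketch of the server-side honeypot check, assuming a hypothetical hidden field named website:

```typescript
// Sketch of the server-side honeypot check: reject any submission where
// the hidden field carries a value. The field name is hypothetical.
interface FormSubmission {
  name: string;
  email: string;
  website?: string; // honeypot field, visually hidden via CSS
}

function isLikelyBot(submission: FormSubmission): boolean {
  // Humans never see the field, so any value means a bot filled it in.
  return Boolean(submission.website && submission.website.trim().length > 0);
}
```

The submit handler would silently discard (or flag) any submission for which isLikelyBot returns true.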

4. Anti-forgery tokens

  • Sitecore includes a built-in anti-forgery feature designed to help protect web applications from Cross-Site Request Forgery (CSRF) attacks. This security mechanism ensures that form submissions come from authenticated and trusted sources by validating anti-forgery tokens. These tokens are automatically generated and validated by Sitecore in conjunction with the ASP.NET anti-forgery framework, giving developers a simple, integrated way to secure forms and user interactions without extensive manual configuration.
  • You can verify anti-forgery configuration settings in /sitecore/admin/showconfig.aspx.


5. Server-Side Validation and Other Considerations

  • Validate all inputs: Make sure that all mandatory fields are validated on both the client and server sides.
  • Implement timeout checks: Track the time taken for form submission and flag submissions that occur too quickly as potential spam.
  • Consider custom solutions: Explore custom code or third-party modules that offer advanced spam detection and prevention techniques.
  • Regularly review submissions: Track form submissions for irregular patterns to detect potential spam sources.
  • Keep your contact lists clean: Regularly remove invalid email addresses and inactive users from your contact lists to minimize spam.
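The timeout check from the list above can be sketched as a small helper; the threshold value is an assumption to tune per form:

```typescript
// Sketch of a submission-time check: forms completed faster than a human
// plausibly could are flagged as potential spam. Threshold is an assumption.
const MIN_FILL_TIME_MS = 3000; // tune per form complexity

function isSuspiciouslyFast(renderedAtMs: number, submittedAtMs: number): boolean {
  return submittedAtMs - renderedAtMs < MIN_FILL_TIME_MS;
}
```

The render timestamp could be stored in a signed hidden field or server-side session when the form is served, then compared on submit.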

Conclusion

By combining these techniques, you can significantly reduce the amount of spam submissions reaching your Sitecore forms, improving the overall health and efficiency of your website and contact lists.

 

Optimize Sitecore Docker Instance: Increase Memory Limits https://blogs.perficient.com/2025/07/28/optimize-sitecore-docker-instance-increase-memory/ https://blogs.perficient.com/2025/07/28/optimize-sitecore-docker-instance-increase-memory/#respond Mon, 28 Jul 2025 07:39:39 +0000 https://blogs.perficient.com/?p=384666

Running a Sitecore Docker instance is a game-changer for developers. It streamlines deployments, accelerates local setup, and ensures consistency across environments. However, performance can suffer – even on high-end laptops – if Docker resources aren’t properly optimized, especially after a hardware upgrade.

I recently faced this exact issue. My Sitecore XP0 instance, running on Docker, became noticeably sluggish after I upgraded my laptop. Pages loaded slowly, publishing dragged on forever, and SQL queries timed out.

The good news? The fix was surprisingly simple: allocate more memory to the right containers using docker-compose.override.yml.

What Went Wrong?

After the upgrade, I noticed:

  • The Content Management (CM) UI was lagging.
  • Publishing and indexing took ages.
  • SQL queries and Sitecore services kept timing out.

At first, this was puzzling because my new laptop had better specs. However, I then realized that Docker was still running with outdated memory limits for containers. By default, these limits are often too low for heavy workloads, such as Sitecore.

Root Cause

Docker containers run with memory constraints either from:

  • docker-compose.override.yml
  • Docker Desktop global settings

When memory is too low, Sitecore roles such as CM and MSSQL can’t perform optimally. They need significant RAM for caching, pipelines, and database operations.

The Solution: Increase Memory in docker-compose.override.yml

To fix the issue, I updated the memory allocation for key containers (mssql and cm) in the docker-compose.override.yml file.

Here’s what I did:

Before

mssql:
  mem_limit: 2G

After

mssql:
  mem_limit: 4GB

cm:
  image: ${REGISTRY}${COMPOSE_PROJECT_NAME}-xp0-cm:${VERSION:-latest}
  build:
    context: ./build/cm
    args:
      BASE_IMAGE: ${SITECORE_DOCKER_REGISTRY}sitecore-xp0-cm:${SITECORE_VERSION}
      SPE_IMAGE: ${SITECORE_MODULE_REGISTRY}sitecore-spe-assets:${SPE_VERSION}
      SXA_IMAGE: ${SITECORE_MODULE_REGISTRY}sitecore-sxa-xp1-assets:${SXA_VERSION}
      TOOLING_IMAGE: ${SITECORE_TOOLS_REGISTRY}sitecore-docker-tools-assets:${TOOLS_VERSION}
      SOLUTION_IMAGE: ${REGISTRY}${COMPOSE_PROJECT_NAME}-solution:${VERSION:-latest}
      HORIZON_RESOURCES_IMAGE: ${SITECORE_MODULE_REGISTRY}horizon-integration-xp0-assets:${HORIZON_ASSET_VERSION}
  depends_on:
    - solution
  mem_limit: 8GB
  volumes:
    - ${LOCAL_DEPLOY_PATH}\platform:C:\deploy
    - ${LOCAL_DATA_PATH}\cm:C:\inetpub\wwwroot\App_Data\logs
    - ${HOST_LICENSE_FOLDER}:c:\license
    - ${LOCAL_ITEM_PATH}:c:\items-mounted

How to Apply the Changes

  1. Open docker-compose.override.yml.
  2. Locate the mssql and cm services.
  3. Update or add the mem_limit property:
    • mssql → 4GB
    • cm → 8GB
  4. Rebuild the containers:

     docker compose down
     docker compose up --build -d

  5. Check the updated limits:

     docker stats

Impact After Change

After increasing memory:

  • CM dashboard loaded significantly faster.
  • Publishing operations completed in less time.
  • SQL queries executed smoothly without timeouts.

Why It Works

Sitecore roles (especially CM) and SQL Server are memory-hungry. If Docker allocates too little memory:

  • Containers start swapping.
  • Performance tanks.
  • Operations fail under load.

By increasing memory:

  • CM handles ASP.NET, Sitecore pipelines, and caching more efficiently.
  • SQL Server caches queries better and reduces disk I/O.

Pro Tips

  • Ensure Docker Desktop or Docker Engine is configured with enough memory globally.
  • Avoid setting memory limits too high if your laptop has limited RAM.
  • If using multiple Sitecore roles, adjust memory allocation proportionally.

Final Thoughts

A simple tweak in docker-compose.override.yml can drastically improve your Sitecore Docker instance performance. If your Sitecore CM is sluggish or SQL queries are slow, try increasing the memory limit for critical containers.

Lessons from the Front: Configurable Workflow Rules for New Items in XM Cloud https://blogs.perficient.com/2025/07/25/lessons-from-the-front-configurable-workflow-rules-for-new-items-in-xm-cloud/ https://blogs.perficient.com/2025/07/25/lessons-from-the-front-configurable-workflow-rules-for-new-items-in-xm-cloud/#respond Fri, 25 Jul 2025 18:17:21 +0000 https://blogs.perficient.com/?p=384890

Intro 📖

In this post I’d like to share a workflow “attacher” implementation I built on a recent Sitecore XM Cloud project. The solution attaches workflows to new items based on a configurable list of template and path rules. It was fun to build and ended up involving a couple of Sitecore development mechanisms I hadn’t used in a while:

  • The venerable Sitecore configuration factory to declaratively define runtime objects
  • The newer pipeline processor invoked when items are created from a template: addFromTemplate

This implementation provided our client with a semi-extensible way of attaching workflows to items without writing any additional code themselves. “But, Nick, Sitecore already supports attaching workflows to items, why write any custom code to do this?” Great question 😉.

The Problem 🙅‍♂️

The go-to method of attaching workflows to new items in Sitecore is to set the workflow fields on Standard Values for the template(s) in question. For example, on a Page template in a headless site called Contoso (/sitecore/templates/Project/Contoso/Page/__Standard Values). This is documented in the Accelerate Cookbook for XM Cloud here. Each time a new page is created using that template, the workflow is associated to (and is usually started on) the new page.

Setting workflow Standard Values fields on site-specific or otherwise custom templates is one thing, but what about on out-of-the-box (OOTB) templates like media templates? On this particular project, there was a requirement to attach a custom workflow to any new versioned media items.

I didn’t want to edit Standard Values on any of the media templates that ship with Sitecore. However unlikely, those templates could change in a future Sitecore version. Also, worrying about configuring Sitecore to treat any new, custom media templates in the same way as the OOTB media templates just felt like a bridge too far.

I thought it would be better to “listen” for new media items being created and then check to see if a workflow should be attached to the new item or not. And, ideally, it would be configurable and would allow the client’s technical resources to enumerate one or more workflow “attachments,” each independently configurable to point to a specific workflow, one or more templates, and one or more paths.

The Solution ✅

🛑 Disclaimer: Okay, real talk for a second. Before I describe the solution, broadly speaking, developers should try to avoid customizing the XM Cloud content management (CM) instance altogether. This is briefly mentioned in the Accelerate Cookbook for XM Cloud here. The less custom code deployed to the CM the better; that means fewer points of failure, better performance, more expedient support ticket resolution, etc. As Robert Galanakis once wrote, “The fastest code is the code which does not run. The code easiest to maintain is the code that was never written.”

With that out of the way, in the real world of enterprise XM Cloud solutions, you may find yourself building customizations. In the case of this project, I didn’t want to commit to the added overhead and complexity of building out custom media templates, wiring them up in Sitecore, etc., so I instead built a configurable workflow attachment mechanism to allow technical resources to enumerate which workflows should start on which items based on the item’s template and some path filters.

addFromTemplate Pipeline Processor 🧑‍🔧

Assuming it’s enabled and not otherwise bypassed, the addFromTemplate pipeline processor is invoked when an item is created using a template, regardless of where or how the item was created. For example:

  • When a new page is created in the Content Editor
  • When a new data source item is created using Sitecore PowerShell Extensions
  • When an item is created as the result of a branch template
  • When a new media item is uploaded to the media library
  • When several media items are uploaded to the media library at the same time
  • …etc.

In years past, the item:added event handler may have been used in similar situations; however, it isn’t as robust and doesn’t fire as consistently given all the different ways an item can be created in Sitecore.

To implement an addFromTemplate pipeline processor, developers implement a class inheriting from AddFromTemplateProcessor (via Sitecore.Pipelines.ItemProvider.AddFromTemplate). Here’s the implementation for the workflow attacher:

using Contoso.Platform.Extensions;
using Sitecore.Pipelines.ItemProvider.AddFromTemplate;
...

namespace Contoso.Platform.Workflow
{
    public class AddFromTemplateGenericWorkflowAttacher : AddFromTemplateProcessor
    {
        private List<WorkflowAttachment> WorkflowAttachments = new List<WorkflowAttachment>();

        public void AddWorkflowAttachment(XmlNode node)
        {
            // invoked by the Sitecore configuration factory for each
            // configured <workflowAttachment> node; the constructor throws
            // if the node is null or invalid
            WorkflowAttachments.Add(new WorkflowAttachment(node));
        }

        public override void Process(AddFromTemplateArgs args)
        {
            try
            {
                Assert.ArgumentNotNull(args, nameof(args));

                if (args.Aborted || args.Destination.Database.Name != "master")
                {
                    return;
                }

                // default to previously resolved item, if available
                Item newItem = args.ProcessorItem?.InnerItem;

                // otherwise, create the item via the fallback provider
                if (newItem == null)
                {
                    try
                    {
                        Assert.IsNotNull(args.FallbackProvider, "Fallback provider is null");

                        // use the "base case" (the default implementation) to create the item
                        newItem = args.FallbackProvider.AddFromTemplate(args.ItemName, args.TemplateId, args.Destination, args.NewId);
                        if (newItem == null)
                        {
                            return;
                        }

                        // set the newly created item as the result and downstream processor item
                        args.ProcessorItem = args.Result = newItem;
                    }
                    catch (Exception ex)
                    {
                        Log.Error($"{nameof(AddFromTemplateGenericWorkflowAttacher)} failed. Removing partially created item, if it exists", ex, this);

                        var item = args.Destination.Database.GetItem(args.NewId);
                        item?.Delete();

                        throw;
                    }
                }

                // iterate through the configured workflow attachments
                foreach (var workflowAttachment in WorkflowAttachments)
                {
                    if (workflowAttachment.ShouldAttachToItem(newItem))
                    {
                        AttachAndStartWorkflow(newItem, workflowAttachment.WorkflowId);
                        // an item can only be in one workflow at a time
                        break;
                    }
                }
            }
            catch (Exception ex)
            {
                Log.Error($"There was a processing error in {nameof(AddFromTemplateGenericWorkflowAttacher)}.", ex, this);
            }
        }

        private void AttachAndStartWorkflow(Item item, string workflowId)
        {
            item.Editing.BeginEdit();

            // set default workflow
            item.Fields[Sitecore.FieldIDs.DefaultWorkflow].Value = workflowId;
            // set workflow
            item.Fields[Sitecore.FieldIDs.Workflow].Value = workflowId;
            // start workflow
            var workflow = item.Database.WorkflowProvider.GetWorkflow(workflowId);
            workflow.Start(item);

            item.Editing.EndEdit();
        }
    }
}

Notes:

  • The WorkflowAttachments member variable stores the list of workflow definitions (defined in configuration).
  • The AddWorkflowAttachment() method is invoked by the Sitecore configuration factory to add items to the WorkflowAttachments list.
  • Assuming the creation of the new item wasn’t aborted, the destination database is master, and the new item is not null, the processor iterates over the list of workflow attachments and, if the ShouldAttachToItem() extension method returns true, the AttachAndStartWorkflow() method is called.
  • The AttachAndStartWorkflow() method associates the workflow to the new item and starts the workflow on the item.
  • Only the first matching workflow attachment is considered—an item can only be in one (1) workflow at a time.

The implementation of the ShouldAttachToItem() extension method is as follows:

...

namespace Contoso.Platform
{
    public static class Extensions
    {
        ...
        public static bool ShouldAttachToItem(this WorkflowAttachment workflowAttachment, Item item)
        {
            if (item == null)
                return false;

            // check exclusion filters
            if (workflowAttachment.PathExclusionFilters.Any(exclusionFilter => item.Paths.FullPath.IndexOf(exclusionFilter, StringComparison.OrdinalIgnoreCase) > -1))
                return false;

            // check inclusion filters
            if (workflowAttachment.PathFilters.Any() &&
                !workflowAttachment.PathFilters.Any(includeFilter => item.Paths.FullPath.StartsWith(includeFilter, StringComparison.OrdinalIgnoreCase)))
                return false;

            var newItemTemplate = TemplateManager.GetTemplate(item);

            // check for template match or template inheritance
            return workflowAttachment.TemplateIds.Any(id => ID.TryParse(id, out ID templateId)
                && (templateId.Equals(item.TemplateID)
                    || newItemTemplate.InheritsFrom(templateId)));
        }
    }
    ...
}

Notes:

  • This extension method determines if the workflow should be attached to the new item or not based on the criteria in the workflow attachment object.
  • The method evaluates the path exclusion filters, path inclusion filters, and template ID matching or inheritance (in that order) to determine if the workflow should be attached to the item.

Here’s the WorkflowAttachment POCO that defines the workflow attachment object and facilitates the Sitecore configuration factory’s initialization of objects:

using Sitecore.Diagnostics;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml;

namespace Contoso.Platform.Workflow
{
    public class WorkflowAttachment
    {
        public string WorkflowId { get; set; }

        public List<string> TemplateIds { get; set; }

        public List<string> PathFilters { get; set; }

        public List<string> PathExclusionFilters { get; set; }

        public WorkflowAttachment(XmlNode workflowAttachmentNode)
        {
            TemplateIds = new List<string>();
            PathFilters = new List<string>();
            PathExclusionFilters = new List<string>();

            if (workflowAttachmentNode == null)
                throw new ArgumentNullException(nameof(workflowAttachmentNode),
                    $"The workflow attachment configuration node is null; unable to create {nameof(WorkflowAttachment)} object.");

            // parse nodes
            foreach (XmlNode childNode in workflowAttachmentNode.ChildNodes)
            {
                if (childNode.NodeType != XmlNodeType.Comment)
                    ParseNode(childNode);
            }

            // validate
            Assert.IsFalse(string.IsNullOrWhiteSpace(WorkflowId), $"{nameof(WorkflowId)} must not be null or whitespace.");
            Assert.IsTrue(TemplateIds.Any(), "The workflow attachment must enumerate at least one (1) template ID.");
        }

        private void ParseNode(XmlNode node)
        {
            switch (node.LocalName)
            {
                case "workflowId":
                    WorkflowId = node.InnerText;
                    break;
                case "templateIds":
                    foreach (XmlNode childNode in node.ChildNodes)
                    {
                        if (childNode.NodeType != XmlNodeType.Comment)
                            TemplateIds.Add(childNode.InnerText);
                    }
                    break;
                case "pathFilters":
                    foreach (XmlNode childNode in node.ChildNodes)
                    {
                        if (childNode.NodeType != XmlNodeType.Comment)
                            PathFilters.Add(childNode.InnerText);
                    }
                    break;
                case "pathExclusionFilters":
                    foreach (XmlNode childNode in node.ChildNodes)
                    {
                        if (childNode.NodeType != XmlNodeType.Comment)
                            PathExclusionFilters.Add(childNode.InnerText);
                    }
                    break;
                default:
                    break;
            }
        }
    }
}

Configuration ⚙

The following patch configuration file is defined to (a) wire up the addFromTemplate pipeline processor and (b) describe the various workflow attachments. In the sample file below, for brevity, only one (1) attachment is defined, but multiple attachments are supported.

<configuration>
  <sitecore>
  ...
  <pipelines>
    <group name="itemProvider" groupName="itemProvider">
      <pipelines>
        <addFromTemplate>
          <processor
            type="Contoso.Platform.Workflow.AddFromTemplateGenericWorkflowAttacher, Contoso.Platform"
            mode="on">
            <!-- Contoso Media Workflow attachment for versioned media items and media folders -->
            <workflowAttachmentDefinition hint="raw:AddWorkflowAttachment">
              <workflowAttachment>
                <!-- /sitecore/system/Workflows/Contoso Media Workflow -->
                <workflowId>{88839366-409A-4E57-86A4-167150ED5559}</workflowId>
                <templateIds>
                  <!-- /sitecore/templates/System/Media/Versioned/File -->
                  <templateId>{611933AC-CE0C-4DDC-9683-F830232DB150}</templateId>
                  <!-- /sitecore/templates/System/Media/Media folder -->
                  <templateId>{FE5DD826-48C6-436D-B87A-7C4210C7413B}</templateId>
                </templateIds>
                <pathFilters>
                  <!-- Contoso Media Library Folder -->
                  <pathFilter>/sitecore/media library/Project/Contoso</pathFilter>
                </pathFilters>
                <pathExclusionFilters>
                  <pathExclusionFilter>/sitecore/media library/System</pathExclusionFilter>
                  <pathExclusionFilter>/Sitemap</pathExclusionFilter>
                  <pathExclusionFilter>/Sitemaps</pathExclusionFilter>
                  <pathExclusionFilter>/System</pathExclusionFilter>
                  <pathExclusionFilter>/_System</pathExclusionFilter>
                </pathExclusionFilters>
              </workflowAttachment>
            </workflowAttachmentDefinition>
            ...
          </processor>
        </addFromTemplate>
      </pipelines>
    </group>
  </pipelines>
  ...
  </sitecore>
</configuration>

Notes:

  • Any number of <workflowAttachmentDefinition> elements can be defined.
  • Only one (1) <workflowId> should be defined per attachment.
  • The IDs listed within the <templateIds> element are the templates the new item must either be based on or inherit from.
  • The <pathFilters> element enumerates the paths under which the workflow attachment should apply. If the new item’s path does not fall under any of the listed paths, the workflow is not attached. This element can be omitted to skip the path inclusion check.
  • The <pathExclusionFilters> element enumerates the paths under which the workflow attachment should not apply. If the new item’s path contains any of these paths, then the workflow is not attached. This element can be omitted to forgo the path exclusion check. This filtering is useful to ignore new items under certain paths, e.g., under the Sitemap or Thumbnails media folders, both of which are media folders controlled by Sitecore.
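Taken together, the checks described in these notes might look roughly like the following inside the processor. This is a hedged sketch, not the actual implementation: the method name ShouldAttachWorkflow is hypothetical, the usual Sitecore.Data usings are assumed, and TemplateIds, PathFilters, and PathExclusionFilters are the collections populated by the configuration parsing shown above.

```csharp
// Hedged sketch of the filter evaluation described in the notes above.
// ShouldAttachWorkflow is a hypothetical name; the matching rules mirror
// the configuration semantics, not the exact shipped code.
private bool ShouldAttachWorkflow(Item item)
{
    // Template check: exact match or inheritance from any configured template ID.
    Template template = TemplateManager.GetTemplate(item);
    bool templateMatch = TemplateIds.Any(id =>
        template.ID == ID.Parse(id) || template.InheritsFrom(ID.Parse(id)));
    if (!templateMatch)
        return false;

    string path = item.Paths.FullPath;

    // Inclusion check: if path filters are configured, the item must live under one of them.
    if (PathFilters.Count > 0 &&
        !PathFilters.Any(f => path.StartsWith(f, StringComparison.OrdinalIgnoreCase)))
        return false;

    // Exclusion check: skip the attachment if the path contains any excluded segment.
    if (PathExclusionFilters.Any(f =>
        path.IndexOf(f, StringComparison.OrdinalIgnoreCase) >= 0))
        return false;

    return true;
}
```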

Closing Thoughts ☁

While certainly not a one-size-fits-all solution, this approach was a good fit for this particular project given the requirements and a general reluctance to modify Standard Values on OOTB Sitecore templates. Here are some pros and cons for this solution:

Pros ✅

  • Provides a semi-extensible, configuration-based way to start workflows on new items.
  • Adding, updating, or removing a workflow attachment requires a configuration change but not code change.
  • Allows for a template ID match or inheritance for more flexibility.
  • Allows for path inclusion and exclusion filtering for more granular control over where in the content tree the workflow attachment should (or should not) apply.

Cons ❌

  • Deploying custom server-side code to the XM Cloud CM instance isn’t great.
  • Arguably, creating custom templates inheriting from the OOTB templates in order to attach the workflows was the “more correct” play.
  • A deployment to change a configuration file could still require a code deployment—many (most?) pipelines don’t separate the two. If configuration changes are deployed, then so is the code (which, of course, necessitates additional testing).

Takeaways:

  • If you’re building an XM Cloud solution, do your best to avoid (or at least minimize) customizations to the CM.
  • If you need to attach workflows to specific project templates or custom templates, do so via Standard Values (and serialize the changes)—don’t bother with custom C# code.
  • If, for whatever reason, you need to resort to a custom solution, consider this one (or something like it).
  • Of course, this solution can be improved; to list a few possible improvements:
    • Pushing the configuration files into Sitecore to allow content authors to manage workflow attachment definitions. This would require a permissions pass and some governance to help prevent abuse and/or misconfigurations.
    • Add support to conditionally start the workflow; at present, the workflow always starts on new items.
    • Add logic to protect against workflow clobbering if, for whatever reason, the new item already has a workflow attached to it.
    • Improve path matching when applying the path inclusion and exclusion filters.
    • Logging improvements.
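For instance, the workflow-clobbering guard could be as small as a field check before the attachment happens. This is a hedged sketch under the assumption that it runs early in the processor, before the workflow field is written; FieldIDs.Workflow is Sitecore’s built-in Workflow field ID, and Log is Sitecore.Diagnostics.Log.

```csharp
// Hypothetical guard: bail out if the new item already carries a workflow,
// rather than clobbering the existing assignment.
if (!string.IsNullOrEmpty(item[FieldIDs.Workflow]))
{
    Log.Info($"Item {item.ID} already has a workflow attached; skipping.", this);
    return;
}
```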

Thanks for the read! 🙏

Resources 📚

]]>
https://blogs.perficient.com/2025/07/25/lessons-from-the-front-configurable-workflow-rules-for-new-items-in-xm-cloud/feed/ 0 384890
Sean Brundle Transforms Technical Expertise into Leadership that Empowers Team Success https://blogs.perficient.com/2025/07/16/sean-brundle-transforms-technical-expertise-into-leadership-that-empowers-team-success/ https://blogs.perficient.com/2025/07/16/sean-brundle-transforms-technical-expertise-into-leadership-that-empowers-team-success/#respond Wed, 16 Jul 2025 15:59:21 +0000 https://blogs.perficient.com/?p=384342

Meet Sean Brundle, Lead Technical Consultant, Sitecore 

Sean’s dedication to excellence and passion for continuous growth have defined his 10-year journey at Perficient, culminating in his recent promotion to Lead Technical Consultant. As a remarkable people leader, his commitment to professional development and mentoring his team across diverse technologies exemplifies Perficient’s promise to challenge, champion, and celebrate every colleague.  

Sean supports a broad range of clients within Perficient’s Customer Experience Platform (CXP) Managed Services department—primarily in the Sitecore business unit (BU)—by monitoring infrastructure applications, addressing performance issues, tracking latency, and maintaining robust security. Through these efforts, he delivers top-tier applications, expert recommendations, and security solutions that ensure clients maintain fast, secure websites with maximum reliability and minimal downtime. 

Continue reading to discover how Sean’s proactive client approach and obsession over outcomes have driven growth for Perficient’s DevOps practice.  

READ MORE: Perficient’s Customer Experience Expertise 

Sean’s Early Career Journey

Sean began his career in 2015 as an IT Specialist Intern at a marketing technology company and quickly advanced the following year to Project Specialist, building content in Sitecore and conducting in-depth content analysis.  

In 2017, he transitioned to the technical side of the business. As a Technical Quality Assurance (QA) Specialist, he sharpened his QA expertise by driving automation projects and pioneering numerous new processes within the department. Sean’s enthusiasm for software development and IT converged in 2018, when the company launched its DevOps department and entrusted him with the role of Junior DevOps Engineer. In this position, Sean played a key role in architecting many of the core standards and operational processes that continue to underpin Perficient’s DevOps practice today.  

Establishing a CXP Managed Services Department 

Sean’s passion for DevOps reached a defining moment in 2019 when he was promoted to DevOps Engineer. By anticipating client needs and fostering collaboration within his team, Sean led the creation and expansion of the Managed Services department, which later evolved into Perficient’s CXP Managed Services department. This milestone marked a significant turning point in Sean’s career and stands as one of his proudest achievements.

“While supporting our clients, we noticed recurring challenges with some hosting providers. These experiences highlighted an opportunity for us to create our own Managed Services department. I helped lead that initiative and created the different tools and processes we use today. We started off with two or three clients and now have over 10 different clients expanding to various platforms. It was a cool initiative that I was able to have a major hand in leading.”

Driven by a genuine commitment to people-first leadership, Sean’s forward-thinking approach has been pivotal in delivering greater value—empowering both colleagues and clients to accelerate growth and achieve meaningful results.  

“Our team is highly proactive. We focus on making recommendations and helping clients make their applications much better and faster before difficulties arise. We’ve spent a lot of time analyzing different client systems, and I’ve implemented specific processes and tooling that accelerate our work and identify issues the client might not even be aware of.”

Building Expertise, Strengthening Client Engagement, and Leading with Purpose

Sean joined Perficient as a Technical Consultant through an acquisition in 2020 and quickly advanced to Senior Technical Consultant by 2022. Now serving as a Lead Technical Consultant, he works closely with the Managed Services team to deliver proactive client support, leveraging his expertise to inform tooling and offerings that optimize application and infrastructure development.

Sean stays ahead in the fast-changing digital world by actively working with different platforms and technologies, continuously learning emerging best practices, and reading up on the latest innovations. Motivated by a results-driven mindset and devotion to client success, he has built lasting relationships through consistent delivery excellence and continues to shatter boundaries with cutting-edge solutions.  

“Identifying performance gaps, presenting those insights to the client, implementing solutions, and then demonstrating the impressive speed improvements we’ve achieved—that’s incredibly rewarding. Seeing the client’s excitement energizes my team and me, and it motivates me to keep enhancing their offerings. This ongoing effort helps strengthen client relationships.”  

Sean’s people-first leadership shines through his collaborative work with colleagues and clients. Anchored by more than a decade of IT industry experience, he has become a trusted advisor and influential mentor. 

Empowering Teams and Clients Through Strategic Leadership  

Sean has continuously advanced his technical knowledge by working with diverse clients and technologies, while also expanding his credentials with certifications such as Microsoft Azure Administrator Associate, Microsoft Azure DevOps Engineer Expert, and AWS Certified Solutions Architect.  

READ MORE: Accelerating Professional Growth Through Certifications  

Alongside his professional development, Sean has deepened his client engagement and relationship-building skills. He leads quarterly, in-depth reviews of his team’s progress and client successes, delivering strategic presentations that highlight untapped tooling opportunities—driving measurable value and strengthening long-term client relationships. Additionally, Sean completed Perficient’s Consultant Curriculum program, where he acquired strategies for effectively identifying and addressing client needs. 

READ MORE: Learn About Perficient’s Award-Winning Growth for Everyone Programming 

Sean has developed outstanding leadership skills that inspire collaboration, enhance team dynamics, and deliver impactful results. He embodies true team spirit by empowering individuals to grow and excel, driving collective success with passion and purpose.

“At Perficient, my role has given me a lot of opportunities to lead and mentor other engineers. I’ve really taken on that role and enjoy uplifting team members. Sharing knowledge, helping to support them, and seeing them grow has been really exciting. Focusing on helping my team rather than just myself benefits the entire team and department.”

In mentoring his colleagues, Sean champions open communication as the foundation for building trust, setting clear expectations, and fostering meaningful learning experiences. 

“I think that listening and having clear communication has been valuable in my leadership growth. When assigning a task to someone you’re mentoring, it’s important to follow up, but not be too strict. Providing clear steps, setting actionable goals within a certain timeframe, maintaining consistent communication, and helping them when they get stuck makes a difference. I’ve noticed many team members respond well to this approach.”  

Sean’s empathetic leadership naturally promotes transparency across teams and time zones, driving seamless global collaboration. He works regularly with Perficient’s Nagpur office to monitor different applications and infrastructures, gaining cutting-edge insights from diverse multicultural perspectives.  

“There are times when I’m working with a team of colleagues who all come from different cultural backgrounds. This influences how they communicate or approach certain tasks. Being able to adapt to these differences and learn from them has significantly helped my growth as a leader.”

LEARN MORE: Perficient’s Global Footprint Enables Genuine Connection  

Unlocking Potential Through Shared Knowledge and Cross-Functional Collaboration 

Within Perficient’s Sitecore BU, Sean’s team fosters continuous learning through a dedicated Confluence platform for role-specific skills and monthly meetings to discuss new technologies and processes. Sean takes great pride in Perficient’s broader culture of cross-functional collaboration. 

“We have a lot of different BUs at Perficient, and they’re able to work together to support each other. Perficient colleagues are great about coming together as a team, supporting everyone, and enabling knowledge sharing that benefits both our team and our clients. I think it’s a big benefit with Perficient.”

Charting New Waters Through Relentless Innovation 

Sean shatters boundaries and drives excellence through continuous innovation, fueling both individual and team success. 

“I try to push my team and myself to constantly seek better ways to serve our clients and improve ourselves—exploring better approaches, speaking up, and testing ideas we haven’t tried before. I strive to do this as often as I can.”  

While Sean champions continuous growth, he also emphasizes the value of experiential learning and resilience in the face of setbacks.

“Do not be afraid to fail. If you take on a new role or responsibility—even if you make mistakes— you will grow and learn. I’m lucky to have a lot of responsibility, and there are times when I have to learn from my mistakes, but it makes me much stronger and a better consultant engineer.” 

Just as Sean explores new technologies with curiosity and determination, he embraces the wonders of nature—reaching new heights through rock climbing and kayaking while cherishing time outdoors with his son. 

“Outside of work, I try to spend as much time with my son as I can. He’s almost 2 years old and loves to play outside, so we try to spend as much time outdoors as possible. I go rock climbing occasionally, and I also love being on the water. If I get the chance, I try to take out our kayaks. More than anything, I focus on spending quality time with my son and enjoying the outdoors as much as we can together.” 

MORE ON GROWTH FOR EVERYONE  

Perficient continually looks for ways to champion and challenge our workforce, encourage personal and professional growth, and celebrate the unique culture created by the ambitious, brilliant, people-oriented team we have cultivated. These are their stories. 

Learn more about what it’s like to work at Perficient on our Careers page. Connect with us on  LinkedIn here. 

]]>
https://blogs.perficient.com/2025/07/16/sean-brundle-transforms-technical-expertise-into-leadership-that-empowers-team-success/feed/ 0 384342
Creating a Brand Kit in Stream: Why It Matters and How It Helps Organizations https://blogs.perficient.com/2025/07/15/brandkit-sitecore-stream/ https://blogs.perficient.com/2025/07/15/brandkit-sitecore-stream/#respond Tue, 15 Jul 2025 09:24:10 +0000 https://blogs.perficient.com/?p=384493

In today’s digital-first world, brand consistency is more than a visual guideline; it’s a strategic asset. As teams scale and content demands grow, a centralized Brand Kit becomes essential. If you’re using Sitecore Stream, building a Brand Kit is not just useful, it’s transformational.

In my previous post, I explored Sitecore Stream, highlighting how it reimagines modern marketing by bringing together copilots, agentic AI, and real-time brand intelligence to supercharge content operations. We saw how Stream doesn’t just assist; it acts with purpose, context, and alignment to your brand.

Now, we take a deeper dive into one of the most foundational elements that makes that possible: Brand Kit.

In this post, we’ll cover:

  • What a Brand Kit is inside Stream, and why it matters
  • How to build one – from brand documents to structured sections
  • How AI copilots use it to drive consistent, on-brand content creation

Let’s get into how your brand knowledge can become your brand’s superpower.

 

What Is a Brand Kit?

A Brand Kit is a centralized collection of brand-defining assets, guidelines, tone, and messaging rules.

Each brand kit section represents a subset of your brand knowledge.
A Brand Kit includes information about your brand, such as:

  • Logo files and usage rules
  • Typography and color palettes
  • Brand voice and tone guidelines
  • Brand-specific imagery or templates
  • Do’s and Don’ts of brand usage
  • Compliance or legal notes

Think of it as your brand’s source of truth, accessible by all stakeholders – designers, marketers, writers, developers, and AI assistants.

 

Why Stream Needs a Brand Kit

Stream is a platform where content flows – from ideas to execution. Without a Brand Kit:

  • Writers may use inconsistent tone or terminology based on their own understanding of the brand.
  • Designers may reinvent the wheel with each new visual.
  • Copilots might generate off-brand content.
  • Cross-functional teams lose time clarifying brand basics.

With a Brand Kit in place, Stream becomes smarter, faster, and more aligned with your organization’s identity.

 

How a Brand Kit Helps the Organization

Here’s how Stream and your Brand Kit work together to elevate content workflows:

  •  Faster onboarding: New team members instantly understand brand expectations.
  •  Accurate content creation: Content writers, designers, and strategists reference guidelines directly from the platform.
  •  AI-assisted content stays on-brand: Stream uses your brand data to personalize AI responses for content creation and editing.
  •  Content reuse and updates become seamless with analysis: Brand messaging stays consistent across landing pages, emails, and campaigns. You can also A/B test Brand-Kit-generated content against manually authored content.

Now that we understand what a Brand Kit is and why it’s essential, let’s walk through how to create one effectively within Sitecore Stream.

 

Uploading Brand Documents = Creating Brand Knowledge

To create a Brand Kit, you begin by uploading and organizing your brand data. This includes documents, guidelines, assets, and other foundational materials that define your brand identity.

The screenshot below shows an uploaded brand document, the button to process it, and an option to upload another document.

Upload Doc

 

In Stream, when you upload brand-specific documents, they don’t just sit there. The platform:

  • Analyzes the data
  • Transforms it into AI-usable data by creating brand knowledge
  • Makes this knowledge accessible across brainstorming, content creation, and AI prompts

Process

In short, here’s how the process works:

  • Create a Brand Kit – Start with a blank template containing key sections like Brand Context, Tone of Voice, and Global Goals.
  • Upload Brand Documents – Add materials like brand books, visual and style guides to serve as the source of your brand knowledge.
  • Process Content – Click Process changes to begin ingestion. Stream analyzes the documents, breaks them into knowledge chunks, and stores them.
  • Auto-Fill Sections – Stream uses built-in AI prompts to populate each section with relevant content from your documents.

 

Brand Kit Sections: Structured for Versatility

Once your Brand Kit is created and the uploaded documents are processed, Stream automatically generates key sections. Each section serves a specific purpose and is built from well-structured content extracted from your brand documents. These are essentially organized chunks of brand knowledge, formatted for easy use across your content workflows. The default sections are as follows:

  • Global Goals – Your brand’s core mission and values.
  • Brand Context – Purpose, positioning, and brand values.
  • Dos and Don’ts – Content rules to stay on-brand.
  • Tone of Voice – Defines your brand’s personality.
  • Checklist – Quick reference for brand alignment.
  • Grammar Guidelines – Writing style and tone rules.
  • Visual Guidelines – Imagery, icons, and layout specs.
  • Image Style – Color, emotion, and visual feel.

Each section holds detailed, structured brand information that can be updated manually or enriched using your existing brand knowledge. If you prefer to control the content manually and prevent it from being overwritten during document processing, you can mark the section as Non-AI Editable.

Stream allows you to add new subsections or customize existing ones to adapt to your evolving brand needs. For example, you might add a “Localization Rules” section when expanding to global markets, or a “Crisis Communication” section to support PR strategies.

When creating a new subsection, you’ll provide a name and an intent: a background prompt that guides the AI to extract relevant information from your uploaded brand documents and populate the section accurately.

The screenshots below show the sections created after document processing, along with example subsections:

Brand Kit Sections

Section Details

AI + Brand Kit = Smarter Content, Automatically

Now that we’ve created the Brand Kit, let’s see how the AI in Stream uses it to:

  • Suggest on-brand headlines or social posts
  • Flag content that strays from brand guidelines
  • Assist in repurposing older content using an updated brand tone

It’s like having a brand-savvy assistant embedded in your workflow.

 

Brand assist in Sitecore Stream

Once your Brand Kit is ready, you can use the Brand Assistant to generate and manage content aligned with your brand using simple prompts.

Key uses:

  • Ask brand-related questions
  • Access brand guidelines
  • Generate on-brand content
  • Draft briefs and long-form content
  • Explore ideas and marketing insights

It uses agentic AI, with specialized agents that ensure every output reflects your brand accurately.

When a user enters a prompt in the Brand Assistant, whether it’s a question or an instruction, the copilot automatically includes information from the Brand Context section of the Brand Kit. It then evaluates whether this context alone is enough to generate a response. If it is, a direct reply is provided. If not, specialized AI agents are activated to gather and organize additional information.

These include a Search Agent (to pull data from brand knowledge or the web), a Brief Agent (for campaign or creative brief requests), and a Summary Agent (to condense information into a clear, relevant response).

I clicked on the Brand Assistant tab, selected my Brand Kit, and asked a question; the response I got was spot on! It perfectly aligned with the brand documents I had uploaded and even suggested a target consumer based on that information. I was super impressed with how well it worked!

Brandkit Selection In Assist

Brainstorm

 

Now it’s time to see how the Brand Kit helps me generate content in XM Cloud or Experience Platform. To do that, connect your XM Cloud website to Sitecore Stream so the copilots can access the Brand Kit.

I simply went to Site Settings, found the Stream section, and selected my Stream instance. That was it: I was all set to use the Brand Kit.

Brandkit Setting In Site

Now, when I open the page editor and click on Optimize, I see an additional option with my Brand Kit name. Once selected, I can either draft new text or optimize existing content.

The copilot leverages the Brand Kit sections to generate content that’s consistent, aligned with our brand voice, and ready to use.

For example, I asked the brand kit to suggest campaign content ideas and it provided exactly the kind of guidance I needed.

Campaign Page

 

Conclusion

Building and maintaining a Brand Kit in Stream isn’t just about visual consistency; it’s about scaling brand intelligence across the entire content lifecycle. When your Brand Kit is connected to the tools where work happens, everyone from AI to human collaborators works with the same understanding of what your brand stands for.

]]>
https://blogs.perficient.com/2025/07/15/brandkit-sitecore-stream/feed/ 0 384493