Experience Management Articles / Blogs / Perficient
https://blogs.perficient.com/category/services/customer-experience-design/digital-experience/experience-management/

Sitecore Content SDK: What It Offers and Why It Matters
Wed, 19 Nov 2025
https://blogs.perficient.com/2025/11/19/sitecore-content-sdk-what-it-offers-and-why-it-matters/

Sitecore has introduced the Content SDK for XM Cloud (now Sitecore AI) to streamline fetching content and rendering it in modern JavaScript front-end applications. If you’re building a website on Sitecore AI, the new Content SDK is the modern, recommended tool for your development team.

Think of it as a specialized, lightweight toolkit built for one specific job: getting content from Sitecore AI and displaying it on your modern frontend application (like a site built with Next.js).

Because it’s purpose-built for Sitecore AI, it’s fast, efficient, and doesn’t include a lot of extra baggage. It focuses purely on the essential “headless” task of fetching and rendering content.

What About the JSS SDK?
This is the original toolkit Sitecore created for headless development.

The key difference is that the JSS SDK was designed to be a one-size-fits-all solution. It had to support both the new, headless Sitecore AI and Sitecore’s older, all-in-one platform, Sitecore XP/XM.

To do this, it had to include extra code and dependencies to support older features, like the “Experience Editor”. This makes the JSS SDK “bulkier” and more complex. If you’re only using Sitecore AI, you’re carrying around a lot of extra weight you simply don’t need.

The Sitecore Content SDK is the modern, purpose-built toolkit for developers using Sitecore AI, providing seamless, out-of-the-box integration with the platform’s most powerful capabilities. This includes seamless visual editing that empowers marketers to build and edit pages in real-time, as well as built-in hooks for personalization and analytics that simplify the delivery and tracking of targeted user experiences. For developers, it provides GraphQL utilities to streamline data fetching and is deeply optimized for Next.js, enabling high-performance features like server-side rendering. Furthermore, with the recent introduction of App Router support (in beta), the SDK is evolving to give developers even more granular control over performance, SEO, bundle sizes, and security through a more modern, modular code structure.

What does the Content SDK offer?

1) App Router support (v1.2)

With version 1.2.0, the Sitecore Content SDK introduces App Router support in beta. While the full-fledged stable release is expected soon, developers can already start exploring its benefits and workflow with version 1.2.
This isn’t just a minor update; it’s a huge step toward making your front-end development more flexible and highly optimized.

Why should you care?
The App Router introduces a fantastic change to your starter application’s code structure and how routing works. Everything becomes more modular and declarative, aligning perfectly with modern architecture practices. This means defining routes and layouts is cleaner, content fetching is neatly separated from rendering, and integrating complex Next.js features like dynamic routes is easier than ever. Ultimately, this shift makes your applications much simpler to scale and maintain as they grow on Sitecore AI.

Performance: Developers can fine-tune route handling with nested layouts and more aggressive and granular caching to seriously boost overall performance, leading to faster load times.

Bundle Size: Bundle sizes shrink because the App Router uses React Server Components (RSC) to render components. Components can be fetched and rendered on the server without shipping their static files in the client bundle.

Security: It helps with security by giving improved control over access to specific routes and content.

With the starter kit applications, this is what the App Router routing structure looks like:

(Screenshot: App Router folder structure in the starter application)

 

2) New configs – sitecore.config.ts & sitecore.cli.config.ts

The sitecore.config.ts file, located in the root of your application, acts as the central configuration point for Content SDK projects. It replaces the older temp/config file used by the JSS SDK. It contains properties that can be used throughout the application simply by importing the file, including important settings like the site name, defaultLanguage, and Edge properties such as the context ID. Starter templates include a very lightweight version containing only the mandatory parameters needed to get started, and developers can easily extend this file as the project grows and requires more specific settings.

Key Aspects:

Environment Variable Support: This file is designed for deployment flexibility using a layered approach. Any configuration property present in this file can be sourced in three ways, listed in order of priority:

  1. Explicitly defined in the configuration file itself.
  2. Fallback to a corresponding environment variable (ideal for deployment pipelines).
  3. Use a default value if neither of the above is provided.

This layered approach ensures flexibility and simplifies deployment across environments.
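The three-tier resolution order can be illustrated with a small, framework-free helper. This is a hypothetical sketch of the behavior described above, not the SDK's internal code; the function and variable names are invented for illustration.

```typescript
// Illustrative sketch of the layered fallback: explicit config value,
// then environment variable, then default.
function resolveSetting(
  explicit: string | undefined,
  env: Record<string, string | undefined>,
  envVarName: string,
  defaultValue: string
): string {
  if (explicit !== undefined) return explicit; // 1. explicitly defined in the config file
  const fromEnv = env[envVarName];
  if (fromEnv !== undefined) return fromEnv;   // 2. fallback to an environment variable
  return defaultValue;                         // 3. default value
}

// Example: resolving defaultLanguage in three deployment scenarios.
const exampleEnv = { SITECORE_DEFAULT_LANGUAGE: 'fr-FR' };
console.log(resolveSetting('de-DE', exampleEnv, 'SITECORE_DEFAULT_LANGUAGE', 'en')); // 'de-DE'
console.log(resolveSetting(undefined, exampleEnv, 'SITECORE_DEFAULT_LANGUAGE', 'en')); // 'fr-FR'
console.log(resolveSetting(undefined, {}, 'SITECORE_DEFAULT_LANGUAGE', 'en')); // 'en'
```

An explicitly defined value always wins, which keeps local overrides predictable while letting deployment pipelines inject settings via environment variables.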

 

The sitecore.cli.config.ts file is dedicated to defining and configuring the commands and scripts used during the development and build phases of a Content SDK project.

Key Aspects:

CLI Command Configuration: It dictates the commands that execute as part of the build process, such as generateMetadata() and generateSites(), which are essential for generating Sitecore-related data and metadata for the front-end.

Component Map Generation: This file manages the configuration for automatic component map generation. This process is crucial for telling Sitecore how your front-end components map to the content structure, allowing you to specify file paths to scan and define any files or folders to exclude. This is explored further below.

Customization of Build Process: It allows developers to customize the Content SDK’s standard build process by adding their own custom commands or scripts to be executed during compilation.

While sitecore.config.ts handles the application’s runtime settings (like connection details to Sitecore AI), sitecore.cli.config.ts works in conjunction to handle the development-time configuration required to prepare the application for deployment.

(Screenshot: sitecore.cli.config.ts example)

 

3) Component map

In Sitecore Content SDK-based applications, every custom component must be manually registered in the .sitecore/component-map.ts file located in the app’s root. The component map is a registry that explicitly links Sitecore renderings to their corresponding frontend component implementations: it tells the Content SDK which frontend component to render for each rendering received from Sitecore. When a rendering is added to a page via its presentation details, the component map determines which frontend component is rendered in its place.

Key Aspects:

Unlike JSS implementations, which automatically map components, the Content SDK’s explicit component map enables better tree-shaking. Your final production bundle will only include the components you have actually registered and used, resulting in smaller, more efficient application sizes.

This is what it looks like (once you start creating custom components, you have to add each component’s name here to register it):

(Screenshot: component registration in .sitecore/component-map.ts)
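The registry idea can be sketched in a few lines of plain TypeScript. This is an illustration of the concept, not the SDK's actual file format; the component names and render functions are invented for the example.

```typescript
// Illustrative sketch: an explicit map from Sitecore rendering names
// to the frontend components that render them.
type ComponentProps = Record<string, unknown>;
type FrontendComponent = (props: ComponentProps) => string;

const Hero: FrontendComponent = (props) => `<section class="hero">${props.title ?? ''}</section>`;
const PromoCard: FrontendComponent = (props) => `<div class="promo">${props.text ?? ''}</div>`;

// Only components registered here are referenced by the app at all,
// which is what makes the explicit map friendly to tree-shaking.
const componentMap = new Map<string, FrontendComponent>([
  ['Hero', Hero],
  ['PromoCard', PromoCard],
]);

function renderFromSitecore(renderingName: string, props: ComponentProps): string {
  const component = componentMap.get(renderingName);
  if (!component) return `<!-- no component registered for "${renderingName}" -->`;
  return component(props);
}

console.log(renderFromSitecore('Hero', { title: 'Welcome' })); // <section class="hero">Welcome</section>
```

An unregistered rendering simply falls through to a placeholder, mirroring how a missing map entry means the component never ships in the bundle.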

 

4) Import map

The import map is a tool used specifically by the Content SDK’s code generation feature. It manages the import paths of components that are generated or used during the build process. It acts as a guide for the code generation engine, ensuring that any new code it creates correctly references your existing components.
Where it is: It is a generated file, typically found at ./sitecore/import-map.ts, that serves as an internal manifest for the build process. You generally do not need to edit this file manually.
It simplifies the logic of code generation, guaranteeing that any newly created code correctly and consistently references your existing component modules.

The import map generation process is configurable via the sitecore.cli.config.ts file. This allows developers to customize the directories scanned for components.

 

5) defineMiddleware in the Sitecore Content SDK

defineMiddleware is a utility for composing a middleware chain in your Next.js app. It gives you a clean, declarative way to handle cross-cutting concerns like multi-site routing, personalization, redirects, and security all in one place. This centralization aligns perfectly with modern best practices for building scalable, maintainable applications.

The JSS SDK leverages a “middleware plugin” pattern. That system was effective for its time, allowing logic to be separated into distinct files, but the separation often required developers to manually manage the ordering and chaining of multiple files, which could become complex and less transparent as the application grew. The Content SDK streamlines this by moving the composition logic into a single, highly readable utility that can easily be customized by extending the middleware.
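The chaining pattern itself can be sketched framework-free. The following is an illustration of middleware composition in general, not the Content SDK's actual defineMiddleware signature; the context shape and middleware names are assumptions for the example.

```typescript
// A middleware inspects or modifies the request context, then passes control on.
type Context = { path: string; redirectTo?: string; headers: Record<string, string> };
type Middleware = (ctx: Context, next: () => void) => void;

// Compose an ordered list of middlewares into a single handler.
function composeMiddleware(middlewares: Middleware[]): (ctx: Context) => void {
  return (ctx: Context) => {
    const run = (i: number): void => {
      if (i < middlewares.length) middlewares[i](ctx, () => run(i + 1));
    };
    run(0);
  };
}

// Two example concerns: redirects and a security header.
const redirects: Middleware = (ctx, next) => {
  if (ctx.path === '/old-page') ctx.redirectTo = '/new-page';
  next();
};
const securityHeaders: Middleware = (ctx, next) => {
  ctx.headers['X-Frame-Options'] = 'DENY';
  next();
};

const handleRequest = composeMiddleware([redirects, securityHeaders]);

const ctx: Context = { path: '/old-page', headers: {} };
handleRequest(ctx);
console.log(ctx.redirectTo); // '/new-page'
```

Declaring the chain in one ordered array is the readability win: the execution order is visible at a glance instead of being spread across plugin files.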

(Screenshot: middleware composition with defineMiddleware)

 

6) Debug Logging in Sitecore Content SDK

Debug logging helps you see what the SDK is doing under the hood. It is super useful for troubleshooting layout/dictionary fetches, multisite routing, redirects, personalization, and more. The Content SDK uses the standard DEBUG environment variable pattern to enable logging by namespace, so you can selectively turn on logging for only the areas you need to troubleshoot, such as content-sdk:layout (for layout service details) or content-sdk:dictionary (for dictionary service details).
For all available namespaces and parameters, refer to the Sitecore documentation: https://doc.sitecore.com/sai/en/developers/content-sdk/debug-logging-in-content-sdk-apps.html#namespaces
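The DEBUG namespace convention can be demonstrated with a simplified matcher. This is an illustrative reimplementation of the pattern, not the SDK's own logger; it assumes the common convention of comma-separated namespaces with a trailing `*` wildcard.

```typescript
// Decide whether a namespace is enabled by a DEBUG-style value,
// e.g. DEBUG="content-sdk:layout" or DEBUG="content-sdk:*".
function isDebugEnabled(debugValue: string, namespace: string): boolean {
  return debugValue
    .split(',')
    .map((pattern) => pattern.trim())
    .filter((pattern) => pattern.length > 0)
    .some((pattern) =>
      pattern.endsWith('*')
        ? namespace.startsWith(pattern.slice(0, -1)) // wildcard prefix match
        : pattern === namespace                      // exact match
    );
}

console.log(isDebugEnabled('content-sdk:layout', 'content-sdk:layout'));     // true
console.log(isDebugEnabled('content-sdk:*', 'content-sdk:dictionary'));      // true
console.log(isDebugEnabled('content-sdk:layout', 'content-sdk:dictionary')); // false
```

Setting DEBUG=content-sdk:* would therefore enable every Content SDK namespace at once, while a specific value keeps the log output focused.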

 

7) Editing & Preview

In the context of Sitecore’s development platform, editing and preview render optimization with the Content SDK involves leveraging middleware, architecture, and framework-specific features to improve the performance of rendering content in editing and preview modes. The primary goal is to provide a fast and responsive editing experience for marketers using tools like Sitecore AI Pages and the Design Library.

EditingRenderMiddleware: The Content SDK for Next.js includes optimized middleware for editing scenarios. Instead of a multi-step process involving redirects, the optimized middleware performs an internal, server-side request to return the HTML directly. This reduces overhead and speeds up rendering significantly.
This feature works out of the box in most environments: local containers, Vercel/Netlify, and Sitecore AI (which defaults to localhost as configured).

For custom setups, override the internal host with: SITECORE_INTERNAL_EDITING_HOST_URL=https://host
This leverages an integration with XM Cloud/Sitecore AI Pages for visual editing and testing of components.

 

8) SitecoreClient

The SitecoreClient class in the Sitecore Content SDK is a centralized data-fetching service that simplifies communication with your Sitecore content backend, typically Experience Edge or the preview endpoint, via GraphQL.
Instead of calling multiple services separately, SitecoreClient lets you make one organized request to fetch everything needed for a page: layout, dictionary, redirects, personalization, and more.

Key Aspect:

Unified API: One client to access layout, dictionary, sitemap, robots.txt, redirects, error pages, multi-site, and personalization.
To understand all key methods supported, please refer to sitecore documentation: https://doc.sitecore.com/sai/en/developers/content-sdk/the-sitecoreclient-api.html#key-methods
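The "one organized request" idea can be sketched generically. The helper below is hypothetical: it is not the SitecoreClient API, and the function names and data shapes are invented to illustrate aggregating several fetches behind a single call site.

```typescript
// Everything a page needs, gathered in one place (illustrative shape).
interface PageData {
  layout: string;
  dictionary: Record<string, string>;
  redirects: Array<{ from: string; to: string }>;
}

// One entry point that fans out to the individual services in parallel.
async function fetchPageData(
  fetchLayout: (path: string) => Promise<string>,
  fetchDictionary: () => Promise<Record<string, string>>,
  fetchRedirects: () => Promise<Array<{ from: string; to: string }>>,
  path: string
): Promise<PageData> {
  const [layout, dictionary, redirects] = await Promise.all([
    fetchLayout(path),
    fetchDictionary(),
    fetchRedirects(),
  ]);
  return { layout, dictionary, redirects };
}
```

The consuming page code then depends on a single call rather than juggling three services, which is the convenience the unified API provides.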

(Screenshot: SitecoreClient key methods)

9) Built-In Capabilities for Modern Web Experiences

GraphQL Utilities: Easily fetch content, layout, dictionary entries, and site info from Sitecore AI’s Edge and Preview endpoints.
Personalization & A/B/n Testing: Deploy multiple page or component variants to different audience segments (e.g., by time zone or language) with no custom code.
Multi-site Support: Seamlessly manage and serve content across multiple independent sites from a single Sitecore AI instance.
Analytics & Event Tracking: Integrated support via the Sitecore Cloud SDK for capturing user behavior and performance metrics.
Framework-Specific Features: Includes Next.js locale-based routing for internationalization, and supports both SSR and SSG for flexible rendering strategies.

 

10) Cursor for AI development

Starting with Content SDK version 1.1, Sitecore has provided comprehensive “Cursor rules” to facilitate AI-powered development.
The integration gives Cursor sufficient context about the Content SDK ecosystem and Sitecore development patterns, and this set of rules and context helps accelerate development. The Cursor rules ship with the Content SDK starter application under the .cursor folder. This enables the AI to better assist developers with tasks specific to building headless Sitecore components, improving development consistency and speed: by providing a few commands in generic terms, developers get code that follows the same established patterns. The screenshot below shows an example for a Hero component, which can act as a pattern for Cursor to create similar components.

(Screenshot: Cursor rules generating a Hero component)

 

11) Starter Templates and Example Applications

To accelerate development and reduce setup time, the Sitecore Content SDK includes a set of starter templates and example applications designed for different use cases and development styles.
The SDK provides a Next.js JavaScript starter template that enables rapid integration with Sitecore AI. This template is optimized for performance, scalability, and best practices in modern front-end development.
Starter applications in the examples folder:

basic-nextjs – A minimal Next.js application showcasing how to fetch and render content from Sitecore AI using the Content SDK. Ideal for SSR/SSG use cases and developers looking to build scalable, production-ready apps.

basic-spa – A single-page application (SPA) example that demonstrates client-side rendering and dynamic content loading. Useful for lightweight apps or scenarios where SSR is not required.

Other demo sites that showcase Sitecore AI capabilities using the Content SDK:

kit-nextjs-article-starter

kit-nextjs-location-starter

kit-nextjs-product-starter

kit-nextjs-skate-park

 

Final Thoughts

The Sitecore Content SDK represents a major leap forward for developers building on Sitecore AI. Unlike the older JSS SDK, which carried legacy dependencies, the Content SDK is purpose-built for modern headless architectures—lightweight, efficient, and deeply optimized for frameworks like Next.js. With features like App Router support, runtime and CLI configuration flexibility, and explicit component mapping, it empowers teams to create scalable, high-performance applications while maintaining clean, modular code structures.

How to Track User Interactions in React with a Custom Event Logger
Mon, 28 Jul 2025
https://blogs.perficient.com/2025/07/28/how-to-track-user-interactions-in-react/

In today’s data-driven world, understanding how users interact with your application is no longer optional; it’s essential. Every scroll, click, and form submission tells a story about what your users care about, what they ignore, and where they might be facing friction.

This is where event tracking and analytics come into play.

Traditionally, developers and product teams rely on third-party tools like Google Analytics, LogRocket, or Hotjar to collect and analyse user behaviour. These tools are powerful, but they come with trade-offs:

  • Privacy concerns : You may not want to share user data with external services.
  • Cost : Premium analytics platforms can be expensive.
  • Limited customization : You’re often restricted to predefined event types and dashboards.

 What Is Event Tracking?

Event tracking is the process of capturing and analyzing specific user interactions within a website or application. These events help you understand how users engage with your product.

 Common Events to Track:

  • Page Views – When a user visits a page
  • Button Clicks – Interactions with CTAs or navigation
  • Scroll Events – How far users scroll down a page
  • Form Submissions – When users submit data
  • Text Inputs – Typing in search bars or forms
  • Mouse Movements – Hovering or navigating with the cursor

Why Is It Important?

The primary goal of event tracking is to:

  • Understand user behaviour
  • Identify friction points in the UI/UX
  • Make data-informed decisions for product improvements
  • Measure feature adoption and conversion rates

Whether you’re a developer, product manager, or designer, having access to this data empowers you to build better, more user-centric applications.

In this blog, I’ll give you a high-level overview of a custom Event Tracker POC built with React.js and Bootstrap—highlighting only the key snippets and how user interactions are tracked.

  1. Reusable Event Tracker Utility:
    const eventTracker = (eventName, eventData = {}) => {
      const key = 'eventCounts';
      const existing = JSON.parse(localStorage.getItem(key)) || {};
      existing[eventName] = (existing[eventName] || 0) + 1;
      localStorage.setItem(key, JSON.stringify(existing));

      const event = {
        name: eventName,
        data: eventData,
        timestamp: new Date().toISOString(),
      };

      console.log('Tracked Event:', event);
      console.log('Event Counts:', existing);
    };
    

     

  2. Wherever an event happens, call the tracker in the following format (e.g., on form submit):
    eventTracker('Form Submitted', { name, email });

     

  3. To view the count for any tracked event, use the code below:
    export const getEventCount = (eventName) => {
      const counts = JSON.parse(localStorage.getItem('eventCounts')) || {};
      return counts[eventName] || 0;
    };
    
    

     

  4. Usage in dashboard
    import { getEventCount } from '../utils/eventTracker';
    
    const formSubmitCount = getEventCount('Form Submitted');
    const inputChangeCount = getEventCount('Input Changed');
    const pageViewCount = getEventCount('Page Viewed');
    const scrollEventCount = getEventCount('Scroll Event');
    
    

    This allows you to monitor how many times each event has occurred during the user’s session (if local storage is retained).
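For scroll events, the raw browser measurements need to be turned into a number worth logging. The helper below is a hypothetical addition (not part of the POC above) that computes scroll depth as a percentage; a scroll listener could then call eventTracker('Scroll Event', { depth }) when the depth crosses a threshold.

```typescript
// Convert raw scroll measurements into a 0–100 scroll-depth percentage.
function scrollDepthPercent(scrollTop: number, viewportHeight: number, pageHeight: number): number {
  const scrollable = pageHeight - viewportHeight;
  if (scrollable <= 0) return 100; // page fits in the viewport: fully "scrolled"
  return Math.min(100, Math.round((scrollTop / scrollable) * 100));
}

console.log(scrollDepthPercent(0, 800, 2000));    // 0
console.log(scrollDepthPercent(600, 800, 2000));  // 50
console.log(scrollDepthPercent(1500, 800, 2000)); // 100
```

Tracking thresholds (25%, 50%, 75%, 100%) rather than every scroll tick keeps the event counts meaningful and the storage small.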

Advantages of Custom Event Tracker:

  1. Full Control – Track only what matters, with custom data structure
  2. Data Privacy – No third-party servers, easier GDPR/CCPA compliance
  3. Cost Effective – No subscription, suitable for POCs and internal tools
  4. Custom UI – Fully customizable dashboard with React and Bootstrap
  5. No External Dependencies – Works offline or in secure environments
  6. Easy Debugging – Transparent logic and flexible debugging process

Conclusion:

  1. If your focus is flexibility, cost-saving, and data ownership, a custom event tracker built in any framework or library (like your POC) is a powerful choice—especially for MVPs, internal dashboards, and privacy-conscious applications.
  2. However, for quick setup, advanced analytics, and visual insights, third-party tools are better suited—particularly in production-scale apps where speed and insights for non-developers matter most.
  • Use custom tracking when you want control.
  • Use third-party tools when you need speed.
Moderate Image Uploads with AI/GenAI & AWS Rekognition
Thu, 24 Jul 2025
https://blogs.perficient.com/2025/07/23/moderate-image-uploads-with-ai-genai-aws-rekognition/

As we all know, in the world of reels, photos, and videos, everyone is creating content and uploading it to public-facing applications such as social media. There is no control over the type of images users upload to a website. Here, we will discuss how to restrict inappropriate photos.

The AWS Rekognition service can help you with this. AWS Rekognition content moderation can detect inappropriate or unwanted content and provide moderation labels for images. By using it, not only can you make your business site compliant, but you also save a lot of cost, as you pay only for what you use, with no minimum fees, licenses, or upfront commitments.

This will require Lambda and API Gateway.

Implementing AWS Rekognition

To implement the solution, you must create a Lambda function and an API Gateway.

So the flow of the solution will be as follows:

(Diagram: solution flow, with the image sent through API Gateway to Lambda, which calls Rekognition)

Lambda

To create Lambda, you can follow the steps below:
1. Go to the AWS Lambda service, click Create function, add the required information, and click Create function.

(Screenshot: creating the Lambda function)

  2. Make sure to add the permission below to the Lambda execution role:
{
    "Version": "2012-10-17",
    "Statement": [
       {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "rekognition:*",
            "Resource": "*"
        }
    ]
}
  3. You can use the Python code below:

The sample Python code below sends the image to the AWS Rekognition service, retrieves the moderation labels, and, based on those, decides whether the asset is safe or unsafe.

import base64
import json

import boto3

rekognition = boto3.client('rekognition')

filter_keywords = ["Weapons", "Graphic Violence", "Death and Emaciation", "Crashes",
                   "Products", "Drugs & Tobacco Paraphernalia & Use", "Alcohol Use",
                   "Alcoholic Beverages", "Explicit Nudity", "Explicit Sexual Activity",
                   "Sex Toys", "Non-Explicit Nudity", "Obstructed Intimate Parts",
                   "Kissing on the Lips", "Female Swimwear or Underwear",
                   "Male Swimwear or Underwear", "Middle Finger", "Swimwear or Underwear",
                   "Nazi Party", "White Supremacy", "Extremist", "Gambling"]

def check_for_unsafe_keywords(response: str):
    response_lower = response.lower()
    return [keyword for keyword in filter_keywords if keyword.lower() in response_lower]

def lambda_handler(event, context):
    # API Gateway delivers the uploaded image as a base64-encoded request body.
    image_bytes = base64.b64decode(event['body'])
    responses = rekognition.detect_moderation_labels(
        Image={'Bytes': image_bytes},
        MinConfidence=80
    )
    print(responses)
    unsafe = check_for_unsafe_keywords(str(responses))
    if unsafe:
        print("Unsafe keywords found:", unsafe)
        return {
            'statusCode': 403,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps({"Unsafe": "Asset is Unsafe", "labels": unsafe})
        }
    print("No unsafe content detected.")
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps({"safe": "Asset is safe", "labels": unsafe})
    }

4. Then click the Deploy button to deploy the code.

AWS API Gateway

You need to create an API Gateway that sends the image to Lambda for processing with AWS Rekognition and then returns the response to the user.

Sample API Integration:


(Screenshot: sample API Gateway integration)

 

Once this is all set up, when you send an image to the API Gateway in the request body, you will receive a response indicating whether it is safe or unsafe.
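A client-side call might look like the sketch below. The endpoint URL is a placeholder for your deployed API Gateway stage, and the helper name is invented; the key point is that the image bytes go base64-encoded in the body, matching what the Lambda above decodes with base64.b64decode(event['body']).

```typescript
// Build the request descriptor for the moderation endpoint (illustrative).
function buildModerationRequest(imageBytes: Uint8Array): {
  method: string;
  headers: Record<string, string>;
  body: string;
} {
  // Base64-encode the raw image bytes for transport in the request body.
  let binary = '';
  for (const byte of imageBytes) binary += String.fromCharCode(byte);
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: btoa(binary),
  };
}

// Usage (URL is a placeholder for your deployed stage):
// fetch('https://<api-id>.execute-api.<region>.amazonaws.com/prod/moderate',
//       buildModerationRequest(bytes)).then((res) => res.json());
```

A 200 response then indicates a safe asset and a 403 an unsafe one, per the Lambda's return codes.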

(Screenshot: sample safe/unsafe API response)

Conclusion

With this solution, your application prevents the upload of inappropriate or unwanted images. It is very cost-friendly, and it also helps make your site compliant.

 

Creating a Brand Kit in Stream: Why It Matters and How It Helps Organizations
Tue, 15 Jul 2025
https://blogs.perficient.com/2025/07/15/brandkit-sitecore-stream/

In today’s digital-first world, brand consistency is more than a visual guideline; it’s a strategic asset. As teams scale and content demands grow, having a centralized Brand Kit becomes essential. If you’re using Sitecore Stream, building a Brand Kit is not just useful, it’s transformational.

In my previous post, I explored Sitecore Stream, highlighting how it reimagines modern marketing by bringing together copilots, agentic AI, and real-time brand intelligence to supercharge content operations. We explored how Stream doesn’t just assist; it acts with purpose, context, and alignment to your brand.

Now, we take a deeper dive into one of the most foundational elements that makes that possible: Brand Kit.

In this post, we’ll cover:

  • What a Brand Kit is inside Stream, and why it matters
  • How to build one – from brand documents to structured sections
  • How AI copilots use it to drive consistent, on-brand content creation

Let’s get into how your brand knowledge can become your brand’s superpower.

 

What Is a Brand Kit?

A Brand Kit is a centralized collection of brand-defining assets, guidelines, tone, and messaging rules.

Brand Kit sections represent a subset of your brand knowledge.
A Brand Kit includes information about your brand such as:

  • Logo files and usage rules
  • Typography and color palettes
  • Brand voice and tone guidelines
  • Brand specific imagery or templates
  • Do’s and Don’ts of brand usage
  • Compliance or legal notes

Think of it as your brand’s source of truth, accessible by all stakeholders – designers, marketers, writers, developers, and AI assistants.

 

Why Stream Needs a Brand Kit

Stream is a platform where content flows – from ideas to execution. Without a Brand Kit:

  • Writers may use inconsistent tone or terminology based on their own knowledge or assumptions about the brand.
  • Designers may reinvent the wheel with each new visual.
  • Copilots might generate off-brand content.
  • Cross-functional teams lose time clarifying brand basics.

With a Brand Kit in place, Stream becomes smarter, faster, and more aligned with your organization’s identity.

 

How a Brand Kit Helps the Organization

Here’s how Stream and your Brand Kit work together to elevate content workflows:

  •  Faster onboarding: New team members instantly understand brand expectations.
  •  Accurate content creation: Content writers, designers, and strategists reference guidelines directly from the platform.
  •  AI-assisted content stays on-brand: Stream uses your brand data to personalize AI responses for content creation and editing.
  •  Content reuse and updates become seamless, with analysis: brand messaging stays consistent across landing pages, emails, and campaigns. You can also perform A/B testing with Brand Kit-generated content versus manually added content.

Now that we understand what a Brand Kit is and why it’s essential, let’s walk through how to create one effectively within Sitecore Stream.

 

Uploading Brand Documents = Creating Brand Knowledge

To create a Brand Kit, you begin by uploading and organizing your brand data; this includes documents, guidelines, assets, and other foundational materials that define your brand identity.

The screenshot below displays the uploaded brand document, the button to process the document, and an option to upload another document.

(Screenshot: uploading a brand document)

 

In Stream, when you upload brand-specific documents, they don’t just sit there. The process:

  • Analyses the data
  • Transforms them into AI-usable data by Creating brand knowledge
  • Makes this knowledge accessible across brainstorming, content creation, and AI prompts

(Screenshot: processing the uploaded brand document)

In short, here’s how the process works:

  • Create a Brand Kit – Start with a blank template containing key sections like Brand Context, Tone of Voice, and Global Goals.
  • Upload Brand Documents – Add materials like brand books, visual and style guides to serve as the source of your brand knowledge.
  • Process Content – Click Process changes to begin ingestion. Stream analyzes the documents, breaks them into knowledge chunks, and stores them.
  • Auto-Fill Sections – Stream uses built-in AI prompts to populate each section with relevant content from your documents.

 

Brand Kit Sections: Structured for Versatility

Once your Brand Kit is created and the uploaded documents are processed, Stream automatically generates key sections. Each section serves a specific purpose and is built from well-structured content extracted from your brand documents. These are essentially organized chunks of brand knowledge, formatted for easy use across your content workflows. The default sections that get created are as follows:

  • Global Goals – Your brand’s core mission and values.
  • Brand Context – Purpose, positioning, and brand values.
  • Dos and Don’ts – Content rules to stay on-brand.
  • Tone of Voice – Defines your brand’s personality.
  • Checklist – Quick reference for brand alignment.
  • Grammar Guidelines – Writing style and tone rules.
  • Visual Guidelines – Imagery, icons, and layout specs.
  • Image Style – Color, emotion, and visual feel.

Each section holds detailed, structured brand information that can be updated manually or enriched using your existing brand knowledge. If you prefer to control the content manually and prevent it from being overwritten during document processing, you can mark the section as Non-AI Editable.

Stream allows you to add new subsections or customize existing ones to adapt to your evolving brand needs. For example, you might add a “Localization Rules” section when expanding to global markets, or a “Crisis Communication” section to support PR strategies.

When creating a new subsection, you’ll provide a name and an intent: a background prompt that guides the AI to extract relevant information from your uploaded brand documents to populate the section accurately.

Below are screenshots of the sections created after brand document processing, along with an example subsection:

(Screenshot: Brand Kit sections)

(Screenshot: section details)

AI + Brand Kit = Smarter Content, Automatically

Now that we have created the Brand Kit, let’s see how AI in Stream uses it to:

  • Suggest on-brand headlines or social posts
  • Flag content that strays from brand guidelines
  • Assist in repurposing older content using the updated brand tone

It’s like having a brand-savvy assistant embedded in your workflow.

 

Brand Assistant in Sitecore Stream

Once you have your Brand Kit ready, you can use the Brand Assistant to generate and manage content aligned with your brand using simple prompts.

Key uses:

  • Ask brand-related questions
  • Access brand guidelines
  • Generate on-brand content
  • Draft briefs and long-form content
  • Explore ideas and marketing insights

It uses agentic AI, with specialized agents that ensure every output reflects your brand accurately.

When a user enters a prompt in the Brand Assistant, whether it’s a question or an instruction, the copilot automatically includes information from the Brand Context section of the Brand Kit. It then evaluates whether this context alone is enough to generate a response. If it is, a direct reply is provided. If not, specialized AI agents are activated to gather and organize additional information.

These include a Search Agent (to pull data from brand knowledge or the web), a Brief Agent (for campaign or creative brief requests), and a Summary Agent (to condense information into a clear, relevant response).
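As a rough illustration of that routing logic, the decision flow described above might look like the sketch below. Everything here is hypothetical; these are not Sitecore APIs, just a model of the described behavior:

```typescript
// Hypothetical sketch of the Brand Assistant routing flow described above.
// None of these names are Sitecore APIs; they only model the decision logic.

type Agent = "SearchAgent" | "BriefAgent" | "SummaryAgent";

interface PromptRequest {
  text: string;
  contextIsSufficient: boolean; // decided by the copilot in the real product
  isBriefRequest: boolean;      // campaign or creative brief requests
}

// Decide which specialized agents (if any) handle the prompt.
function routePrompt(req: PromptRequest): Agent[] {
  if (req.contextIsSufficient) {
    return []; // Brand Context alone is enough: reply directly
  }
  const agents: Agent[] = ["SearchAgent"]; // pull from brand knowledge or the web
  if (req.isBriefRequest) {
    agents.push("BriefAgent"); // draft the campaign or creative brief
  }
  agents.push("SummaryAgent"); // condense findings into a clear response
  return agents;
}
```

The point of the sketch is simply that the context check happens first, and the specialized agents only activate when that context falls short.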

I clicked on the Brand Assistant tab, selected my Brand Kit, and asked a question, and the response I got was spot on! It perfectly aligned with the brand documents I had uploaded and even suggested target consumers based on that information. Super impressed with how well it worked!

Brandkit Selection In Assist

Brainstorm

 

Now it’s time to see how the Brand Kit helps me generate content in XM Cloud or the Experience Platform. To do that, I needed to connect my XM Cloud website with Sitecore Stream so the copilots can access the Brand Kit.

I simply went to Site Settings, found the Stream section, and selected my Stream instance. That was it: I was all set to use the Brand Kit.

Brandkit Setting In Site

Now, when I open the page editor and click on Optimize, I see an additional option with my Brand Kit name. Once selected, I can either draft new text or optimize existing content.

The copilot leverages the Brand Kit sections to generate content that’s consistent, aligned with our brand voice, and ready to use.

For example, I asked the brand kit to suggest campaign content ideas and it provided exactly the kind of guidance I needed.

Campaign Page

 

Conclusion

Building and maintaining a Brand Kit in Stream isn’t just about visual consistency; it’s about scaling brand intelligence across the entire content lifecycle. When your Brand Kit is connected to the tools where work happens, everyone from AI to human collaborators works with the same understanding of what your brand stands for.

#1 Barrier to Implementing a Content Supply Chain at Large Organizations https://blogs.perficient.com/2025/03/03/implementing-a-content-supply-chain-at-large-organizations/ https://blogs.perficient.com/2025/03/03/implementing-a-content-supply-chain-at-large-organizations/#respond Mon, 03 Mar 2025 16:25:21 +0000 https://blogs.perficient.com/?p=378024

As I gear up for my presentation at the Adobe Summit this March, I’ve been reflecting on the transformative potential of a well-executed content supply chain—and the hurdles large organizations face in making it a reality. And since it is Summit, I will obviously be referencing tools like Adobe GenStudio, Adobe Workfront, AEM Sites, and AEM Assets, which all aim to streamline content creation, management, and activation. Yet, one pain point consistently rises to the top when implementing this process at scale: siloed teams and disconnected workflows.

The #1 Barrier to an Efficient Content Supply Chain

In large organizations, it is practically impossible to audit content generation because production often resembles a patchwork quilt rather than a seamless assembly line. Each patch represents a different department or agency with its own processes, which makes stitching them together difficult.

For example, the marketing team might be crafting campaigns with their agency, while design teams work in isolation on visuals that get handed off to the dev team for assembly within siloed channel teams. This fragmentation isn’t just a minor inconvenience—it’s the number one barrier to achieving an efficient content supply chain.

This pain point isn’t insurmountable, however. The Adobe Experience Cloud is designed specifically to bridge these process clusters and help orchestrate tasks across teams, ensuring everyone—from copywriters to legal reviewers—is aligned on timelines and deliverables. Adobe has invested significant dollars into creating one holistic solution, now coined GenStudio from Adobe, to streamline content development. The trick is getting everyone on the same page.

Simply Buying Tech Won’t Solve Your Content Supply Challenges

Ok, so, yes, it’s the age-old adage of people and processes, not just technology. You probably didn’t need to read this to figure that out, but if it is so obvious, why do so many large organizations struggle to improve? My observation is this: if you don’t have the proper change management and cross-functional training in place, and if you can’t foster a cultural shift toward collaboration, then all the technology in the world won’t help. That makes leadership buy-in critical.

At Adobe Summit, my plan is to review the technology elements but to also dive deeper into how organizations can tackle this pain point head-on, with real-world examples and practical strategies. We have to start by connecting the dots—and the people—behind the content, unlocking the full potential of the organization to scale up content production.

Stay tuned for more insights as I prepare for March, and let me know your thoughts on streamlining content workflows in the comments!

Attending Adobe Summit 2025?

Join us for lunch during Adobe Summit to explore why having a clear, strategic vision is essential before deploying new technologies. We’ll discuss how GenStudio and other tools can fit into your existing content workflow to maximize efficiency and creativity.

We hope to see you there!

Beyond GenStudio: Crafting a Modern Content Supply Chain Vision
Wednesday, March 19 | 11:30 A.M. – 1:30 P.M.
Register

How Sitecore Drives Digital Strategy Value as a Composable DXP https://blogs.perficient.com/2025/01/31/how-sitecore-drives-digital-strategy-value-as-a-composable-dxp/ https://blogs.perficient.com/2025/01/31/how-sitecore-drives-digital-strategy-value-as-a-composable-dxp/#comments Sat, 01 Feb 2025 03:04:11 +0000 https://blogs.perficient.com/?p=376719

Have you seen the speed at which the digital landscape is shifting and evolving and thought to yourself, how can I keep up? How can I level up my organization’s digital customer experience and futureproof my website and digital ecosystem to ensure consistent growth for years to come?

The answer might just be a shift to a Composable Digital Experience Platform (DXP) like Sitecore. This is the latest approach to providing digital experiences that offer flexibility, scalability and faster iteration. Sitecore is a true leader in digital experience management and is fully embracing this composable future, while empowering businesses to create personalized experiences for their customers. Let’s take a closer look at what this means for your strategy and how Sitecore can help you navigate this transition.

What are the key benefits of a composable DXP?

We are coming from a place where monolithic DXPs were the norm. While these platforms offered convenience, they could be expensive, required regular upgrades, and were difficult to scale, especially with the introduction of AI technologies.

Some of the benefits that migrating to a composable DXP can offer include, but are certainly not limited to:

  • Greater Flexibility
  • Scalability
  • Faster Innovation

How can Sitecore specifically power your composable digital strategy?

Sitecore has shifted from a one-size-fits-all platform to a modular ecosystem, where companies can seamlessly integrate custom components, APIs, and third-party platforms. Here are some key areas where Sitecore’s composable DXP is driving results for customers across numerous industries.

  1. Sitecore XM Cloud: Sitecore’s cloud-based platform supports headless content delivery. This means businesses can expect faster time to market for strategic content publishes, reduced maintenance costs, and consistency across all digital channels.
  2. Sitecore CDP & Personalize: Sitecore’s Customer Data Platform (CDP) and personalization features help businesses extract real-time customer insights to dynamically display content. This leads to increased conversion and improved customer experience.
  3. Sitecore Content Hub & Sitecore Stream: While Content Hub provides a centralized digital asset management (DAM) system, it also helps automate content creation workflows. Sitecore Stream transforms content lifecycles with AI workflows, generative copilots, and brand aware AI.

Final Thoughts

As you can see, there are many reasons why a composable DXP makes sense for organizations across all industry verticals, and Sitecore specifically can add a ton of value to marketing and technology teams alike in a world of constant change. At Perficient, we have a team of dedicated and experienced folks ready to help you tackle the transformation and transition into the world of composable DXP. Reach out to us today, and see how we can work with you to drive outstanding digital experiences for your customers.

Think Big, Start Epic: Harnessing Agile Epics for Project Success https://blogs.perficient.com/2024/09/19/think-big-start-epic-harnessing-agile-epics-for-project-success/ https://blogs.perficient.com/2024/09/19/think-big-start-epic-harnessing-agile-epics-for-project-success/#respond Thu, 19 Sep 2024 17:01:07 +0000 https://blogs.perficient.com/?p=369524

Let’s be honest – projects can get messy fast. It’s all too easy to get tangled up in the details and lose sight of the bigger picture. That’s where Agile epics step in, helping you think big while staying grounded in the steps that lead to success. Epics act as the link between your grand strategy and the day-to-day tasks, giving your team the clarity to drive meaningful progress. Whether you’re steering a massive project or managing smaller innovations, mastering epics is key to unlocking the flexibility and focus that Agile promises. In this post, we’ll show you how epics empower teams to think big, act smart, and deliver results.

What is an Epic?

In a hierarchy of work, epics are formed by breaking down higher-level themes or business goals. They are large initiatives that encompass all the development work needed to implement a larger deliverable. An epic is too large to be completed in a single scrum team’s sprint, but it is smaller than the highest-level goals and initiatives. Epics are intentionally broad, light on details, and flexible.

Here’s what that means: The epic is broken down into smaller pieces of work. Your team may call these smaller pieces product backlog items/tickets, user stories, issues, or something else. As conditions or customer requirements change over time, these smaller pieces can be modified, removed, or added to a team’s product backlog with each sprint. In this way, the epic is flexible, providing direction without requiring heavy investment in its plans and details.

Agile Requirements Image

Why Are Epics Important?

Instead of tackling the whole epic at once with a deadline in a few months, you and your teammates deliver small increments of value to your customers, users, or stakeholders each sprint. When changes are needed, you adapt the plan easily. Had your team taken on the entire epic at once, they might find that changes have rendered the epic obsolete by the end.

How to Identify Epics?

Agile epics should describe major product requirements or areas of functionality that define the user experience. You can think of them as categories or parents for user stories that may not directly relate to each other but fall under the same umbrella of functionality (e.g. UI Improvements). Epics can become unwieldy quickly, so it’s worth examining them along the following lines to determine if the size is appropriate or not. Remember, the goal is for the epic to be fully delivered!

  • Does the epic span products? If so, it may be more appropriate to split the epic along product lines.
  • Do the success criteria support each other entirely? If there is conflict between measurements, splitting the epic would be warranted.
  • Is the epic for multiple customer segments? Targeting different customer groups is likely to lead to contention between measurement and goals.
  • How risky is the epic? An effective mitigation strategy may be to compartmentalize the risk across several epics rather than concentrating it in one.
  • Would working on the epic effectively shut down all other development work? This may be an indication that the epic is too large (even if the business priority is clearly highest) and could introduce an extra level of risk that may not have been considered or can be easily mitigated.

Who Creates and Manages Epics?

In Agile, the creation of epics typically starts with the product manager, who has a deep understanding of the project’s long-term vision and business objectives. The product manager identifies major areas of work, shaping them into epics that guide the team’s efforts. While the product manager leads this process, it often involves input from various stakeholders and team members to ensure that each epic aligns with overall project goals. Once established, the product manager is responsible for managing these epics, breaking them down into smaller tasks, and prioritizing them with the product owner to support effective sprint planning and execution.

How to Craft Effective Epics?

  • Define Clear Goals: Begin by identifying the epic’s objectives. Understand the problem it seeks to address and clarify how it will drive value for the project and stakeholders.
  • Collaborate for Alignment: Involve key stakeholders—such as team members, users, and business leaders—to ensure the epic is well-rounded and matches user needs and business priorities.
  • Maintain Flexibility: Though the epic should offer clear direction, it’s important to leave space for changes as new insights or requirements emerge during development.
  • Prioritize Value: Ensure that every aspect of the epic contributes meaningfully to delivering tangible value to both the customer and the overall project.

Epic Structure: Key Components of a Well-Written Epic

  • Title: The title should succinctly summarize the core of the epic, giving the team and stakeholders a quick understanding of its focus.
  • Overview: Write a concise summary that outlines the epic’s objectives and the value it delivers to both the project and the end-user. Consider the target audience and competitors while framing this.
  • Actionable Features: Break the epic down into smaller, actionable features that are measurable and align with the epic’s primary goals. These features should be traceable to specific user needs or project requirements.
  • Success Criteria: Clearly define how the success of the epic will be measured. This should go beyond basic acceptance criteria and include broader business outcomes that may evolve over time.
  • Dependencies: Identify any interdependencies with other epics, projects, or external factors that could influence the epic’s progress.
  • Timeline: While the exact timeframe might not be locked, establishing a rough schedule helps prioritize the work and manage stakeholder expectations.
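For teams that track epics in tooling or code, the components above can be sketched as a simple data shape. This is a hypothetical model; the field names and the completeness check are illustrative, not taken from Jira, Azure DevOps, or any specific tool:

```typescript
// Hypothetical data model for the epic components listed above.
// Field names are illustrative, not from any specific Agile tool.

interface Epic {
  title: string;             // succinct summary of the epic's core
  overview: string;          // objectives and the value delivered
  features: string[];        // smaller, actionable, measurable features
  successCriteria: string[]; // how success will be measured
  dependencies: string[];    // other epics, projects, or external factors
  targetTimeline?: string;   // rough schedule; may not be locked yet
}

// A minimal completeness check: a well-written epic needs a title,
// an overview, at least one feature, and at least one success criterion.
function isWellFormed(epic: Epic): boolean {
  return (
    epic.title.trim().length > 0 &&
    epic.overview.trim().length > 0 &&
    epic.features.length > 0 &&
    epic.successCriteria.length > 0
  );
}
```

A check like this can catch epics that are still too vague to schedule, before they reach sprint planning.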

Next Steps

In conclusion, epics are fundamental to Agile methodology and critical to the Scrum framework. They help product managers, product owners, and key stakeholders manage and organize the product backlog effectively. Developers can also use epics to plan iterations, breaking them into manageable sprints, and systematically collect customer feedback. As outlined, epics serve as an asset for Agile teams, allowing for the grouping of user stories to aid in prioritization and incremental value delivery.

Effectively creating and managing epics can be challenging without the right approach. If you’re finding it difficult to structure your epics, align them with business goals, or manage their scope within your team, don’t hesitate to reach out to us at Perficient. Our experts can help you refine your process, ensuring that your epics are well-defined, manageable, and strategically aligned with your project’s success.

Contact us today to learn how we can assist your team in mastering Agile epics!

Giving the Power of Speech Real Horsepower with Voice-to-Everything Capabilities https://blogs.perficient.com/2024/08/28/giving-the-power-of-speech-real-horsepower-with-voice-to-everything-capabilities/ https://blogs.perficient.com/2024/08/28/giving-the-power-of-speech-real-horsepower-with-voice-to-everything-capabilities/#comments Wed, 28 Aug 2024 20:05:04 +0000 https://blogs.perficient.com/?p=368309

With the 2024 Paris Summer Olympics now behind us, I pause for a moment to reflect on a time when the last summer games were held in Europe. The year was 2012, and the Olympics had just wrapped up in London, the queen had celebrated 60 years upon the throne, and in true royal fashion, I had just purchased the latest Ford Explorer.  

This Ford Explorer came in Triple Black with every feature including the latest version of sync with voice control. I was giddy with excitement and felt like I was Captain Kirk at the helm of the Starship Enterprise steering towards new horizons. But… the voice activation was not what I had hoped for.  

When attempting to call my mother, I got my friend Monica, and when trying to dial a colleague, I received a childhood friend. If you know me, then you understand that navigation isn’t my strong suit, and when searching for directions to Birmingham in Michigan, I would consequently be sent to Alabama. You get the picture.  

Speed back to 2024 and voice-to-everything is transforming the automotive industry. Thankfully, the voice control in my 2023 Ford Edge is now working much better — the way it was intended. 

Voice-to-Everything Technology Allows for Expanded Vehicle Control  

The automotive industry is undergoing a significant transformation driven by advancements in technology that are reshaping the way we interact with our vehicles. One of the most exciting developments in this space is the rise of voice-to-everything (VTE) technology. This innovation is poised to redefine the driving experience, increasing intuition, safety, and making it more connected than ever before.  

VTE technology refers to the integration of voice-controlled systems throughout a vehicle, allowing drivers and passengers to interact with the car’s functions using simple voice commands. This technology leverages advancements in artificial intelligence (AI) and natural language processing (NLP) to understand and execute spoken instructions, minimizing the need for physical controls or manual inputs. In essence, VTE in automotive transforms your voice into the primary interface for controlling the vehicle, including everything from adjusting the climate controls, to navigating to destinations, or even managing entertainment options.  

Just Like Language Itself, Voice Technology Has Evolved Over Time 

VTE in cars isn’t entirely new, but it has come a long way from the rudimentary systems of the past. Early voice-activated systems often struggled with accuracy, limited vocabulary, and rigid command structures. However, recent advancements in AI and machine learning have dramatically improved these systems, enabling them to understand context, recognize natural speech patterns, and respond accurately even in noisy environments. Modern vehicles are now equipped with sophisticated voice assistants that can manage a wide range of functions. These systems are no longer limited to basic commands; they can engage in complex interactions, understand conversational language, and even learn from user preferences over time.

How Voice-to-Everything is Transforming the Driving Experience

The integration of VTE in vehicles offers several significant benefits, fundamentally changing how drivers and passengers interact with their cars.

To begin, VTE makes the driving experience more convenient and user-friendly. Instead of fumbling with buttons or touchscreens, drivers can simply speak their commands. This ease of use is particularly beneficial in complex, multitasking scenarios, such as driving in heavy traffic or during long trips. Modern VTE systems can learn from the driver’s habits and preferences, offering a personalized experience. For instance, the system can remember your preferred routes, favorite radio stations, or climate settings, automatically adjusting to your preferences as soon as you step into the car. 

Further, as vehicles become more connected, VTE plays a crucial role in integrating the car with other smart devices and services. Drivers can use voice commands to interact with their smartphones, smart homes, and other connected systems, creating a seamless experience that extends beyond the vehicle.

This hands-free approach is not only more convenient but also significantly enhances safety by reducing distractions.  By enabling drivers to control various functions without taking their hands off the wheel or eyes off the road, VTE greatly enhances driving safety.  Whether it’s making a phone call, changing a song, or setting up navigation, voice commands allow drivers to stay focused on the road. An additional benefit is increased productivity during long commutes, which significantly improves the driver experience. 

Finally, VTE is paving the way for the future of autonomous driving. As cars become more autonomous, voice commands will likely become the primary mode of interaction between the driver and the vehicle, allowing for smooth control of the car’s functions even when manual driving is no longer required. 

Let’s Drive Towards a Voice-Powered Future Together 

Voice-to-everything is rapidly becoming a cornerstone of the modern automotive experience. By making driving safer, more convenient, and more connected, this technology is set to revolutionize the way we interact with our vehicles. As it continues to evolve, VTE will play a crucial role in shaping the future of transportation, bringing us closer to a world where the sound of our voice is all that’s needed to command the road. Just to be clear, I am not yet ready to include my vehicle in my friend group, or as part of my fantasy team, but it’s clear that the voice-driven car is more than just a concept—it’s the future.  

As I’ve mentioned in a previous blog, Perficient is in the middle of conducting primary research on connected products. We also have a robust innovations lab that routinely helps OEMs with their customer experiences, data needs, and cloud infrastructure.  Please explore our automotive expertise and schedule a meeting, as we would love to discuss how we can help create a sustainable competitive advantage for you. 

How Data and Personalization are Shaping the Future of Travel https://blogs.perficient.com/2024/08/26/how-data-and-personalization-are-shaping-the-future-of-travel/ https://blogs.perficient.com/2024/08/26/how-data-and-personalization-are-shaping-the-future-of-travel/#comments Mon, 26 Aug 2024 17:53:57 +0000 https://blogs.perficient.com/?p=367966

Generic travel brochures and one-size-fits-all itineraries are becoming less prevalent in today’s travel and tourism industry. Travelers crave truly unique experiences, and the industry is responding with a powerful tool: data. By harnessing the power of data and personalization, travel companies are unlocking a new era of customer engagement, satisfaction, and loyalty.

Bespoke Travel Experiences Powered by Data

Travel recommendations shouldn’t be generic suggestions that could be found with a cursory Google search, but rather curated experiences that anticipate your every desire. With data, that can be the case. By analyzing everything from past booking history to social media preferences, travel companies can build a rich profile for your travel dossier. This data goldmine allows them to personalize itineraries that cater to your specific interests, whether that’s a reservation at a hidden culinary gem for the adventurous gourmet who devoured Anthony Bourdain’s Parts Unknown, or a serene nature escape for those who indicate they want to get away and unplug. Data doesn’t just personalize experiences; it also fuels intelligent recommendations.

Unlocking Customer Loyalty Through Personalization

Personalization is no longer a perk; it’s the expectation. Travelers crave experiences that feel designed just for them, and data empowers travel companies to deliver exactly that. Imagine receiving exclusive deals on flights to destinations you’ve been dreaming of, or automatic upgrades to experiences that resonate with your passions. By leveraging data and personalization, travel companies can build deeper connections with their customers, fostering lifelong loyalty. This translates into repeat business, positive word-of-mouth recommendations, and glowing five-star reviews.

Creating A Hands-Off Travel Experience with Customer Data

Data and personalization extend far beyond basic recommendations. Travel companies can leverage this powerful duo to elevate the entire travel journey. Imagine booking a flight and having your preferred seat automatically preselected based on past choices, or arriving at a hotel that remembers your favorite room temperature and sleep number and adjusts accordingly. Data can even personalize in-destination experiences. Instead of calling around for restaurants that can accommodate dietary restrictions, your recommendations will have already taken them into account. Rather than searching through various tour programs, they’ve already been curated to align with your historical interests.

On-the-spot Location Tailored Experiences

Location data adds another exciting dimension to personalization. You could be exploring a new city and receive real-time notifications about off-the-beaten-path cafes or historical landmarks that are right around the corner. Travel companies can use location data to send personalized offers for nearby attractions or cultural events, ensuring you make the most of every moment. Thanks to real-time suggestions and knowledge of your current location, you can be assured that you’ll be updated on impending weather conditions. This ensures a comfortable travel experience, providing a safe and cozy hideaway for you if there’s a need to duck in off the road and enjoy some shelter.

Personalization with Privacy in Mind

While travelers crave personalization, they also value privacy. The key lies in striking a balance. Transparency is crucial, allowing travelers to understand how their data is used along with giving them the power to control their privacy settings. Finally, travel companies must ensure data security and improve transparency about their policies to build trust with their customers.

AI Enables Travel Companies to Embrace New and Unknown Terrain

Data and personalization, especially enabled by artificial intelligence, will continue to evolve, and the travel landscape will transform with it. We’re entering a future where AI-powered travel companions will use data to anticipate your needs, suggest local experiences, and deftly navigate language barriers. Travel companies that embrace the power of data and personalization will be the ones who unlock the greatest opportunities, fostering strong customer relationships and defining the future of travel.

Forge the future of adventures and accommodations with our travel and hospitality expertise.

 

Creating a Sound A/B Test Hypothesis https://blogs.perficient.com/2024/08/15/creating-a-sound-a-b-test-hypothesis/ https://blogs.perficient.com/2024/08/15/creating-a-sound-a-b-test-hypothesis/#comments Thu, 15 Aug 2024 14:16:18 +0000 https://blogs.perficient.com/?p=367439

A hypothesis is important for understanding what you are trying to prove with your A/B test. A well-formed hypothesis acts as a guide for the test.

A hypothesis is going to challenge an assumption you have about your website’s performance and/or visitor behavior. What is the assumption you want to validate as right or wrong?

Ask yourself these questions when coming up with your test hypothesis:

  • What assumption are you addressing? Is there data to support your assumption?
  • What solution are you proposing to address the challenged assumption?
  • What is the anticipated outcome of your challenge? What metrics will be impacted if you make the specific change?

Asking those questions will help us ensure the hypothesis is S.O.U.N.D.:

Specific – the hypothesis should clearly define the change that is being tested.
Objective – while the test is proving or disproving an assumption – that assumption should be based upon actual insights – analytics, industry research, or user feedback for example.
User-focused – the hypothesis should address a user pain point. Focusing on user experience will increase test engagement and result in better outcomes.
Needs-based – the hypothesis should address a business need. Spend time on tests that will bring value to the business as well as the user. Keep ROI front of mind.
Data-driven – always make sure the hypothesis has measurable metrics and a clear quantitative goal.

Some examples of a solid hypothesis are:

The current headline on our landing page lacks a clear value proposition, so changing the headline to a more concise and benefit-oriented version will increase conversion rate.

Our promo banners blend in with the page design causing users to scroll by them, so testing a more contrasting color will increase CTA clicks on the banners.

The lead capture form is too long causing users to exit the site, so reducing the number of form fields from 20 to 10 will increase the number of leads.
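Once a data-driven hypothesis like these is live, you eventually need to check whether the observed lift is real or just noise. A common approach is a two-proportion z-test. The sketch below uses made-up numbers and a 95% confidence threshold, and is illustrative rather than a complete testing methodology (real programs should also plan sample size up front):

```typescript
// Two-proportion z-test: is the variant's conversion rate significantly
// higher than the control's at the 95% confidence level (z > 1.96)?
// Illustrative only; numbers below are invented for the example.

function zScore(
  controlVisitors: number, controlConversions: number,
  variantVisitors: number, variantConversions: number
): number {
  const p1 = controlConversions / controlVisitors;
  const p2 = variantConversions / variantVisitors;
  // Pooled conversion rate under the null hypothesis (no real difference)
  const pooled =
    (controlConversions + variantConversions) /
    (controlVisitors + variantVisitors);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / controlVisitors + 1 / variantVisitors)
  );
  return (p2 - p1) / standardError;
}

// 10% control conversion vs. 13% variant conversion, 1000 visitors each
const z = zScore(1000, 100, 1000, 130);
console.log(z.toFixed(2), z > 1.96 ? "significant" : "not significant");
```

Tying the hypothesis to a concrete check like this keeps the “data-driven” part of S.O.U.N.D. honest: the quantitative goal is decided before the test, not after.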

 

Composable Martech: Orchestration & Federation https://blogs.perficient.com/2024/05/06/composable-martech-orchestration-federation/ https://blogs.perficient.com/2024/05/06/composable-martech-orchestration-federation/#respond Mon, 06 May 2024 14:41:12 +0000 https://blogs.perficient.com/?p=362120

Part 3 in our “unpack the stack” series on composable martech is all about the data – specifically, access to the data – the middle layer of the stack. The next set of capabilities we’re exploring is Orchestration and Federation. These two capabilities go well together because they are very similar and have some overlap, so let’s unpack ’em.

Orchestration and Federation in a Composable Architecture

At a high level, the “orchestration and federation” category represents the underlying access and routing to data across a variety of back-end martech products – from PIM, CMS, Order Management, DAM, Marketing Automation, internal and external proprietary databases, etc. While the prior topics of FEaaS and Experience Builders focus on the visual expression of content, data, and layout, orchestration and federation capabilities provide access (and intelligence!) to the actual content and data to hydrate those experiences. Let’s better understand the differences here.

Orchestration vs. Federation

The reality is these terms are often used interchangeably, so the definitions below are my take based on how they are often used in reality and… a bit of the dictionary:

  • Federation means bringing multiple sources of data/content together into a consolidated and predictable “place” – in reality the “place” may be a martech tool that holds a copy of all of the data/content, or simply an API facade that sits on top of the underlying systems’ APIs. More on this in a bit. The key point here is it’s a unification layer in the martech stack, a single entry point to get access to the back-end systems via a single API.
  • Orchestration is the same as Federation; however, it brings a bit more logic to the data party, providing some level of intelligence and control over exactly what data/content is provided for consumption. It’s like air traffic control for the flow of data from back-end systems.
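The distinction above can be sketched in a few lines of TypeScript. This is a minimal, hypothetical illustration of federation as a single entry point; the source names ("cms", "pim") and fetchers are made up for the example, not any product’s real API:

```typescript
// Minimal sketch of federation as a unification layer: one API call,
// routed to whichever underlying system owns the data.
type Fetcher = (id: string) => Promise<Record<string, unknown>>;

class FederationFacade {
  private sources = new Map<string, Fetcher>();

  // Register an underlying system's API behind the single entry point.
  register(name: string, fetch: Fetcher): void {
    this.sources.set(name, fetch);
  }

  // The consumer only ever calls the facade, never the back-ends directly.
  async get(source: string, id: string): Promise<Record<string, unknown>> {
    const fetch = this.sources.get(source);
    if (!fetch) throw new Error(`Unknown source: ${source}`);
    return fetch(id);
  }
}
```

An orchestration layer would wrap the same facade with rules about *which* data to return for a given context, rather than simply routing the call.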

Examples of Content Federation

Content Federation is a unification capability where you can combine multiple back-end sources together in a composable stack. A few examples include:

Hygraph Content Federation

Hygraph Content Federation

Hygraph Remote Sources unify multiple back-end source system APIs directly into the Hygraph API, so the consumer (e.g. a web app) only needs to access the Hygraph API rather than each underlying system directly. You can read more about the content federation concept from Hygraph or see it live in a video! One thing to note is that Hygraph does not actually fetch and store external data inside Hygraph; instead, the remote source’s API schema is merged into the Hygraph API, so a single call to the Hygraph API makes an “under the hood” call to the external API at query time.

Contentful Content Orchestration’s External References

Contentful External References

Contentful External References (ironically) is a feature of Contentful “content orchestration” (see what I mean about these terms being used interchangeably?). External References allows system users to register external API sources that get merged into the Contentful GraphQL API so a consumer only needs to use one API. This is nearly identical in capability to Hygraph; however, one important thing to note is that Contentful allows for bi-directional editing of external data. That means a CMS user can directly edit the external data from the Contentful CMS UI (assuming the API is set up to handle that). One key advantage of bi-directional editing is that a business user does not need to log into the other systems to make edits – they can stay inside the Contentful interface to do all of the editing.

Netlify Connect

Netlify Connect

Netlify Connect is another good example of federation following a similar model to Hygraph and Contentful. In Netlify Connect you can configure multiple “data layers” to back-end systems using pre-built integrations provided by Netlify, or to your own proprietary system using the Netlify SDK. A great use case for this custom approach is if you have a proprietary system that is difficult to get data out of and requires custom code.

The most notable difference with Netlify Connect is that it actually fetches and caches your external data into its own database and exposes snapshots of the historical data. This means you can use historical data revisions to query a specific set of data at a point in time, especially if you need to troubleshoot or rollback the state of an experience.

Optimizely Graph

Optimizely Graph

Unlike the prior examples, Optimizely is a more traditional DXP that is leaning heavily into headless with the likes of Sitecore, Adobe, dotCMS, and others.

Optimizely Graph is the new GraphQL API to serve headless experiences built on Optimizely. One subtle (and maybe overlooked?) feature of Graph is the ability to register external data sources and synchronize them into Graph. Based on the documentation as it stands today, this work appears to be primarily developer-driven and requires developers to write custom code to fetch, prepare, and submit the data to Graph. That said, the benefits mentioned previously still stand: headless experiences consume content from a single API while, behind the scenes, the synchronization process fetches and stores the external data into Graph.

Enterspeed

"</p

Enterspeed is a good example of a pureplay product that focuses on unification as the middle layer in a composable architecture. It allows you to ingest external data, transform that data, and deliver that data to various touchpoints, all via a high-speed edge network.

WunderGraph Cosmo

Wundergraph Cosmo

WunderGraph provides GraphQL microservice federation. It’s an open source and hosted product that helps you manage multiple back-end databases, APIs, authentication providers, etc. Additionally, it’s designed so that developers can declare the compositions of APIs they want in code, following a Git-based approach, instead of requiring UI-based setup and configuration.

Hasura

Hasura

Hasura provides GraphQL federation similar to WunderGraph. It provides a single GraphQL API to consumers with the ability to connect several underlying systems such as REST APIs and databases (e.g. Postgres, SQL Server, Oracle, etc.).

Examples of Orchestration

Digital Experience Orchestration with Conscia

DXO is an emerging capability pioneered by Conscia.ai to help solve the problem of integrating many back-ends to many front-ends. DXO helps to orchestrate the complexity in the middle via a unified layer that all back-end services and systems of records communicate with, as well as a unified front-end API for experiences to consume:

Conscia

A key tenet of this approach is to continue to leverage real-time APIs from the back-end systems, for example, a pure play headless CMS and a commerce engine. The DXO not only acts as a façade in front of these back-end systems (similar to an API gateway), it also provides other benefits:

  • Unifies data across back-end systems of record like you see with Federation
  • Provides enhanced business logic and rules by allowing business users to chain APIs together, avoiding static logic written into code by developers
  • Offers performance improvements by caching the real-time calls to the back-end APIs as well as pushing as much computation (e.g. business logic) to the edge closer to the end users

One key value proposition of Conscia’s DXO is a business user-friendly canvas to integrate multiple API call responses and create chains of calls. For example, the response of one API call might become an input to another API call – logic that is otherwise often hard-coded by developers:

Conscia Canvas
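The chaining idea can be sketched roughly as follows. This is a hypothetical illustration only; in a DXO the chain would be configured visually by a business user rather than written as code:

```typescript
// Minimal sketch of orchestration as a chain of API calls, where each
// response is merged into the context passed to the next call.
type Step = (ctx: Record<string, unknown>) => Promise<Record<string, unknown>>;

async function runChain(steps: Step[], seed: Record<string, unknown>) {
  let ctx = seed;
  for (const step of steps) {
    // Each response becomes input available to the next step.
    ctx = { ...ctx, ...(await step(ctx)) };
  }
  return ctx;
}
```

For example, a hypothetical "get customer" step could return a segment, and a subsequent "get offers" step could use that segment to pick the right content.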

Conscia’s DXO provides two key capabilities:

  • DX Engine as the federation layer to unify multiple back-end sources of data and content, as well as a rules engine and intelligent decisioning to orchestrate the right content and data for the individual experience
  • DX Graph as a centralized hub of all data, especially useful if you have legacy back-end systems or proprietary systems with hard-to-access data. The DX Graph can connect to modern APIs to visualize (via a graph!) all of your data, but crucially it also becomes a centralized hub for proprietary data that may require scheduled sync jobs, batch file processing, and similar integration methods.

Similar patterns: API Gateways & BFFs

Is this like an API Gateway?
Yes and no. An API gateway provides a façade on top of multiple back-end services and APIs; however, it mostly performs choreography, acting as an event broker between the back-end and front-end (client). An orchestration system puts a brain in the API gateway, acting as a centralized hub, and allows business users to control more of the logic.

Is this similar to the BFF (backend for frontend) design pattern?
Sort of. If the specific federation or orchestration tooling you are using allows you to control the shape of your API responses for specific consumers (e.g. frontend clients in a BFF), then you can implement a BFF. This is definitely a good use case for Conscia.
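As a rough illustration, here is what that BFF-style response shaping might look like in TypeScript. The Product shape and consumer names are assumptions for the example, not any product’s actual schema:

```typescript
// Minimal sketch of BFF-style shaping: one unified record,
// shaped differently per consumer.
interface Product {
  id: string;
  name: string;
  description: string;
  price: number;
  imageUrl: string;
}

function shapeForConsumer(p: Product, consumer: "web" | "mobile") {
  // A bandwidth-constrained mobile client gets a trimmed payload;
  // the web client gets the full record.
  if (consumer === "mobile") {
    return { id: p.id, name: p.name, price: p.price };
  }
  return p;
}
```

In a real federation/orchestration tool this shaping would typically be configured per consumer rather than branched in code, but the principle is the same.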

Why do orchestration and federation matter in composable architecture?

In a truly composable stack, we need to consider the fact that multiple systems in use means multiple sources of truth: CMS, another CMS, maybe another, PIM, DAM, OMS – the list goes on. It is absolutely possible to integrate with all of these systems directly from your head (the experience implementation you are providing, such as a web site, mobile app, etc.). However, direct integrations like this tend to break down when you scale to multiple experiences, since all of the back-end data integration logic lives in a specific experience implementation/head (e.g. the website application code).

So, what’s the alternative to putting the integrations directly in your head?

  • Abstract it out and build a DIY integration layer: this sounds like a lot of work, but it certainly is possible. However, it may be hard to scale, add features, and maintain since it will turn into a bespoke product within your architecture.
  • Buy a federation/orchestration tool: why build it when there are products that already handle this? Focus on your specific business instead of building (and maintaining!) a bespoke product – just as you would buy, not build, a CMS, PIM, or OMS.

A dedicated federation/orchestration layer offers the following key benefits:

  • A single unified API for consumers (marketing site, web portal, native mobile app, data syndication to other systems/channels, etc.)
  • Promotes the concept that systems of record should truly own their data and avoids needing to write custom middleware to handle the orchestration and logic across many systems (e.g. head-based integration or a custom integration layer)
  • Encourages reuse of data and content: it offers data-as-a-service, so you can focus on how to activate it on your channels.
  • May provide contextual intelligence to control and personalize API responses to individual visitors in a dedicated data layer to power tailored omnichannel experiences. 

We have it all, so what’s next?

Seems like we have everything we need here, what else is there? Let’s (re)package up the first three capabilities into a larger topic – stay tuned for part 4 where we will talk about Digital Experience Composition (DXC).

]]>
https://blogs.perficient.com/2024/05/06/composable-martech-orchestration-federation/feed/ 0 362120
Composable Martech: Experience Builders https://blogs.perficient.com/2024/04/18/composable-martech-experience-builders/ https://blogs.perficient.com/2024/04/18/composable-martech-experience-builders/#respond Thu, 18 Apr 2024 19:20:54 +0000 https://blogs.perficient.com/?p=361485

Welcome back for Part 2 in a series on composable martech, where we unpack the stack into the emerging capabilities that make modern architectures. This time we’re talking about Experience Builders, which go hand-in-hand with Part 1’s topic, Front End as a Service.

Experience Builders have been around for a long time. In fact, the concept of drag and drop or point and click page template creation has been around for ages, going back to early content management systems as the visual on-page editor experience, very common with traditional monolithic all-in-one (!) suites. In this context we’re talking about Experience Builders in a composable stack, where the underlying CMS may not be an all-in-one solution or a hybrid headless solution.

As I mentioned, Experience Builders go together with the prior topic, Front End as a Service (FEaaS). While FEaaS focuses on the individual UI component creation, design, and data binding, Experience Builders allow business users to compose multiple components together to form bespoke web pages or reusable page templates. In fact, many FEaaS providers also offer the Experience Builder capability, allowing users to build the repository of components and assemble them together into page layouts.

Traditional Experience Builders

As a baseline of understanding, let’s first look at experience builders that come embedded in traditional CMS/DXP solutions – whether you consider them to be monolithic, all-in-one, unified, etc.

Traditional experience builders are typically offered as the out-of-the-box visual in-line on-page editors provided with a CMS (or DXP). They often support either drag-and-drop or point-and-click page layout creation with placeholders or content areas to assign UI components. The degree of design freedom is often configurable, from a blue-sky blank slate where you can design the layout of the whole page, to what I like to call “paint by numbers,” where the page layout and components are fixed and content just needs to be entered.

Below are a few examples of traditional experience builders that you may have seen or worked with over the years.

Sitecore’s Experience Editor (fka Page Editor) is the point-and-click turned drag-and-drop visual editor of the XP product with the ability to in-line edit content and add UI components into page-level placeholders:

Sitecore Editor1

Sitecore Editor2

Optimizely’s On Page Editor provides similar capabilities as Sitecore with on-page in-line editing and placement of content and blocks into content areas:

Optimizely Editor

HubSpot’s SaaS CMS has a visual editor as well, likely what you would expect from a CMS that manages and delivers its own content:

Hubspot Editor

Composable Experience Builders

Now let’s move on to modern composable experience builders since that’s the topic of this series. This is where things get a bit blurry between traditional suite providers going composable and headless-first pure play point solutions. Monolithic-turned-composable DXP players (if you believe that’s even a thing) and pure play headless CMSs are both “meeting in the middle” (a great Deane Barker phrase that I completely agree with) with a lot of emphasis on Visual Editing with headless-based solutions. This is a big topic that is being discussed a lot, recently from industry veteran Preston So with the idea of a Universal CMS. I’ve even written about it before as an emerging trend in the space.

As you may know, headless CMSs were initially celebrated by developers for their ability to use web APIs to fetch content, freeing developers from the underlying CMS platform technology. However, this came at a price for marketers, who often lost the visual editing ability we saw in the traditional experience builders above. Well, it’s 2024 now, and times have changed. Many solutions are turning to visual editors that work with headless technology to bring the power back to the marketers while keeping developers happy with modern web APIs. Let’s take a look at some examples across the wide spectrum of solutions.

dotCMS’s Universal Visual Editor

Dotcms Editor

dotCMS’s new Universal Visual Editor is an on-page in-line editing app built to support headless sites, including SPAs, and other external apps that may fetch content from dotCMS.

AEM’s Universal Editor

Aem Editor

AEM’s new Universal Editor supports visual editing for any type of AEM implementation, from server-side, client side, React, Angular, etc.

Contentful Studio

Contentful Studio

Contentful Studio includes a newly released visual builder called Experiences. This is a very interesting example of a traditional pure play headless CMS clearly pushing upmarket, innovating to capture the needs of marketers with this no-code drag-and-drop tooling.

Optimizely’s Visual Builder

Opti Visual Builder

Optimizely’s yet-to-be-released Visual Builder (likely arriving in H2 of 2024) appears to follow suit with the likes of Adobe, dotCMS, and other traditional suites by offering a SaaS-based editor for headless implementations. It will likely arrive after the release of the upcoming SaaS CMS offering.

Sitecore XM Cloud Components & Pages

Xmc Pages

Sitecore XM Cloud’s Components and Pages go hand-in-hand: FEaaS and Experience Builder working in harmony. Interestingly, these interfaces not only support Sitecore-based components – built-in UI components (OOTB), Sitecore SDK-developed components (via developers), or FEaaS components (via authors) – they also allow developers to “wrap” external components so marketers can author with them, bringing a whole new level of flexibility outside the world of Sitecore with bring your own components (BYOC). This takes composability to a whole new level when coupled with external data sources for component data binding.

Uniform

Uniform Visual Workspace

In many ways Uniform lives in a category of its own. We’ll unpack that (pun intended) a bit more in a future article in this series. That said, one of the most well-known capabilities of Uniform is its Visual Workspace. Uniform empowers marketers through the visual editor by focusing on the expression of the underlying content and data, bringing the actual experience to the forefront. There’s much more to be said about Uniform on this topic, so stay tuned.

Netlify Create & Builder.io

As I mentioned in the prior article on FEaaS, Netlify Create and Builder.io both offer modern composable Experience Builders that are very similar in nature but have some notable nuances:

  • Builder.io goes to market as a Visual Headless CMS, offering the experience builder visual editing with its own CMS under the hood so you are not required to bring in another tool to manage the content. However, Builder.io also supports using an outside CMS such as Contentful, Contentstack, and Kontent.ai
  • Create provides an Experience Builder canvas that works solely with external CMSs supporting Contentful, Sanity, DatoCMS, and other custom options available

Features of a modern Experience Builder

So, what features do modern experience builders provide?

  • Design reusable page layout templates through structural components (columns, splitters, etc.) on a visual canvas
  • Assemble and arrange page layouts with UI components (often built via FEaaS tools)
  • Configure visual design systems for consistent look and feel when multiple components come together, for example, typefaces, colors, sizing, padding/margin, etc.
  • Ability to communicate with multiple back-ends to compose an experience sourced from multiple systems (e.g. a product detail page sourcing data from a CMS, PIM, and DAM)
    • Note: many pure play providers offer this while some suite providers are “closed” to their own content – but perhaps other features in the stack can solve this. Keep reading this series to learn more 🤔
  • Flexibility to build both reusable page layout templates and one-off bespoke pages (e.g. unique landing pages such as campaigns)

The combination of FEaaS and an Experience Builder is very common in a composable stack made up of many headless products, as it empowers business users without needing developers to implement integrations with back-end APIs. I’ve written about this topic before in the CMSWire article “5 Visual Editing Trends in Composable Martech Tools.” At the end of the day, Experience Builders are the “visual editor” that was missing in the early days of pure play headless CMSs, which mostly favored a great developer experience with elegant APIs and SDKs.

What’s next?

Where are we going to get all of the content from? It’s all about the data! The next topic will cover the underlying systems that can serve the right content and data to these experiences. Stay tuned to learn more about orchestration and federation!

]]>
https://blogs.perficient.com/2024/04/18/composable-martech-experience-builders/feed/ 0 361485