Lightning Web Security (LWS) in Salesforce
https://blogs.perficient.com/2025/12/05/lightning-web-security-lws-in-salesforce/

What is Lightning Web Security?

Lightning Web Security (LWS) is Salesforce’s modern client-side security architecture designed to secure Lightning Web Components (LWC) and Aura components. Introduced as an improvement over the older Lightning Locker service, LWS enhances component isolation with better performance and compatibility with modern web standards.

Key Features of LWS

  • Namespace isolation: Each Lightning web component runs in its own JavaScript sandbox, preventing unauthorized access to data or code from other namespaces.

  • API distortion: LWS modifies standard JavaScript APIs dynamically to enforce security policies without breaking developer experience.

  • Supports third-party libraries: Unlike Locker, LWS allows broader use of community and open-source JS libraries (see the sketch after this list).

  • Default in new orgs: Enabled by default for all new Salesforce orgs created from Winter ’23 release onwards.
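
To illustrate the third-party library point, here is a minimal Lightning web component sketch that loads a charting library from a static resource. The component and the chartJs resource name are hypothetical; loadScript is the standard lightning/platformResourceLoader helper. Under LWS, the loaded script runs against the component's sandboxed globals rather than the real window.

// chartCard.js - hypothetical component loading a third-party library
import { LightningElement } from 'lwc';
import { loadScript } from 'lightning/platformResourceLoader';
import CHART_JS from '@salesforce/resourceUrl/chartJs'; // assumed static resource

export default class ChartCard extends LightningElement {
    chartInitialized = false;

    async renderedCallback() {
        if (this.chartInitialized) {
            return;
        }
        this.chartInitialized = true;
        try {
            // LWS evaluates the script in this component's sandbox, so any
            // globals it defines (e.g., window.Chart) are namespace-scoped.
            await loadScript(this, CHART_JS);
        } catch (error) {
            console.error('Failed to load third-party library', error);
        }
    }
}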

Benefits of Using LWS

  • Stronger security: Limits cross-component and cross-namespace vulnerabilities.

  • Improved performance: Reduced overhead compared to Locker’s wrappers, resulting in faster load times for users.

  • Better developer experience: Easier to build robust apps without excessive security workarounds.

  • Compatibility: Uses the latest web standards and works well with modern browsers and tools.

How to Enable LWS in Your Org

  1. Navigate to Setup > Session Settings in Salesforce.

  2. Enable the checkbox for Use Lightning Web Security for Lightning web components and Aura components.

  3. Save settings and clear browser cache to ensure the change takes effect.

  4. Test your Lightning components thoroughly, ideally starting in a sandbox environment before deploying to production.

Best Practices for Working with LWS

  • Test extensively: Some existing components may require minor updates due to stricter isolation.

  • Use the LWS Console: Salesforce provides developer tools to inspect and debug components under LWS.

  • Follow secure coding guidelines: Maintain the least-privilege principle and avoid direct DOM manipulations (see the sketch after this list).

  • Plan migration: Gradually transition from Lightning Locker to LWS, if upgrading older orgs.

  • Leverage Third-party Libraries Wisely: Confirm compatibility with LWS to avoid runtime errors.
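
As an example of avoiding direct DOM manipulation, the hypothetical component below queries its own template instead of the global document, whose APIs LWS distorts:

// statusBadge.js - hypothetical component that stays in its own shadow tree
import { LightningElement } from 'lwc';

export default class StatusBadge extends LightningElement {
    renderedCallback() {
        // Avoid document.querySelector('.status'); under LWS, global DOM
        // access is distorted. Query the component's own template instead.
        const el = this.template.querySelector('.status');
        if (el) {
            el.classList.add('is-active');
        }
    }
}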

Troubleshooting Common LWS Issues

  • Components failing due to namespace restrictions.

  • Unexpected behavior with third-party libraries.

  • Performance bottlenecks during initial page loading.

Utilize Salesforce’s diagnostic tools, logs, and community forums for support.

Resources for Further Learning

Salesforce Custom Metadata getInstance vs SOQL: Key Differences & Best Practices
https://blogs.perficient.com/2025/12/05/salesforce-custom-metadata-getinstance-vs-soql/

Salesforce provides powerful features to handle metadata, allowing you to store and access configuration data in a structured manner. In this blog, we explore Salesforce Custom Metadata getInstance vs SOQL—two key approaches developers use to retrieve custom metadata efficiently. Custom metadata types in Salesforce offer a great way to define reusable and customizable application data without worrying about governor limits that come with other storage solutions, like custom objects. For more details, you can visit the official Salesforce Trailhead Custom Metadata Types module. We will delve into the differences, use cases, and best practices for these two approaches.

What is Custom Metadata in Salesforce?

Custom metadata types are custom objects in Salesforce that store metadata or configuration data. Unlike standard or custom objects, they are intended for storing application configurations that don’t change often. These types are often used for things like:

  • Configuration settings for apps
  • Defining global values (like API keys)
  • Storing environment-specific configurations
  • Reusable data for automation or integrations

Custom metadata records can be easily managed via Setup, the Metadata API, or Apex.

Approach 1: Using getInstance()

getInstance() is a method that allows you to access a single record of a custom metadata type. It works on a “singleton” basis, meaning that it returns a specific instance of the custom metadata record.

How getInstance() Works

The getInstance() method is typically used when you’re looking to retrieve a single record of custom metadata in your code. This method is not intended to query multiple records or create complex filters. Instead, it retrieves a specific record directly, based on the provided developer name.

Example:

// Get a specific custom metadata record by its developer name
My_Custom_Metadata__mdt metadataRecord = My_Custom_Metadata__mdt.getInstance('My_Config_1');

// Access fields of the record
String configValue = metadataRecord.Config_Value__c;

When to Use getInstance()

  • Single Record Lookup: If you know the developer name of the record you’re looking for and expect to access only one record.
  • Performance: Since getInstance() is optimized for retrieving a single metadata record by its developer name, it can offer better performance than querying all records, especially when you only need one record.
  • Static Configuration: Ideal for use cases where the configuration is static, and you are sure that the metadata record will not change often.

Advantages of getInstance()

  • Efficiency: It’s quick and easy to retrieve a single metadata record when you already know the developer name.
  • Less Complex Code: This approach requires fewer lines of code and simplifies the logic, particularly in configuration-heavy applications.

Limitations of getInstance()

  • Single Record: It can only retrieve one record at a time (though see the getAll() sketch after this list).
  • No Dynamic Querying: It does not support complex filtering or dynamic querying like SOQL.
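
If you need every record of the type without writing SOQL, custom metadata types also expose a getAll() method, which returns all records as a map keyed by developer name. A minimal sketch, reusing the My_Custom_Metadata__mdt type from the earlier examples:

// Retrieve all records of the custom metadata type without a SOQL query
Map<String, My_Custom_Metadata__mdt> allConfigs = My_Custom_Metadata__mdt.getAll();
for (My_Custom_Metadata__mdt rec : allConfigs.values()) {
    System.debug(rec.DeveloperName + ' => ' + rec.Config_Value__c);
}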

Approach 2: Using SOQL Queries

SOQL (Salesforce Object Query Language) is the standard way to retrieve multiple records in Salesforce, including custom metadata records. By using SOQL, you can query a custom metadata type much like any other object in Salesforce, providing flexibility in how records are retrieved.

How SOQL Queries Work

With SOQL, you can write queries that return multiple records, filter based on field values, or sort the records as needed. For instance:

// Query for multiple custom metadata records with SOQL
List<My_Custom_Metadata__mdt> metadataRecords = [SELECT MasterLabel, Config_Value__c FROM My_Custom_Metadata__mdt WHERE Active__c = TRUE];

// Loop through records and access their values
for (My_Custom_Metadata__mdt record : metadataRecords) {
    System.debug('Label: ' + record.MasterLabel + ', Value: ' + record.Config_Value__c);
}

When to Use SOQL Queries

  • Multiple Records: If you need to retrieve more than one record or apply filters to the query.
  • Dynamic Queries: When the records you’re querying are dynamic (e.g., based on user input or other logic).
  • Complex Criteria: If you need to use conditions like WHERE, ORDER BY, or join metadata with other objects.

Advantages of SOQL Queries

  • Flexibility: SOQL queries allow you to retrieve multiple records based on complex conditions.
  • Filtering and Sorting: You can easily filter and sort records to get the exact data you need.
  • Dynamic Usage: Ideal for cases where the data or records you’re querying may change, such as pulling all active configuration records.

Limitations of SOQL Queries

  • Governor Limits: SOQL queries are subject to Salesforce’s governor limits (e.g., the number of records returned and the number of queries per transaction).
  • Complexity: Writing and managing SOQL queries might introduce additional complexity in the code, especially when dealing with large datasets.

Key Differences: getInstance() vs. SOQL Queries

Aspect              | getInstance()                                    | SOQL Query
--------------------|--------------------------------------------------|------------------------------------------------------------
Purpose             | Retrieves a single record by developer name      | Retrieves multiple records with flexibility
Performance         | Faster for a single record lookup                | Slower when retrieving many records
Use Case            | Static configuration data, single record lookup  | Dynamic and multiple record retrieval
Complexity          | Simple, minimal code                             | More complex, requires query handling
Filtering & Sorting | None, only by developer name                     | Supports filtering, sorting, and conditions
Governor Limits     | Doesn't count against query limits               | Subject to governor limits (e.g., 50,000 records per query)

Best Practices for Using getInstance() and SOQL

  • Use getInstance() when you need to access one specific metadata record and know the developer name beforehand. It’s efficient and optimized for simple lookups.
  • Use SOQL when you need to filter, sort, or access multiple metadata records. It’s more flexible and ideal for dynamic scenarios, but you should always be aware of governor limits to avoid hitting them.
  • Combine the Two: In some cases, you can use getInstance() for fetching critical single configuration records and SOQL for retrieving a list of configuration settings, as in the sketch after this list.
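
A minimal sketch of that combined approach, assuming the same My_Custom_Metadata__mdt type and fields used above (the ConfigService class name is hypothetical):

// Combining both approaches in one service class
public with sharing class ConfigService {

    // Fast, limit-free lookup when the developer name is known up front.
    public static String getCriticalValue() {
        My_Custom_Metadata__mdt rec = My_Custom_Metadata__mdt.getInstance('My_Config_1');
        return rec != null ? rec.Config_Value__c : null;
    }

    // Flexible retrieval when the record set depends on a filter.
    public static List<My_Custom_Metadata__mdt> getActiveConfigs() {
        return [
            SELECT MasterLabel, Config_Value__c
            FROM My_Custom_Metadata__mdt
            WHERE Active__c = TRUE
        ];
    }
}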

Conclusion

Both getInstance() and SOQL queries have their strengths when it comes to working with custom metadata types in Salesforce. Understanding when to use each will help optimize your code and ensure that your Salesforce applications run efficiently. For simple, static configurations, getInstance() is the way to go. For dynamic, large, or complex datasets, SOQL queries will offer the flexibility you need. By carefully selecting the right approach for your use case, you can harness the full power of Salesforce custom metadata.

Agentforce World Tour Chicago: How AI and Data Are Powering Manufacturing’s Next Chapter
https://blogs.perficient.com/2025/12/02/agentforce-world-tour-chicago-how-ai-and-data-are-powering-manufacturings-next-chapter/

AI is no longer optional for manufacturers. It is the dividing line between industry leaders and those falling behind. Companies that embrace AI and data are setting the pace for efficiency, customer engagement, and growth. Those that delay risk losing relevance in a market that rewards speed, precision, and innovation.

Perficient will join industry leaders at the Agentforce World Tour in Chicago on December 16. Salesforce will showcase its most advanced capabilities, including Agentforce, Slack, and Data 360. These solutions give manufacturers the power to predict demand, automate decisions, and deliver connected experiences that drive measurable results across the entire value chain.

Why Attend Agentforce World Tour Chicago

Chicago is where top industries come together to lead what’s next. At Agentforce World Tour, you will explore sessions and solutions built for the sectors that define this city, including healthcare and life sciences, retail and consumer goods, and manufacturing. You will see real use cases, dive into emerging trends, connect with peers, and gain insights from experts who are shaping the future. You will leave with practical strategies and a roadmap for growth.

New to Salesforce? Start here.
Maybe you attended your first Dreamforce this year and want to get more hands-on. Agentforce World Tour is the perfect next step. This event gives you a closer look at what Salesforce can do for your business. You will learn how the latest agentic and AI innovations drive real results. Hear from customers, explore live demos, and see how Salesforce helps you unlock productivity, accelerate growth, and deliver exceptional customer experiences.

What’s Next for Manufacturing: From Products to Services With Servitization

Perficient is committed to helping manufacturers turn AI and data into measurable results. Here’s where you can connect with us and gain practical strategies for your business:

Transforming Manufacturing Aftermarket: From Products to Services with Servitization
Date: December 4 at 1:00 PM EST
Location: Online | Registration link coming soon
Join these industry experts for our upcoming webinar:

  • Sarah McDowell, Director, Perficient
  • Lester McHargue, Director of Manufacturing, Perficient
  • Pete Niesen, Sr. Director, Business Strategy Consulting, Salesforce

They will explore how servitization and connected digital experiences are transforming the manufacturing and equipment aftermarket. Learn how Salesforce, data, and AI enable new revenue streams, predictive maintenance, and automated support. Walk away with practical strategies to deliver proactive, data-driven services that boost loyalty, satisfaction, and profitability long after the initial sale. Register here.

Ready to take the next step?
If you want to learn more about servitization and how it can transform your aftermarket strategy, download our Manufacturing Servitization Workshop Guide for practical insights and a roadmap to success. And don’t forget to register for Agentforce World Tour Chicago—it’s free, but registration is required. Send us a note here if you would like to connect during World Tour. Secure your spot today and join us for a day of innovation, hands-on learning, and real-world strategies that will help you lead what’s next in manufacturing.

Join Perficient at Agentforce World Tour New York: Build What’s Next
https://blogs.perficient.com/2025/12/02/join-perficient-at-agentforce-world-tour-new-york-build-whats-next/

Close Out the Year. Start Building the Next.

As 2025 winds down, the smartest companies aren’t just looking back; they’re planning. The future of business is agentic, and the time to prepare is now.

Join us at Agentforce World Tour New York on December 10, 2025, and experience the innovation that defined Dreamforce, live in NYC. In just one day, you’ll get:

  • 140+ expert-led sessions, demos, and hands-on trainings
  • A front-row look at Salesforce’s biggest launches, including Agentforce 360, Slack, and Data 360
  • Practical ways companies are increasing productivity, accelerating growth, and modernizing customer experiences

All free. All designed to help you turn future plans into action. Register for World Tour here!

Why Attend Agentforce World Tour NYC?

NYC is where Salesforce is pushing the next wave of agentic AI. If you want a real, unfiltered look at how companies are applying Agentforce and Data 360 to drive revenue, speed, and operational lift, this is the event.

You’ll walk away with:

  • Clear, proven examples of agentic AI driving results
  • Direct access to Salesforce product experts and industry innovators
  • Practical steps you can immediately apply to your own AI roadmap

More Ways to Connect During World Tour Week

Agentforce World Tour NYC is just the start. We’re hosting and joining exclusive experiences throughout the week to help you dive deeper into AI, data, and the future of agentic business.

December 10 – Agentforce Champions Breakfast

Start your World Tour experience with an exclusive breakfast for Agentforce champions and power users across leading industries. Connect with peers, share insights, and engage directly with Salesforce leaders. Perficient’s Allie Vaughan will be on-site to share how we’re helping organizations harness agentic AI for real business impact.

Wednesday, December 10, 2025 | 8:30 AM – 10:00 AM EST
Onsite at World Tour Javits Center | 429 11th Ave, New York, NY 10001
Register Here → World Tour NYC Agentforce Champions Breakfast

December 10 – Perficient Breakfast at Russ & Daughters

Join us for a relaxed pre-event meetup at one of NYC’s most iconic spots. Enjoy great conversation and connect with Perficient experts and fellow attendees before the main World Tour sessions begin.

Wednesday, December 10 | 8:00 AM – 10:30 AM EST
Russ & Daughters, NYC | 502 W 34th St., New York, NY 10001
Contact Us for an Invite → Save Your Spot

December 11 – Data 360 + Agentforce Workshops at Salesforce Tower

Take your World Tour experience further with a hands-on workshop designed to help you unlock the full potential of Data 360 and Agentforce. Guided by Perficient’s AI and Data 360 specialists Allie Vaughan and Anu Pandey, you’ll go beyond theory with practical strategies you can apply immediately.

Thursday, December 11 | 10:00 AM – 2:00 PM EST
Salesforce Tower New York | 1095 6th Ave, New York, NY 10036
Contact Allie and Anu for an Invite

December 12 – Datablazer Mastery Onsite

Wrap up the week with Salesforce’s full-day enablement experience designed for the Datablazer Community. Deepen your expertise in Data 360 and Agentforce with hands-on learning.

December 12, 2025 | 8:30 AM – 4:30 PM EST
Salesforce Tower New York | 1095 6th Ave, New York, NY 10036
Register Here → Datablazer Mastery Onsite: Agentforce Edition NYC

Your Next Step Toward an Agentic Future

Agentforce World Tour NYC is your chance to see where the Salesforce platform is going and how quickly companies are adapting. From the main event to hands-on workshops, this week offers a complete view of what it takes to operate as an agentic enterprise.

Follow Perficient on LinkedIn for event updates, key takeaways, and our latest insights on Agentforce, Data 360, and the future of AI-driven business.

5 Imperatives Financial Leaders Must Act on Now to Win in the Age of AI-Powered Experience
https://blogs.perficient.com/2025/12/02/5-imperatives-financial-leaders-must-act-on-now-to-win-in-the-age-of-ai-powered-experience/

Financial institutions are at a pivotal moment. As customer expectations evolve and AI reshapes digital engagement, leaders in marketing, CX, and IT must rethink how they deliver value.

Adobe’s report, “State of Customer Experience in Financial Services in an AI-Driven World,” reveals that only 36% of the customer journey is currently personalized, despite 74% of executives acknowledging rising customer expectations. With transformation already underway, financial leaders face five imperatives that demand immediate action to drive relevance, trust, and growth.

1. Make Personalization More Meaningful

Personalization has long been a strategic focus, but today’s consumers expect more than basic segmentation or name-based greetings. They want real-time, omnichannel interactions that align with their financial goals, life stages, and behaviors.

To meet this demand, financial institutions must evolve from reactive personalization to predictive, intent-driven engagement. This means leveraging AI to anticipate needs, orchestrate journeys, and deliver content that resonates with individual context.

Perficient Adobe-consulting principal Ross Monaghan explains, “We are still dealing with disparate data and slow progression into a customer 360 source of truth view to provide effective personalization at scale. What many firms are overlooking is that this isn’t just a data issue. We’re dealing with both a people and process issue where teams need to adjust their operational process of typical campaign waterfall execution to trigger-based and journey personalization.”

His point underscores that personalization challenges go beyond technology. They require cultural and operational shifts to enable real-time, AI-driven engagement.

2. Redesign the Operating Model Around the Customer

Legacy structures often silo marketing, IT, and operations, creating friction in delivering cohesive customer experiences. To compete in a digital-first world, financial institutions must reorient their operating models around the customer, not the org chart.

This shift requires cross-functional collaboration, agile workflows, and shared KPIs that align teams around customer outcomes. It also demands a culture that embraces experimentation and continuous improvement.

Only 3% of financial services firms are structured around the customer journey, though 19% say that would be the ideal structure.

3. Build Content for AI-Powered Search

As AI-powered search becomes a primary interface for information discovery, the way content is created and structured must change. Traditional SEO strategies are no longer enough.

Customers now expect intelligent, personalized answers over static search results. To stay visible and trusted, financial institutions must create structured, metadata-rich content that performs in AI-powered environments. Content must reflect E-E-A-T principles (experience, expertise, authoritativeness, trustworthiness) and be both machine-readable and human-relevant. Success depends on building discovery journeys that work across AI interfaces while earning customer confidence in moments that matter.

4. Unify Data and Platforms for Scalable Intelligence

Disconnected data and fragmented platforms limit the ability to generate insights and act on them at scale. To unlock the full potential of AI and automation, financial institutions must unify their data ecosystems.

This means integrating customer, behavioral, transactional, and operational data into a single source of truth that’s accessible across teams and systems. It also involves modernizing MarTech and CX platforms to support real-time decisioning and personalization.

But Ross points out, “Many digital experience and marketing platforms still want to own all data, which is just not realistic, both in reality and cost. The firms that develop their customer source of truth (typically cloud-based data platforms) and signal to other experience or service platforms will be the quickest to marketing execution maturity and success.”

His insight emphasizes that success depends not only on technology integration but also on adopting a federated approach that accelerates marketing execution and operational maturity.

5. Embed Guardrails Into GenAI Execution

As financial institutions explore GenAI use cases, from content generation to customer service automation, governance must be built in from the start. Trust is non-negotiable in financial services, and GenAI introduces new risks around accuracy, bias, and compliance.

Embedding guardrails means establishing clear policies, human-in-the-loop review processes, and robust monitoring systems. It also requires collaboration between legal, compliance, marketing, and IT to ensure responsible innovation.

At Perficient, we use our PACE (Policies, Advocacy, Controls, Enablement) Framework to holistically design tailored operational AI programs that empower business and technical stakeholders to innovate with confidence while mitigating risks and upholding ethical standards.

The Time to Lead is Now

The future of financial services will be defined by how intelligently and responsibly institutions engage in real time. These five imperatives offer a blueprint for action, each one grounded in data, urgency, and opportunity. Leaders who move now will be best positioned to earn trust, drive growth, and lead in the AI-powered era.

Learn About Perficient and Adobe’s Partnership

Are you looking for a partner to help you transform and modernize your technology strategy? Perficient and Adobe bring together deep industry expertise and powerful experience technologies to help financial institutions unify data, orchestrate journeys, and deliver customer-centric experiences that build trust and drive growth.

Get in Touch With Our Experts

Building with Sitecore APIs: From Authoring to Experience Edge
https://blogs.perficient.com/2025/11/28/building-with-sitecore-apis-from-authoring-to-experience-edge/

Sitecore has made a significant shift towards a fully API-first, headless-friendly architecture. This modern approach decouples content management from delivery, giving developers unprecedented flexibility to work with content from virtually anywhere—be it front-end applications, backend systems, integration services, or automated pipelines.

One of the biggest advantages of this shift is that you no longer need server-side access to Sitecore to manipulate content. Instead, the system exposes a robust set of APIs to support these powerful new use cases.

Sitecore provides three key APIs, each designed for a specific purpose: the Experience Edge Delivery API, the Authoring API, and the Management API. Understanding how these APIs relate and differ is crucial for designing robust external integrations, building sync services, and managing your content programmatically.

This blog provides a practical, end-to-end view of how these APIs fit into modern architectures. We will specifically walk through how any external system can call the Authoring API using GraphQL, and how to execute common GraphQL mutations such as create, update, delete, rename, and move. If you’re building integration services or automation pipelines for SitecoreAI, this will give you a complete picture of what’s possible.

Sitecore’s modern architecture separates content operations into three distinct API layers. This crucial separation is designed to ensure scalability, security, and clear responsibility boundaries across the content lifecycle.

Let’s break down the purpose and typical use case for each API:

1. Experience Edge Delivery API

The Experience Edge Delivery API is Sitecore’s public-facing endpoint, dedicated purely to high-performance content delivery.

  • Primary Use: Used primarily by your front-end applications (e.g., Next.js, React, mobile apps) and kiosks to fetch published content for your presentation layer.

  • Core Function: It is fundamentally read-only and does not support content creation or modification.

  • Interface: Exposes a GraphQL endpoint that allows for querying items, fields, and components efficiently.

  • Authentication: Requires minimal or no complex authentication (often just an API key) when fetching published content, as it is designed for global, low-latency access.

  • Endpoint: https://edge.sitecorecloud.io/api/graphql/v1

2. Authoring API (GraphQL)

The Authoring API is the control center for all item-level content management operations from external systems.

  • Primary Use: This is the API you use when building integration pipelines, external systems or third-party applications that need to manipulate content programmatically.

  • Core Functions: It allows external systems to perform the same operations authors execute in the CMS UI, including:

    • Create, update, and delete items.

    • Rename or move items.

    • Manage media assets.

    • Work with workflows and language settings.

  • Interface: Exposed through a dedicated GraphQL endpoint that supports both queries and mutations.

  • Authentication: All calls must be authenticated. The recommended secure approach is using OAuth’s client_credentials flow to obtain a Bearer JWT access token, as detailed in Sitecore’s security documentation.

  • Endpoint Structure: The endpoint is hosted on your Content Management (CM) instance, following a structure like:
    https://your-cm-instance/sitecore/api/authoring/graphql/v1

3. Management API

The Management API supports all administrative, system, and environment-level capabilities.

  • Primary Use: Often used in CI/CD pipelines, server-side processes, and automated scripts for environment maintenance.

  • Core Functions: These include operations that affect the system state or background jobs, such as:

    • Triggering content publishing jobs.

    • Running index rebuilds.

    • Managing environment metadata and background jobs.

    • Generating access tokens (such as through the client_credentials flow).

  • Interface: It shares the same GraphQL endpoint as the Authoring API.

  • Endpoint: same as Authoring API.
    Note: The distinction between the Authoring and Management API is primarily managed by the OAuth scopes assigned to the access token used for authentication, not by a different URL.
  • Relationship to Authoring: While it doesn’t handle item-level content edits, it works alongside the Authoring API to support a full content lifecycle, such as writing content (Authoring API) followed by publishing it (Management API).

Enabling the Authoring and Management APIs: The Prerequisites

Before we can send our first GraphQL mutation to manage content, we have to handle the setup and security. The prerequisites for enabling the Authoring and Management APIs are slightly different depending on your Sitecore environment, but the end goal is the same: getting a secure access token.

Sitecore XM Cloud / SitecoreAI

If you’re on a cloud-native platform like XM Cloud or SitecoreAI, the GraphQL endpoints are already up and running. You don’t have to fiddle with configuration files. Your main focus is on authorization:

  1. Generate Credentials: You need to use the Sitecore interface (often in the Manage or Connect section) to generate a set of Client Credentials (a Client ID and a Client Secret). These are your secure “keys” to the content.

  2. Define Scopes: When you generate these credentials, you must ensure the associated identity has the appropriate OAuth scopes. For instance, you’ll need scopes like sitecore.authoring and sitecore.management to be included in your token request. This is what tells the system what your application is actually allowed to do (read, write, or publish).

Sitecore XM /XP

For traditional, self-hosted Sitecore XM installations, you have a small administrative step to get the endpoints operational:

  1. Enable the Endpoint: You need to deploy a simple configuration patch file. This patch explicitly enables the API endpoint itself and often the helpful GraphQL Playground IDE (for easy testing). You’ll typically set configuration settings like these in your CM instance (a full patch sketch follows this list):

    <setting name="GraphQL.Enabled" value="true" /> <setting name="GraphQL.ExposePlayground" value="true" />

  2. Configure Identity Server: Similar to XM Cloud, you then need to register your client application with your Sitecore Identity Server. This involves creating a client record in your IDS configuration that specifies the required allowedGrantTypes (like client_credentials) and the necessary allowedScopes (sitecore.authoring, etc.).
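
For reference, a minimal patch file wrapping those settings in the standard Sitecore include structure might look like the sketch below. The wrapper is the conventional patch format; confirm the exact setting names against the documentation for your Sitecore version.

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <setting name="GraphQL.Enabled" value="true" />
      <setting name="GraphQL.ExposePlayground" value="true" />
    </settings>
  </sitecore>
</configuration>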

Whether you’re on SitecoreAI or Sitecore XP/XM, the biggest hurdle is obtaining that secure JWT Bearer token. Every request you send to the Authoring and Management APIs must include this token in the Authorization header. We’ll dive into the client_credentials flow for getting this token in the next section.

For the absolute definitive guide on the steps specific to your environment, always check the official documentation: Sitecore XM Cloud / SitecoreAI and Sitecore XP/XM.

Authoring API – Authentication, Requests, and Query Examples

The Authoring API exposes the full set of content-management capabilities through GraphQL. Because these operations can modify items, media, workflows, and other critical pieces of the content tree, every request must be authenticated. The Authoring API uses OAuth, and the recommended approach is the client_credentials flow.

To authorize a request, you first create a client in the Sitecore Cloud Portal. This client gives you a client_id and client_secret. Once you have them, you request an access token from the token endpoint:

POST https://auth.sitecorecloud.io/oauth/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
client_id=your_client_id
client_secret=your_client_secret
audience=https://api.sitecorecloud.io

The response contains an access_token and an expiry. This token is then passed in the Authorization header for all subsequent GraphQL calls.
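
A successful token response looks roughly like this (field names follow the OAuth 2.0 spec; the token value and expiry below are placeholders):

{
  "access_token": "eyJhbGciOi...",
  "token_type": "Bearer",
  "expires_in": 86400
}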

The Authoring API endpoint accepts only POST requests with a JSON body containing a GraphQL query or mutation. A typical request looks like this:

POST https://your-tenant.sitecorecloud.io/api/authoring/graphql
Authorization: Bearer your_access_token
Content-Type: application/json
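
The JSON body carries the GraphQL document and its variables. For example, a request body invoking the DeleteItem mutation shown later in this post would look like this (the item ID is a placeholder):

{
  "query": "mutation DeleteItem($itemID: ID!) { deleteItem(input: { itemId: $itemID, permanently: false }) { successful } }",
  "variables": { "itemID": "<item-id>" }
}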

However, the real value of the Authoring API comes from the mutations it supports. These mutations allow external systems to take over tasks that were traditionally only possible inside the CMS. They enable you to create new content, update fields, delete obsolete items, restructure information architecture, or even rename and move items. For integrations, sync services, or automated workflows, these mutations become the core building blocks. Below are a few mutations that can be helpful:

1. Create Item:

mutation CreateItem(
  $name: String!
  $parentId: ID!
  $templateId: ID!
  $fields: [FieldValueInput]!
) {
  createItem(
    input: {
      name: $name
      parent: $parentId
      templateId: $templateId
      fields: $fields
    }
  ) {
    item {
      ItemID: itemId
      ItemName: name
    }
  }
}

Input Variables:

{
  "name": "<item-name>",
  "parentId": "<parent-item-id>",
  "templateId": "<template-id>",
  "fields": [
    { "name": "title", "value": "Contact US" },
    { "name": "text", "value": "Contact US Here" }
  ]
}

2. Update Item:

mutation UpdateItem(
  $id: ID!
  $database: String!
  $language: String!
  $fields: [FieldValueInput!]
) {
  updateItem(
    input: {
      itemId: $id
      database: $database
      language: $language
      version: 1
      fields: $fields
    }
  ) {
    item {
      ItemID: itemId
      ItemName: name
    }
  }
}


Input Variables:
{
  "id": "<item-id>",
  "database": "master",
  "language": "en",
  "fields": [
    { "name": "Title", "value": "New Title" },
    { "name": "Content", "value": "New Content" }
  ]
}
3. Rename Item:
mutation RenameItem(
  $id: ID!, 
  $database: String!,
  $newName: String!
) {
  renameItem(
    input: {
      itemId: $id,
      database: $database, 
      newName: $newName 
  }
) {
    item {
      ItemID: itemId
      ItemName: name
    }
  }
}

Input Variables:
{
  "id": "<item-id>",
  "database": "master",
  "newName": "<new-item-name>"
}
4. Move Item:
mutation MoveItem($id: ID!, $targetParentId: ID!) {
  moveItem(input: { itemId: $id, targetParentId: $targetParentId }) {
    item {
      ItemID: itemId
      ItemName: name
    }
  }
}

Input Variables:
{
  "id": "<item-id>",
  "targetParentId": "<target-parent-item-id>"
}
5. Delete Item:
mutation DeleteItem($itemID: ID!) {
  deleteItem(input: { itemId: $itemID, permanently: false }) {
    successful
  }
}

Input Variables:
{
  "itemID": "<item-id>"
}

These mutations are extremely powerful because they give you full authoring control from any external system. You can build automated pipelines, sync content from third-party sources, integrate back-office systems, or maintain content structures without needing direct access to the CMS. The Authoring API essentially opens up the same level of control Sitecore developers traditionally had through server-side APIs, but now in a clean, modern, and fully remote GraphQL form.
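
The same endpoint also answers read queries, not just mutations. A minimal item lookup, sketched under the assumption that your schema version exposes the item(where: …) query shape, with a placeholder path:

query GetItem($path: String!) {
  item(where: { path: $path, language: "en" }) {
    itemId
    name
  }
}

Input Variables:

{
  "path": "/sitecore/content/Home"
}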

Management API  – Authentication and Usage

The Management API sits alongside the Authoring API but focuses on administrative and system-level operations. These include running indexing jobs, publishing content, listing background jobs, working with workflows, or inspecting environment metadata. The authentication model is the same: you obtain an access token using the same client_credentials flow and include it in the Authorization header when making requests.

The Management API also uses GraphQL, though the endpoint is different. The requests still follow the same structure: POST calls with a JSON body containing the GraphQL query or mutation.

A typical request looks like:

POST https://your-tenant.sitecorecloud.io/api/management/graphql
Authorization: Bearer your_access_token
Content-Type: application/json

A common example is triggering a publish operation. The mutation for that might look like:

mutation PublishItem($root: ID!) {
  publishItem(
    input: {
      rootItemId: $root
      languages: "en"
      targetDatabases: "experienceedge"
      publishItemMode: FULL
      publishRelatedItems: false
      publishSubItems: true
    }
  ) {
    operationId
  }
}

Input Variables:
{
  "root": "<item-to-publish>"
}

The Management API is often used after content changes are made through the Authoring API. For example, after creating or modifying items, your external service may immediately trigger a publish so that the changes become available through Experience Edge.

The authorization workflow is identical to the Authoring API, which keeps integration straightforward: your service requests one token and can use it for both Authoring and Management operations as long as the client you registered has the appropriate scopes.

Experience Edge Delivery API –  Authentication and Query Examples

Experience Edge exposes published content through a globally distributed read-only API. Unlike the Authoring and Management APIs, the Delivery API uses API keys rather than OAuth tokens for content retrieval. However, the API key itself is obtained through an authenticated request that also uses an access token.

To get the Experience Edge API key for a specific environment, you first authenticate using the same client_credentials flow. Once you have your access token, you call the Deploy or Environment API endpoint to generate or retrieve an Edge Access Token or Delivery API key for that specific environment. This token is what your application uses when querying Edge.

Once you have the key, requests to Experience Edge look more like this:

POST https://edge.sitecorecloud.io/api/graphql/v1
X-GQL-Token: your_edge_api_key
Content-Type: application/json

A basic read query might be:

query ItemExists($id: String!, $language: String!) {
  item(path: $id, language: $language) {
    ItemID: id
    ItemName: name
  }
}

Input Variables:
{
  "id": "<item-id>",
  "language": "en"
}
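
For a front-end or integration service, the same call can be sketched in plain JavaScript with fetch. The endpoint and header match the example above; the key value is a placeholder:

// Minimal sketch: querying Experience Edge from a JavaScript client
const EDGE_ENDPOINT = 'https://edge.sitecorecloud.io/api/graphql/v1';
const EDGE_API_KEY = '<your_edge_api_key>'; // placeholder Delivery API key

async function itemExists(id, language = 'en') {
  const response = await fetch(EDGE_ENDPOINT, {
    method: 'POST',
    headers: {
      'X-GQL-Token': EDGE_API_KEY,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      query: `query ItemExists($id: String!, $language: String!) {
        item(path: $id, language: $language) { ItemID: id ItemName: name }
      }`,
      variables: { id, language },
    }),
  });
  const { data } = await response.json();
  return data && data.item != null;
}
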
Experience Edge only returns published content. If you have just created or updated an item through the Authoring API, it will not be available in Edge until a publish operation has been performed, either manually or through the Management API.

The workflow for external applications is usually:

  1. Obtain an access token
  2. Use the token to retrieve or generate the Edge API key
  3. Use the Edge key in all Delivery API requests
  4. Query published content through GraphQL

Because Edge is optimized for front-end delivery, it is highly structured, cached, and tuned for fast reads. It does not support mutations. Anything involving content modification must happen through the Authoring API.

Making Sense of the Entire Flow

With the combination of Experience Edge for delivery and the Authoring and Management APIs for write and operational tasks, Sitecore has opened up a model where external systems can participate directly in the creation, maintenance, and publication of content without ever touching the CM interface. This gives developers and teams a lot more freedom. You can build sync services that keep Sitecore aligned with external data sources, migrate content with far less friction, or automate repetitive authoring work that used to require manual effort. It also becomes straightforward to push structured data – such as products, locations, events, or practitioner information – into Sitecore from CRMs, commerce engines, or any internal system you rely on. Everything is just an authenticated GraphQL call away.

The separation between these APIs also brings clarity. The Authoring API handles the content changes, the Management API supports the operational steps around them, and Experience Edge takes care of delivering that content efficiently to any front end. Each piece has its own responsibility, and they work together without getting in each other’s way. Authors continue working in the CMS. Front-end applications consume only published content. Integration services run independently using APIs built for automation.

The end result is a content platform that fits naturally into modern technical ecosystems. It’s cloud-friendly, headless from the ground up, and flexible enough to integrate with whatever tools or systems an organization already uses. And because everything runs through secure, well-defined APIs, you get consistency, stability, and a workflow that scales as your requirements grow.

This unified approach – external content operations through Authoring and Management APIs, and high-performance delivery through Experience Edge, is what makes the platform genuinely powerful. It lets you build reliable, maintainable, and future-ready content solutions without being tied to the internals of the CMS, and that is a significant shift in how we think about managing content today.

How to Approach Implementing Sitecore Content Hub
https://blogs.perficient.com/2025/11/26/how-to-approach-implementing-sitecore-content-hub/

Content chaos is costing you more than you think

Every disconnected asset, every redundant workflow, every missed opportunity to reuse content adds up, not just in operational inefficiency, but in lost revenue, slower time-to-market, and diminished brand consistency. For many organizations, the content supply chain is broken, and the cracks show up everywhere: marketing campaigns delayed, creative teams overwhelmed, and customers receiving fragmented experiences.

Sitecore Content Hub can help solve this, but here’s the truth: technology alone won’t solve the problem. Success requires a strategic approach that aligns people, processes, and platforms. Over the years, I’ve seen one principle hold true: when you break the process into digestible steps, clarity emerges. Here’s the five-step framework I recommend for leaders who want to turn Content Hub into a competitive advantage. It’s what I wish I had before my first implementation. While Content Hub is extremely powerful for a Digital Asset Management (DAM) platform, and there could be entire books written on each configuration point, my hope in this post is to give someone new to the platform a mindset to have before beginning an implementation.

 

Step 1: Discover and Decode

Transformation starts with visibility. Before you configure anything, take a hard look at your current state. What assets do you have? How do they move through your organization, from creation to approval to archival? Who touches them, and where do bottlenecks occur?

This isn’t just an audit; it’s an opportunity to uncover inefficiencies and align stakeholders. Ask questions like:

  • Are we duplicating content because teams don’t know what already exists?
  • Where are the delays that slow down time-to-market?
  • Which assets drive value and which are digital clutter?

Document these insights in a way that tells a story. When leadership sees the cost of inefficiency and the opportunity for improvement, alignment becomes easier. This step sets the foundation for governance, taxonomy, and integration decisions later. Skip it, and everything else wobbles.

 

Step 2: Design the Blueprint

Once you know where you are, define where you’re going. This is your architectural phase and the moment to design a system that scales.

Start with taxonomy. A well-structured taxonomy makes assets easy to find and reuse, while a poor one creates friction and frustration. Establish naming conventions and metadata standards that support searchability and personalization. Then, build a governance model that enforces consistency without stifling creativity.

Finally, map the flow of content across systems. Where is content coming from? Where does it need to go? These answers determine integration points and connectors. If you skip this step, you risk building silos inside your new system, which is a mistake that undermines the entire investment.

 

Step 3: Deploy the (Content) Hub

See what we did there?! With the blueprint in hand, it’s time to implement. Configure the environment, validate user roles, and migrate assets with care.

Deployment is more than a technical exercise. It’s a change management moment. How you roll out the platform will influence adoption. Consider a phased approach: start with a pilot group, gather feedback, and refine before scaling.

Testing is critical. Validate search functionality, user permissions, and workflows before you go live. A smooth deployment isn’t just about avoiding errors. It’s about building confidence across the organization.

 

Step 4: Drive Intelligent Delivery

Content Hub isn’t just a repository; it’s a strategic engine. This is where you unlock its full potential. Enable AI features to automate tagging and improve personalization. Create renditions and transformations that make omnichannel delivery seamless.

Think beyond efficiency. Intelligent delivery is about elevating the customer experience. When your content is enriched with metadata and optimized for every channel, you’re not just saving time. You’re driving engagement and revenue.

Governance plays a starring role here. Standards aren’t just rules. They’re the guardrails that keep your ecosystem healthy and scalable. Without them, even the smartest technology can devolve into chaos.

 

Step 5: Differentiate

This is where leaders separate themselves from the pack. Implementation is not the finish line—it’s the starting point for continuous improvement.

Differentiation begins with measurement. Build dashboards that show how content performs across channels and campaigns. Which assets drive conversions? Which formats resonate with your audience? These insights allow you to double down on what works and retire what doesn’t.

But don’t stop at performance metrics. Use audits to identify gaps in your content strategy. Are you missing assets for emerging channels? Are you over-investing in content that doesn’t move the needle? This level of visibility turns your content operation into a strategic lever for growth.

Finally, think about innovation. How can you use Content Hub to enable personalization at scale? How can AI-driven insights inform creative decisions? Leaders who embrace this mindset turn Content Hub from a tool into a competitive advantage.

 

Final Thoughts

Your current state may feel daunting, but clarity is within reach. By breaking the process into these five steps, you can transform chaos into a content strategy that drives real business outcomes. Sitecore Content Hub is powerful—but only if you implement it with intention.

Ready to start your journey? Begin with discovery. The rest will follow. If Perficient can help, reach out!

Introducing Microsoft Work IQ: The Intelligence Layer for Agents
https://blogs.perficient.com/2025/11/25/introducing-microsoft-work-iq-the-intelligence-layer-for-agents/

Microsoft Work IQ is a new AI-driven intelligence layer in Microsoft 365 that understands how your organization actually works – far beyond the org chart – and uses that knowledge to make Copilot and AI agents context-aware by default. Announced at Ignite 2025, Work IQ gives Copilot “brains,” turning raw workplace data into actionable understanding. In practical terms, it finds patterns and context in your enterprise data, so AI assistants deliver answers and actions as if they truly know your business. This is a game-changer for IT leaders looking to harness AI: it means your AI won’t just retrieve information, it will understand it in context.

What is Work IQ?

At its core, Work IQ is the intelligence layer that enables Microsoft 365 Copilot and agents to know you, your job, and your company inside and out. It continuously analyzes the rich signals in your digital workspace – emails, files, chats, meetings – and learns from how work gets done in your organization. Microsoft describes Work IQ in three parts:

  • Data: It connects to your work data in Microsoft 365 (SharePoint documents, Outlook emails, Teams meetings and chats, etc.), not as isolated files, but as a connected web of knowledge. Work IQ semantically indexes this content (understanding topics, intents, and projects) and captures business signals like relationships and timelines from it. In short, it codifies “how work gets done” from the daily flow of information.
  • Memory: Work IQ builds a persistent memory of preferences and patterns – your personal work habits, styles, and the network of colleagues you interact with most. This is sometimes called your “work chart,” as opposed to the formal org chart. For example, it learns your writing tone, recurrent tasks, and who your go-to collaborators are, regardless of who reports to whom. This lets it carry context across sessions and tailor responses to your way of working.
  • Inference: Finally, Work IQ uses inference to connect the dots between data and memory, turning raw information into insights and proactive assistance. It identifies patterns and relationships that might not be obvious – for instance, linking a chat mention of “Project Phoenix” to the related OneDrive folder and team members, or suggesting the next best action based on past similar projects. Work IQ essentially predicts needs and draws insights, going well beyond what any single API or connector can do in isolation.

Put simply, Work IQ maps the real flow of work in your company. It doesn’t just know the theoretical structure in an HR system – it knows who actually collaborates, what documents really matter for each project, how information moves across teams, and what context is relevant to the task at hand. It builds a living model of your organization’s workflows.

These are the kinds of insights Work IQ continuously curates to paint a holistic picture of your operational reality. That intelligence is built into Microsoft 365 Copilot today – it’s the same brain that makes Copilot’s answers feel enterprise-aware. Now, importantly, your own custom agents can tap into Work IQ as well. This means when you build an AI bot or automation for your organization, it can leverage that shared “work brain” to behave more like a smart teammate instead of a naive script.

Work IQ vs. Microsoft Graph: Data vs. Understanding

A common question is: How is Work IQ different from Microsoft Graph? After all, Microsoft Graph has long provided API access to mail, files, Teams, users, and more. The difference lies in raw data versus interpreted intelligence:

  • Microsoft Graph is essentially a rich data access layer – a unified API to query information from Microsoft 365 (emails, calendar events, documents, chat messages, directory info, etc.). You ask for data, and Graph returns exactly what you requested, but it’s up to you to make sense of it. Graph gives you the raw information (for example, a list of files or the text of an email), and as a developer you must build the logic around it.
  • Work IQ is an intelligence layer built on top of that data. It leverages the data that Graph exposes, but adds a deep understanding of relationships, relevance, and context in that data. Instead of you writing code to figure out “who is working on what” or “which documents are important to this project,” Work IQ deduces that automatically by analyzing patterns. Work IQ gives you understanding – the meaning behind the data, not just the data itself.

In summary, Microsoft Graph is indispensable for accessing raw data, but Work IQ is what makes that data immediately useful for AI. Graph pulls facts; Work IQ finds patterns and insights in those facts. This distinction is key: Work IQ is what elevates an AI assistant from a basic tool into a knowledgeable collaborator.

Why Work IQ Matters

Work IQ represents a strategic shift in how we build and deploy AI in the enterprise. Here are the key reasons it’s a big deal:

  • AI with your organization’s DNA: Because Work IQ continuously learns from your company’s data and interactions, it makes AI responses highly specific to your context. Copilot answers won’t be one-size-fits-all; they’ll reference your internal projects, priorities, and terminology appropriately. For example, ask Copilot for “update on Project Phoenix” and instead of a generic answer, it will leverage Work IQ to know who’s driving that project, recent updates from Teams, and relevant files to summarize – delivering more relevant, actionable insights so users spend less time sifting through information.
  • Agents that act like teammates, not just tools: When your custom agents have Work IQ behind them, they gain a kind of common sense about the organization. They can anticipate needs and follow context in a human-like way. The goal is to have agents stop behaving like tools and start acting like teammates. For instance, an internal IT helpdesk bot with Work IQ could detect that a flurry of Teams messages and an email thread are all about the same incident and proactively alert the relevant engineer – a level of situational awareness that feels proactive, like a colleague rather than a scripted Q&A bot.
  • Faster, easier development of AI solutions: From an IT leader or developer perspective, Work IQ removes a huge amount of grunt work. You no longer need to manually wire together data from multiple sources and painstakingly program the context for your bots. Microsoft has effectively packaged the context layer for you. This leads to faster development, less complicated prompts, less stitching of disparate APIs, and more out-of-the-box intelligence for any agent you build. In practice, that can cut down development cycles and let your team focus on higher-level logic instead of data plumbing. For example, a developer using Copilot Studio can drag in the Work IQ connection and immediately have their agent “know” the user’s recent meetings or team documents, without writing custom code to fetch and summarize those.
  • Built-in security and compliance: Work IQ is enterprise-ready by design. It respects all the existing permissions, sensitivity labels, and compliance rules on your data. Only information the user (or agent) is allowed to access will be surfaced, and it’s subject to audit and monitoring like the rest of Microsoft 365. For IT, this means you can trust Work IQ to handle corporate data responsibly. It’s not a rogue AI scraping everything – it’s operating within the governance framework you already manage. This distinction is key when enabling AI broadly in a company: Work IQ gives you intelligence and maintains the controls (something that pure large language models on external data don’t guarantee).

Real-World Applications and Examples

To make this more concrete, let’s look at how Work IQ can be applied in real scenarios that IT leaders care about:

  • Project Specific Copilot: Imagine your PMO builds a Project Copilot agent in Copilot Studio. The goal is to onboard new project team members quickly. With Work IQ, this agent can instantly gather all relevant knowledge for a project. It might say, “Hello, I’ve compiled the key documents for Project Phoenix and identified that Alice and Bob are the top collaborators on this initiative. Would you like a summary of recent progress updates from Teams?” This is possible because Work IQ already knows which documents are central to Project Phoenix and who has been driving the conversations. The new team member doesn’t have to hunt for information – the agent, powered by Work IQ, serves it up in context. This accelerates ramp-up and ensures consistency in what information people see.
  • Intelligent Helpdesk Bot: In your IT department, you could enhance a helpdesk chatbot (perhaps built with Copilot Studio) using Work IQ’s API. For example, an employee asks the bot a question about a system outage. A Work IQ-enabled bot could recognize, “This issue was discussed in an email thread yesterday and a Teams chat involves the network team”. It can then pull the pertinent info or even loop in the right expert automatically. Essentially, the bot understands the who and where of past incident knowledge. During Ignite, Microsoft showcased a Sales Development Agent that does something similar for sales – it pulls in context from CRM and internal comms to qualify a lead and suggest next steps. Your helpdesk bot can analogously use context to route and resolve IT tickets faster, by knowing what’s happened already across channels.
  • Enterprise App with Contextual AI: Microsoft is also weaving Work IQ into its own tools for creators. In fact, the new Copilot App Builder in Power Platform (announced at Ignite) uses Work IQ to inject organizational context into the apps people build. For example, if a business user creates a budget approval app with App Builder, Work IQ could enable the app’s AI assistant to automatically show related budget files or identify the manager who usually approves similar requests, without extra configuration. This means citizen developers can create smarter apps that “know” the workplace. As an IT lead, you can encourage adoption of such tools, confident that the intelligence layer (Work IQ) will make these solutions far more useful and integrated into daily work.

Each of these scenarios highlights a pattern: Work IQ provides situational awareness that was previously missing in our software. It brings the same kind of contextual understanding that a long-tenured employee might have (“Oh, I know exactly who to ask about this issue, and I recall a similar project from last year…”) directly into our apps and agents. That dramatically improves both the user experience and the effectiveness of AI automation.

Conclusion

Microsoft Work IQ is a cornerstone of the “frontier firm” vision – a company where AI is woven into every workflow with a rich understanding of the business. For IT leaders, Work IQ offers a path to operationalize AI at scale: you get the power of Microsoft’s Graph data plus an intelligence model trained on your organization’s nuances. The end result is AI that feels native to your enterprise. Copilot and custom agents become smarter, more helpful colleagues rather than blunt instruments. Work IQ allows AI to find insights in context, rather than just pulling disjointed data fragments.

By leveraging Work IQ, you enable your AI systems to “know” your business in ways that were previously only in employees’ heads. That translates to faster decisions, less reinventing the wheel, and a significant leap in productivity. In short, Work IQ turns enterprise AI from a cool gadget into a deeply integrated, competitive capability. It is the intelligence that will help your organization’s digital workforce act with the insight and awareness of a seasoned team member – which is exactly what we need for AI to truly drive the next wave of workplace transformation.

]]>
https://blogs.perficient.com/2025/11/25/introducing-microsoft-work-iq-the-intelligence-layer-for-agents/feed/ 0 388641
The Agentic Enterprise: Key Agent Announcements from Microsoft Ignite 2025 https://blogs.perficient.com/2025/11/25/the-agentic-enterprise-key-agent-announcements-from-microsoft-ignite-2025/ https://blogs.perficient.com/2025/11/25/the-agentic-enterprise-key-agent-announcements-from-microsoft-ignite-2025/#respond Tue, 25 Nov 2025 21:44:28 +0000 https://blogs.perficient.com/?p=388623

Microsoft Ignite 2025 marked a pivotal shift in enterprise AI strategy, introducing a new generation of autonomous agents and the governance tools needed to manage them responsibly. From sales and HR to IT and productivity, Microsoft’s announcements signal a future where AI agents are not just assistants—but active participants in business operations.

Key Ignite Announcements

New AI Agents: Expanding the Autonomous Workforce

Several new AI agents debuted at Ignite, each designed to automate and assist in specific business processes:

  • Sales Development Agent – a fully autonomous sales AI that researches prospects, qualifies leads, and engages in personalized outreach to grow the sales pipeline. It works around the clock to nurture leads (via emails or meeting scheduling) and can hand off hot prospects to human sellers when needed. Sales teams can scale outreach and ensure no lead is overlooked, driving revenue growth without proportional headcount increases. (Preview via the Frontier early access program in Dec 2025).
  • Agents in Microsoft Teams Channels – collaboration agents that live in Teams channels and can interact with third-party apps and other agents through the new Model Context Protocol (MCP). For example, a project team’s channel agent can automatically pull issue trackers from Jira and then schedule follow-up meetings based on the risks identified. Teams users get a proactive AI teammate that bridges data across tools and coordinates team tasks, improving productivity and cross-app workflows. (Now in Preview).
  • Workforce Insights, People, and Learning Agents – a trio of HR and employee experience agents powered by Microsoft’s Work IQ intelligence layer. The Workforce Insights Agent provides leaders with real-time analytics on team composition, skills, and attrition to inform data-driven HR decisions. The People Agent helps employees find colleagues by expertise or role and suggests the best ways to connect (e.g. highlighting shared projects). The Learning Agent delivers personalized micro-learning and upskilling content to each employee, tailored to their role and goals. These agents enhance workforce management and development – leadership can respond faster to organizational trends, and employees benefit from stronger internal networks and continuous skill growth. (Available in Preview via the Frontier program.)
  • IT Admin Agents (Teams and SharePoint) – new agents to assist IT administrators in managing Microsoft 365 environments. The Teams Admin Agent (preview) resides in the Teams Admin Center and can automate routine admin tasks like monitoring meeting quality or provisioning users, executing these workflows autonomously and securely. Meanwhile, the SharePoint Admin Agent (preview) helps govern SharePoint by monitoring for inactive or ownerless sites, overshared files, or permission sprawl, then applying policies or automatic fixes such as archiving sites or adjusting access rights.  These admin agents reduce IT workload and enforce best practices consistently – ensuring collaboration platforms stay well-configured, secure, and compliant without requiring constant manual oversight.

Microsoft also announced Office Copilot Agents for Word, Excel, and PowerPoint within Microsoft 365 Copilot chat, which can generate and format content in those apps based on user prompts. These content-creation agents, while not fully autonomous, help users produce high-quality documents, spreadsheets, and presentations more efficiently. They are available in early access for Copilot customers.

Governance Tools: Managing AI Agents with Confidence

Recognizing that deploying dozens or even hundreds of AI agents raises new oversight challenges, Microsoft introduced governance tools to help customers adopt agents safely and transparently:

  • Microsoft Agent 365“the control plane for agents” that extends Microsoft’s existing management infrastructure to cover AI agents. Agent 365 provides a unified dashboard for IT to register, monitor, and secure all agents in the organization. Its core features include an Agent Registry (an inventory of every agent, including those built in-house or by third parties), Access Control to limit what data/resources an agent can access (applying conditional access and least privilege principles), Visualization tools to map relationships between agents, people, and data and to watch agent behavior in real time, and built-in Security integration (with Microsoft Defender and Purview) to detect threats or data leaks involving agents. In short, Agent 365 lets organizations govern AI agents as rigorously as they govern human users, using familiar tools like Microsoft Entra ID and Purview that are now extended to agents. Agent 365 is available in early access (via the Frontier program in the Microsoft 365 admin center) for customers to start piloting now.
  • Microsoft Entra Agent ID – a new capability in the Entra identity suite that provides unique, first-class identities for AI agents. Just as every employee has a digital identity and login, now each agent can be issued an Entra Agent ID to authenticate itself and be assigned role-based access permissions. This brings Zero Trust security to AI agents: every agent’s access can be tightly governed (e.g. a finance-focused agent gets access only to finance data) and monitored via Entra’s conditional access and risk detection. If an agent behaves anomalously or is compromised, its credentials can be revoked immediately, just like for a human account.  Entra Agent ID ensures no “rogue” or unmanaged agents are operating; companies get full control over what each agent is allowed to do, reducing the risk of data leaks or unauthorized actions by AI. (Introduced at Ignite 2025; in preview as part of the Agent 365 ecosystem.)
  • Microsoft Purview Extensions for AI – enhancements in Microsoft Purview (the data governance and compliance suite) to cover AI-generated content and agent activities. Data Loss Prevention (DLP) policies in Purview now apply to interactions with Copilots and agents, preventing sensitive information from being disclosed by an AI. For example, if an internal user asks an agent a question that would output confidential data, Purview can block or mask that response. Additionally, Purview’s Data Security Posture Management (DSPM) can now discover and assess all AI agents running in the environment (including third-party agents) and flag any that pose compliance risks. Audit logging and eDiscovery are extended to agent actions, so every decision an agent makes can be traced for compliance and analysis. Organizations can embrace AI automation while maintaining their compliance obligations and security safeguards. The same oversight used for user actions (DLP, audit logs, risk management) will automatically cover AI agent actions, which is critical for industries with strict regulatory requirements. (Purview’s AI governance features began rolling out at Ignite in preview form.)
  • Foundry Control Plane – for companies developing their own AI solutions, Azure’s Foundry platform introduced a control plane paralleling Agent 365’s capabilities. It allows development and ops teams to set policies, monitor performance, and manage costs for custom-built agents across their lifecycle. By using the Foundry control plane, even AI agents created with open-source tools or non-Microsoft frameworks can be brought under a unified governance umbrella.  This ensures that custom AI projects don’t become a governance blind spot – they too can be centrally managed for security and compliance from day one, making enterprise AI portfolios more coherent and controlled.

Impact

The Ignite 2025 announcements underscore a dual message: significant productivity gains are now within reach through AI agents, and Microsoft is delivering the controls to deploy these agents responsibly. The potential benefits include:

  • Boosted Productivity and Automation: The new agents can handle labor-intensive tasks – from scouring CRM systems and sending outreach emails (Sales Agent) to auto-monitoring IT systems (Admin Agents) – which frees up employees to focus on higher-value strategic work. Early adopters can expect faster cycle times (e.g. quicker lead follow-ups, faster issue resolution) and extended service availability (agents working 24/7).
  • Improved Employee and Customer Experiences: AI agents embedded in everyday workflows mean employees have on-demand assistance. Projects move faster when a Teams channel agent can gather data or schedule meetings automatically. Employees get personalized support in learning and finding information via the People and Learning agents. Customers, in turn, benefit from more responsive service (since AI can help address their needs instantly or outside of business hours). Overall, these agents promise more proactive, responsive operations in many areas of the business.
  • Enterprise-Grade Trust and Control: Perhaps most crucially, Microsoft’s focus on governance provides IT leaders and compliance officers the confidence to scale AI usage safely. Features like Agent 365 and Entra Agent ID mean that introducing an army of AI agents won’t result in loss of visibility or unchecked access to sensitive data. Every agent is accounted for, governed, and subject to security and compliance rules. This lowers the barrier to adoption because organizations can enforce their existing security policies on AI agents just as they do for employees, preventing the kind of “shadow AI” chaos that uncontrolled agents might cause.

Microsoft Ignite 2025 marked a clear shift from AI as a mere assistant to AI as a full-fledged workforce layer. Microsoft unveiled a unified agent ecosystem across Microsoft 365, Windows, and Azure, centered on Agent 365 – a control plane for registering, securing, and managing agents with Entra-issued IDs. New capabilities include Work IQ for personalized agent recommendations, dedicated Office and industry-specific agents, and native agent infrastructure in Windows for secure integration. The message was clear: the future of work is agent-powered, but trust, compliance, and control must be built in from the start.


Table: Key Announcements on AI Agents and Governance (Ignite 2025)

Feature / Tool | Description | Impact | Availability
Microsoft Agent 365 | Central command center for AI agents – provides a registry of all agents, access controls, real-time monitoring dashboards, and integrates security/compliance tools (Defender, Entra, Purview) for agents. | Enables IT to manage and secure AI agents at scale just like user accounts. Increases trust by preventing unmanaged “shadow” agents and enforcing consistent policies (identity, data protection) across all AI-driven processes. | Early Access Preview (available now via the Frontier program in the M365 admin center)
Microsoft Entra Agent ID | New identity management for AI agents – assigns each agent a unique Entra ID identity and credentials, with full support for Conditional Access and audit logging of agent sign-ins. Extends Zero Trust security to autonomous agents. | Tight access control for agents: every agent operates under a known identity and role, so companies can apply least-privilege access and instantly revoke or adjust an agent’s permissions if needed. Builds trust that agents will only reach the data they’re authorized to use. | Preview (introduced at Ignite; part of Entra updates rolling out in late 2025)
Sales Development Agent | AI sales representative that autonomously researches prospects, crafts outreach emails, follows up with leads, and hands off interested customers to human sellers. Integrates with CRM systems (Dynamics 365, Salesforce) and works within Outlook/Teams to drive pipeline. | Scales up sales capacity by ensuring every lead is engaged promptly and persistently. Sales teams can convert more leads without adding staff, as routine prospecting and follow-ups are handled by the agent (with consistency and no downtime). | Frontier Preview (available to test for participants in Dec 2025)
Teams Channel AI Agents | Intelligent agents embedded in Microsoft Teams channels that can collaborate with users and connect to third-party apps via MCP (Model Context Protocol). They can aggregate data from other services (e.g. project trackers, DevOps tools) and initiate actions like scheduling meetings or updating tasks. | Enhances team collaboration by acting as a smart coordinator: the agent surfaces information from across the toolchain into Teams and automates cross-app steps. Teams become more productive as the agent reduces the need to manually check different apps or remember follow-ups. | Preview (new capability in Microsoft Teams, announced at Ignite 2025)
Workforce Insights & HR Agents | A set of Work IQ-powered agents for HR: Workforce Insights Agent (real-time org analytics for leaders), People Agent (find colleagues by skill/role and suggest connections), Learning Agent (personalized training and upskilling content). | Data-driven people management and development. Leaders gain immediate insight into workforce composition and trends for better planning. Employees can more easily network internally and get targeted learning resources, leading to a more connected and skilled workforce. | Preview (available via the Frontier program as of Ignite 2025)
Teams & SharePoint Admin Agents | IT administration agents for Microsoft 365: one in the Teams Admin Center to automate tasks like user provisioning and system monitoring; another in the SharePoint Admin Center to audit and fix site issues (inactive sites, oversharing, permission drift) via AI. | Always-on IT assistance that improves governance. Routine admin tasks are handled consistently and faster, reducing IT effort and human error. These agents also proactively enforce policies (e.g. cleaning up unused sites or tightening permissions), which strengthens security/compliance across collaboration platforms. | Preview (both announced in preview at Ignite 2025)
Microsoft Purview AI Governance | Purview compliance features for AI – extended DLP policies to monitor and block sensitive data in AI prompts or outputs; Purview’s DSPM now inventories all AI agents and assesses their risk posture; audit trails cover AI agent activities for eDiscovery and oversight. | Maintains compliance and security in an AI-driven environment. Companies can trust that adopting AI agents won’t lead to data leaks or compliance violations, because existing data protection rules automatically apply. Every action by an agent is logged and auditable, which is crucial for industries with strict regulations. | Preview / Rolling Out (announced at Ignite; incremental rollout through late 2025 into 2026 for various Purview enhancements)

 

]]>
https://blogs.perficient.com/2025/11/25/the-agentic-enterprise-key-agent-announcements-from-microsoft-ignite-2025/feed/ 0 388623
Monitoring and Logging in Sitecore AI https://blogs.perficient.com/2025/11/24/monitoring-and-logging-in-sitecore-ai/ https://blogs.perficient.com/2025/11/24/monitoring-and-logging-in-sitecore-ai/#respond Mon, 24 Nov 2025 21:04:34 +0000 https://blogs.perficient.com/?p=388586

Why Observability Matters More Than Ever

Moving from traditional Sitecore deployments to Sitecore AI means the infrastructure is abstracted away. That’s fantastic for agility, but it also changes how we troubleshoot. You can’t RDP onto a server and tail a log file anymore; your lifeline is observability: clear signals from logs, metrics, and governed automation that tell you what’s happening across the platform and the front‑end.

What’s Different in Sitecore AI?

Logs and diagnostics are centralized. You access them via the Sitecore AI portal and the Sitecore CLI, and they’re organized by environment and by role. Your front‑end application or rendering host – often a Next.js site deployed on Vercel, responsible for headless rendering and user experience – has its own telemetry, separate from the CMS.

So, your monitoring picture spans three surfaces: Sitecore AI logs for CMS and deployment activity, rendering host telemetry for front‑end performance, and Experience Edge signals for content delivery. Together, they describe the health of the experience, not just the servers.

 

Understanding the Logging Surfaces

In Sitecore AI, logs are grouped into three primary areas that each play a distinct role in diagnosing issues:

Content Management (CM) logs

  • These are your first stop for diagnosing publishing failures, broken workflows, template errors, and serialization mismatches. When a publish fails, CM logs help you separate permissions or workflow problems from data or serialization issues.

Rendering Host logs

  • Think front‑end behavior and performance. If personalization falls back, pages render slowly, or API responses seem sluggish, the rendering host logs surface cache misses, API latency, and rendering errors that directly impact Core Web Vitals and UX.

Deployment logs

  • The “narrative” of your CI/CD run. When a build fails or a promotion doesn’t complete, deployment logs pinpoint CLI command failures, artifact mismatches, or environment configuration issues. They also provide stage-by-stage visibility (provisioning, build, deploy, post‑actions), which speeds triage and supports audits.

Access these logs quickly in the Deploy app’s environment view or programmatically via the Sitecore CLI for listing, viewing, and downloading logs as part of your pipeline artifacts.

Integration Patterns for Enterprise Monitoring

Centralizing is helpful; correlating is essential. The pragmatic pattern I recommend is:

Sitecore AI → Azure Monitor/Application Insights

  • Forward CMS and deployment logs so you can correlate spikes in errors with deployments, content bursts, or traffic changes. KQL lets you slice by environment, role, and severity for root cause analysis.
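
For instance, once logs are forwarded to a Log Analytics workspace, a small scheduled script can run such a correlation query. This is a minimal sketch using the @azure/monitor-query package; the custom table and column names (SitecoreLogs_CL, Severity_s, Environment_s) are assumptions about your ingestion schema, not Sitecore-defined names:

import { LogsQueryClient } from '@azure/monitor-query';
import { DefaultAzureCredential } from '@azure/identity';

const client = new LogsQueryClient(new DefaultAzureCredential());

// KQL that buckets errors into 15-minute windows per environment, so spikes can be
// lined up against deployment timestamps. Table/column names are assumptions.
const kql = `
SitecoreLogs_CL
| where Severity_s == "Error"
| summarize errors = count() by bin(TimeGenerated, 15m), Environment_s
| order by TimeGenerated desc`;

async function main() {
  const result = await client.queryWorkspace(
    process.env.LOG_ANALYTICS_WORKSPACE_ID!, // hypothetical env var
    kql,
    { duration: 'P1D' } // look back one day
  );
  console.log(JSON.stringify(result, null, 2));
}

main().catch(console.error);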

Rendering Host → APM (Datadog/New Relic)

  • Use front‑end analytics to track TTFB, cache hit ratio, route errors, and API dependency health. Pair this with Vercel’s own analytics for global edge performance.

Experience Edge → Webhook Monitoring

  • Register webhooks so you can track publish‑to‑Edge latency and trigger alerts or redeploys when content propagation slows or fails.
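
As an illustration, here is a minimal Next.js route handler that receives such a webhook and forwards a latency metric to a monitoring endpoint. The payload fields and the MONITORING_INGEST_URL variable are assumptions – check the Experience Edge webhook documentation for the actual schema:

// app/api/edge-webhook/route.ts (path is illustrative)
export async function POST(request: Request) {
  const payload = await request.json();

  // Assumed field: a timestamp for when the item was published. Verify the real schema.
  const publishedAt = payload?.updated ? new Date(payload.updated) : null;
  const latencyMs = publishedAt ? Date.now() - publishedAt.getTime() : null;

  // Forward a normalized event to your monitoring plane (endpoint is hypothetical).
  await fetch(process.env.MONITORING_INGEST_URL!, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      source: 'experience-edge',
      latencyMs,
      receivedAt: new Date().toISOString(),
    }),
  });

  if (latencyMs !== null && latencyMs > 60_000) {
    console.warn(`Publish-to-Edge latency exceeded 60s: ${latencyMs}ms`);
  }
  return Response.json({ ok: true });
}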

SIEM Integration (today’s reality)

  • For unified audit across Sitecore SaaS, stream supported Common Audit Logs (CAL) via webhooks (Personalize/CDP/Connect) and, for Sitecore AI, pull environment and deployment logs via CLI on a schedule until broader CAL coverage lands.
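
Until that broader coverage arrives, the scheduled pull can be as simple as a script that shells out to the CLI and ships the files onward. The subcommands below are modeled on the XM Cloud CLI and are assumptions – verify them against the Sitecore AI CLI version you have installed:

import { execFileSync } from 'child_process';

const environmentId = process.env.SITECORE_ENV_ID!; // hypothetical variable name

// List the available logs for the environment (subcommand names are assumptions).
const listing = execFileSync(
  'dotnet',
  ['sitecore', 'cloud', 'environment', 'log', 'list', '--environment-id', environmentId],
  { encoding: 'utf-8' }
);
console.log(listing);

// Download them to a working folder, then hand the files to your SIEM's
// ingestion endpoint (the forwarding step is omitted here).
execFileSync('dotnet', [
  'sitecore', 'cloud', 'environment', 'log', 'download',
  '--environment-id', environmentId,
]);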

Metrics That Matter

In a SaaS world, traditional “server up” checks don’t describe user experience. Focus on metrics that map directly to reliability and business impact:

Deployment success & promotion health

  • Failed builds or promotions block content and features. Tracking rates and mean time to recovery reveals pipeline reliability.

Publish‑to‑Edge latency

  • Authors expect content to reach Experience Edge quickly. Latency here affects real‑time campaigns, previews, and editorial confidence.

Rendering host performance

  • P95/P99 TTFB, cache hit ratio, and error rates impact Core Web Vitals, SEO, and conversion. They also help you spot regressions after releases.

Agent activity & governance

  • With Sitecore AI’s agentic capabilities, monitoring agent runs, approvals, and failures protects compliance and prevents unintended bulk changes.

Governance Signals in Sitecore AI

Sitecore AI introduces Agentic Studio: a governed workspace to design, run, and oversee automation. Work is organized around four building blocks, Agents, Flows, Spaces, and Signals. Practically, that means you can automate complex operations while maintaining human review and auditability.

  • Agents: Handle focused tasks (e.g., content migration, metadata updates).
  • Flows: Orchestrate agents into multi‑step workflows with visibility across stages.
  • Spaces: Provide shared context for teams to collaborate on active runs.

Signals surface trends and triggers that can start or adjust flows. Together, these give marketers and developers a safe frame to scale automation without losing control.

How Agent Flows Are Monitored

Monitoring agent flows blends product‑level visibility with enterprise analytics:

Run visibility in Agentic Studio:

  • Each flow run exposes status, participants (human and agent), timestamps, and outcomes. Because flows are orchestrated in a governed workspace, you get “full visibility” into progression from brief to publish/optimization, including approvals where human review is required.

Governance signals and audit trails:

  • Signals can trigger flows and also act as governance inputs (for example, trend alerts requiring approval). Capture audit trails of who initiated a run, which agents executed steps, and what content or configurations changed.

Alerting and dashboards:

  • Mirror key flow events into your monitoring plane: start, paused awaiting approval, failed step, completed. Route these into Azure Monitor or your SIEM so operations sees agentic activity alongside deployments and content events.

Integration approach:

  • Where Common Audit Logs (CAL) are available (Personalize/CDP/Connect), stream events via webhooks. For Sitecore AI and Agentic activity not yet covered by CAL, use scheduled CLI log exports and APIs the platform exposes to assemble a unified view. Normalize event schemas (runId, agentId, flowId, environment, severity) to enable cross‑product correlation.
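
A concrete way to do that normalization is a single event envelope that every source maps into. This is a sketch – the field names are suggestions, and the raw CAL payload shape shown in the mapper is assumed, not documented:

import { randomUUID } from 'crypto';

// One envelope for events from Agentic Studio, CAL webhooks, and CLI exports.
interface AgentEvent {
  runId: string;
  agentId: string;
  flowId: string;
  environment: string;          // e.g. "dev" | "staging" | "prod"
  severity: 'info' | 'warning' | 'error';
  source: 'agentic-studio' | 'cal-webhook' | 'cli-export';
  occurredAt: string;           // ISO 8601
  payload: Record<string, unknown>;
}

// Map a raw CAL webhook payload (shape assumed) into the envelope so the SIEM
// can correlate it with events from the other sources.
function fromCalWebhook(raw: any): AgentEvent {
  return {
    runId: raw.correlationId ?? randomUUID(),
    agentId: raw.actor?.id ?? 'unknown',
    flowId: raw.context?.flowId ?? 'n/a',
    environment: raw.environment ?? 'prod',
    severity: raw.level ?? 'info',
    source: 'cal-webhook',
    occurredAt: raw.timestamp ?? new Date().toISOString(),
    payload: raw,
  };
}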

The outcome: agent automation becomes observable. Teams can answer “what changed, when, by whom, and why” and tie those answers to performance and compliance dashboards.

Final Thoughts

Observability in Sitecore AI isn’t about servers; it’s about experience health and trusted automation. When you combine SaaS‑native logs, front‑end telemetry, Edge events, and agentic governance signals, you gain a single narrative across deployments, content, and automation, the narrative you need to keep teams fast, safe, and accountable.

]]>
https://blogs.perficient.com/2025/11/24/monitoring-and-logging-in-sitecore-ai/feed/ 0 388586
Migrating Redirects in Sitecore to Vercel Edge Config: A Practical Path https://blogs.perficient.com/2025/11/23/migrating-redirects-in-sitecore-to-vercel-edge-config-a-practical-path/ https://blogs.perficient.com/2025/11/23/migrating-redirects-in-sitecore-to-vercel-edge-config-a-practical-path/#respond Mon, 24 Nov 2025 05:51:56 +0000 https://blogs.perficient.com/?p=388284

In my previous post, Simplifying Redirect Management in Sitecore XM Cloud with Next.js and Vercel Edge Config,  I explored how Vercel Edge Config can completely transform how we manage redirects in Sitecore XM Cloud. Traditionally, redirects have lived inside Sitecore – often stored as content items or within custom redirect modules – which works well until scale, speed, and operational agility become priorities.

That’s where Vercel Edge Config steps in. By managing redirects at the edge, we can push this logic closer to users, reduce load on Sitecore instances, and make updates instantly available without redeployments. The result is faster performance, easier maintenance, and a cleaner separation of content from infrastructure logic.

In this short follow-up, I will walk you through a step-by-step migration path  from auditing your current redirects to validating, deploying, and maintaining them on Vercel Edge Config. Along the way, I will share practical tips, lessons learned, and common pitfalls to watch out for during the migration process.

Audit Existing Redirects

Before you begin the migration, take time to analyze and clean up your existing redirect setup. In many legacy websites that have been live for years, redirects often accumulate from multiple releases, content restructures, or rebranding efforts. Over time, they become scattered across modules or spreadsheets, and many of them may no longer be relevant.
This is your chance to comb through and make your redirect set current – remove obsolete mappings, consolidate duplicates, and simplify the structure before moving them to Vercel Edge Config. A clean starting point will make your new setup easier to maintain and more reliable in the long run.
Here is a good checklist to follow during the audit:
  • Export all existing redirects from Sitecore or any external sources where they might be managed.
  • Identify and remove obsolete redirects, especially those pointing to pages that no longer exist or have already been redirected elsewhere.
  • Combine duplicate or overlapping entries to ensure a single source of truth for each URL.
  • Validate destination URLs – make sure they’re live and resolve correctly.
  • Categorize by purpose – for example, marketing redirects, content migration redirects, or structural redirects.
  • If you want to store them separately, you can even use different Edge Config stores for each category. This approach can make management easier and reduce the risk of accidental overrides – I have demonstrated this setup in my previous blog.
  • Keep it simple – since we’re dealing with static one-to-one redirects, focus on maintaining clean mappings that are easy to review and maintain.

Define a Flexible JSON Schema

Once you have audited and cleaned up your redirects, the next step is to decide how they will be structured and stored in Edge Config. Unlike Sitecore, where redirects might be stored as content items or within a module, Edge Config uses a key-value data model, which makes JSON the most natural format for managing redirects efficiently.

The goal here is to define a clear and reusable JSON schema that represents your redirects consistently – simple enough to maintain manually, yet flexible enough to scale across multiple environments or stores.

Here’s the schema I used in my implementation:

{
  "/old-page": { 
    "destination": "/new-page", 
    "permanent": true 
  },
  "/legacy-section": { 
    "destination": "/resources", 
    "permanent": false 
  }
}
In this structure:
  • Each key (for example, “/old-page”) is the source path that should be redirected.
  • Each value contains two properties:
    • destination – the target path where the request should redirect.
    • permanent – a boolean flag (true or false) that determines whether the redirect should use a 308 (permanent) or 307 (temporary) status code; the middleware sketch just below shows how this flag maps to the actual status code.
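
To show how this schema is consumed, here is a minimal Next.js middleware sketch using the @vercel/edge-config client. It assumes the whole map is stored under a single Edge Config key named "redirects" (the key name is a choice, not a requirement):

// middleware.ts
import { NextRequest, NextResponse } from 'next/server';
import { get } from '@vercel/edge-config';

type RedirectEntry = { destination: string; permanent: boolean };

export async function middleware(request: NextRequest) {
  const pathname = request.nextUrl.pathname;

  // One read fetches the full map; Edge Config reads are served from the edge.
  const redirects = await get<Record<string, RedirectEntry>>('redirects');
  const entry = redirects?.[pathname];

  if (entry) {
    // permanent: true -> 308, permanent: false -> 307, per the schema above.
    return NextResponse.redirect(
      new URL(entry.destination, request.url),
      entry.permanent ? 308 : 307
    );
  }
  return NextResponse.next();
}

Because the map lives in Edge Config, editing an entry takes effect on the next request with no redeploy.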

Automate the Export

Once you’ve finalized your redirect list and defined your JSON structure, the next step is to automate the conversion process — so you can easily transform your audited data into the format that Vercel Edge Config expects.

In my implementation, I created a C# console application that automates this step. The tool takes a simple CSV file as input and converts it into the JSON format used by Edge Config.

The CSV file includes three columns: source, destination, permanent. The application reads this CSV and generates a JSON file in the format mentioned in the above section. You can find the complete source code and instructions for this utility on my GitHub repository here: ConvertCsvToJson
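
The original utility is written in C#; for illustration, an equivalent conversion is only a few lines in TypeScript. This sketch assumes the input and output file names (redirects.csv, redirects.json) and simple comma-separated values with no embedded commas:

import { readFileSync, writeFileSync } from 'fs';

type RedirectEntry = { destination: string; permanent: boolean };

const csv = readFileSync('redirects.csv', 'utf-8').trim();
const [header, ...rows] = csv.split(/\r?\n/);

if (header.replace(/\s/g, '').toLowerCase() !== 'source,destination,permanent') {
  throw new Error('Unexpected CSV header; expected "source,destination,permanent"');
}

// Build the key-value map in the schema defined earlier.
const redirects: Record<string, RedirectEntry> = {};
for (const row of rows) {
  const [source, destination, permanent] = row.split(',').map((s) => s.trim());
  redirects[source] = { destination, permanent: permanent.toLowerCase() === 'true' };
}

writeFileSync('redirects.json', JSON.stringify(redirects, null, 2));
console.log(`Wrote ${Object.keys(redirects).length} redirects to redirects.json`);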

This approach is both simple and scalable:
  • You can collect and audit redirects collaboratively in a CSV format, which non-developers can easily work with.
  • Once finalized, simply run the console application to convert the CSV into JSON and upload it to Vercel Edge Config.
  • If you have multiple redirect categories or stores, you can generate separate JSON files for each using different input CSVs.
Tip: If you are working with a large set of redirects, this process ensures consistency, eliminates manual JSON editing errors, and provides an auditable version of your data before it’s deployed.
By automating this step, you save significant time and reduce the risk of human error – ensuring your Edge Config store always stays synchronized with your latest validated redirect list.
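
If you prefer to push the generated JSON from a pipeline rather than pasting it into the dashboard, Vercel’s REST API can update Edge Config items programmatically. This is a hedged sketch – verify the endpoint shape and token scopes against Vercel’s API documentation; the environment variable names are assumptions:

import { readFileSync } from 'fs';

const edgeConfigId = process.env.EDGE_CONFIG_ID!;  // e.g. "ecfg_..."
const token = process.env.VERCEL_API_TOKEN!;

const redirects = JSON.parse(readFileSync('redirects.json', 'utf-8'));

async function upload() {
  const res = await fetch(`https://api.vercel.com/v1/edge-config/${edgeConfigId}/items`, {
    method: 'PATCH',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    // Store the whole map under one key so middleware can read it with a single get().
    body: JSON.stringify({
      items: [{ operation: 'upsert', key: 'redirects', value: redirects }],
    }),
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status} ${await res.text()}`);
  console.log('Edge Config updated.');
}

upload().catch((err) => { console.error(err); process.exit(1); });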

Validate & Test

Before you roll out your new redirect setup, it’s important to thoroughly validate and test the data and the middleware behavior. This stage ensures your redirects work exactly as expected once they’re moved to Vercel Edge Config.

A solid validation process will help you catch issues early – like typos in paths, invalid destinations, or accidental redirect loops – while maintaining confidence in your migration.

  • Validate that your JSON is correctly formatted, follows your destination + permanent schema, starts with /, and contains no duplicates (these checks are easy to script – see the sketch after this list).
  • Test redirects locally using the JSON generated from your console app to ensure redirects fire correctly, status codes behave as expected, and unmatched URLs load normally.
  • Check for redirect loops or chains so no route redirects back to itself or creates multiple hops.
  • Upload to a preview/test environment and repeat the tests to confirm the middleware works the same with the actual Edge Config store.
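
Here is the scripted version of those checks referenced above – a minimal sketch that validates the schema and walks the map to flag chains and loops (file names are assumptions):

import { readFileSync } from 'fs';

type RedirectEntry = { destination: string; permanent: boolean };

const redirects: Record<string, RedirectEntry> = JSON.parse(
  readFileSync('redirects.json', 'utf-8')
);

const errors: string[] = [];

// Schema checks: leading slash, valid destination, boolean flag, no self-redirects.
for (const [source, entry] of Object.entries(redirects)) {
  if (!source.startsWith('/')) errors.push(`Source must start with "/": ${source}`);
  if (typeof entry.destination !== 'string' || !entry.destination.startsWith('/'))
    errors.push(`Invalid destination for ${source}`);
  if (typeof entry.permanent !== 'boolean')
    errors.push(`"permanent" must be true or false for ${source}`);
  if (entry.destination === source) errors.push(`Self-redirect: ${source}`);
}

// Chain/loop checks: follow each source through the map.
for (const source of Object.keys(redirects)) {
  const seen = new Set<string>([source]);
  let current = redirects[source].destination;
  let hops = 0;
  while (redirects[current]) {
    if (seen.has(current)) {
      errors.push(`Redirect loop starting at ${source}`);
      break;
    }
    seen.add(current);
    current = redirects[current].destination;
    hops++;
  }
  if (hops > 0) errors.push(`Redirect chain (${hops + 1} hops) starting at ${source}`);
}

if (errors.length > 0) {
  console.error(errors.join('\n'));
  process.exit(1);
}
console.log(`Validated ${Object.keys(redirects).length} redirects.`);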

Gradual Rollout

Once your redirects have been validated locally and in your preview environment, the next step is to roll them out safely and incrementally. The advantage of using Vercel Edge Config is that updates propagate globally within seconds – but that’s exactly why taking a controlled, phased approach is important.

After validating your redirects, roll them out gradually to avoid unexpected issues in production. Begin by deploying your Next.js middleware and Edge Config integration to a preview/test environment. This helps confirm that the application is fetching from the correct store and that updates in Edge Config appear instantly without redeployments.

Once everything looks stable, publish your redirect JSON to the production Edge Config store. Changes propagate globally within seconds, but it’s still good practice to test a few key URLs immediately. If you have logging or analytics set up (such as Analytics or custom logs), monitor request patterns for any unusual spikes, new 404s, or unexpected redirect hits.

If you’re using multiple Edge Config stores, roll them out one at a time to keep things isolated and easier to debug.
And always keep a simple rollback plan – because Edge Config maintains backups (it creates one each time the JSON is updated), you can always roll back to the previous version with no redeploy required.

Monitor & Maintain

Once your redirects are live in Vercel Edge Config, it’s important to keep an eye on how they behave over time. Redirects aren’t a “set and forget” feature especially on sites that evolve frequently.

Use logging, analytics, or Vercel’s built-in monitoring to watch for patterns like unexpected 404s, high redirect activity, or missed routes. These signals can help you identify gaps in your redirect set or highlight URLs that need cleanup.

Review and update your redirect JSON regularly. Legacy redirects may become irrelevant as site structures change, so a quick quarterly cleanup helps keep things lean. And since your JSON is version-controlled, maintaining and rolling back changes stays simple and predictable.

If you use multiple Edge Config stores, make sure the separation stays intentional. Periodically check that each store contains only the redirects meant for it – this avoids duplication and keeps your redirect logic easy to understand.

Consistent monitoring ensures your redirect strategy remains accurate, fast, and aligned with your site’s current structure.

 

Migrating redirects from Sitecore to Vercel Edge Config isn’t just a technical shift – it’s an opportunity to simplify how your site handles routing, clean up years of legacy entries, and move this logic to a place that’s faster, cleaner, and easier to maintain. With a thoughtful audit, a clear JSON structure, and an automated export process, the migration becomes surprisingly smooth.

As you move forward, keep an eye on the small details: avoid accidental loops, stay consistent with your paths, and use the permanent flag intentionally. A few mindful checks during rollout and a bit of monitoring afterward go a long way in keeping your redirect setup predictable and high-performing.

Ultimately, this approach not only modernizes how redirects are handled in an XM Cloud setup – it also gives you a structured, version-controlled system that’s flexible for future changes and scalable as your site evolves. It’s a clean foundation you can build on confidently.

]]>
https://blogs.perficient.com/2025/11/23/migrating-redirects-in-sitecore-to-vercel-edge-config-a-practical-path/feed/ 0 388284
From Questions to Confidence: What Happens at Datablazer Mastery Onsite https://blogs.perficient.com/2025/11/20/from-questions-to-confidence-what-happens-at-datablazer-mastery-onsite/ https://blogs.perficient.com/2025/11/20/from-questions-to-confidence-what-happens-at-datablazer-mastery-onsite/#respond Thu, 20 Nov 2025 19:38:48 +0000 https://blogs.perficient.com/?p=388541

I’ve always believed that the best learning environments are the ones that feel like a conversation. Not a lecture, not a pitch, but a shared space where people come together to explore what’s possible. That’s the spirit behind the Datablazer Mastery Onsite (DMO) series, and it’s exactly what we’re bringing to New York this December. 

The Datablazer Community started with a simple idea: data matters to everyone. Whether you’re just starting out or deep into implementation, the challenges around trust, activation, and scale are shared. And while no one has all the answers, we’ve seen how much faster teams move when they learn from each other. 

 

DMO is where learning happens. It’s a day of enablement focused on Data 360 and Agentforce, designed to help attendees understand what works, what doesn’t, and how to move forward with confidence. You’ll hear from experts who’ve been in the trenches and are ready to share their strategies for delivering value. 

This December, Perficient will be represented by Anu Pandey, Technical Director, AI & Data 360. Anu brings deep expertise in Data Cloud implementation and activation strategies and will be sharing insights during the event. 

The event will cover how Data 360 creates a unified customer view, how Agentforce enables intelligent automation, and how both tools can be used together to drive smarter decisions. You’ll leave with practical insights, real-world examples, and a clearer path forward. 

If you’re asking questions like “Where do I start?” or “How do I know I’m doing this right?”, this event is for you. And if you’re already deep in the work, it’s a chance to refine your approach, connect with others, and share what you’ve learned. 

Join us in New York. Let’s build something smarter together. 

]]>
https://blogs.perficient.com/2025/11/20/from-questions-to-confidence-what-happens-at-datablazer-mastery-onsite/feed/ 0 388541