Technology Partners Articles / Blogs / Perficient https://blogs.perficient.com/category/partners/ Expert Digital Insights Fri, 20 Feb 2026 20:21:19 +0000 en-US hourly 1 https://blogs.perficient.com/files/favicon-194x194-1-150x150.png Technology Partners Articles / Blogs / Perficient https://blogs.perficient.com/category/partners/ 32 32 30508587 Insight into Oracle Cloud IPM Insights https://blogs.perficient.com/2026/02/20/insight-into-oracle-cloud-ipm-insights/ https://blogs.perficient.com/2026/02/20/insight-into-oracle-cloud-ipm-insights/#respond Fri, 20 Feb 2026 23:00:31 +0000 https://blogs.perficient.com/?p=390542

Why Intelligent Insights Matter in Modern Finance

In today’s data‑driven economy, success isn’t just about keeping up – it’s about anticipating change and acting decisively. Oracle IPM Insights, a powerful capability within Oracle EPM Cloud, empowers organizations to uncover critical anomalies, forecast emerging trends, and recommend actions that drive performance. With AI‑driven narratives and real‑time intelligence embedded directly into financial workflows, IPM Insights transforms raw data into strategic guidance – helping businesses improve forecast accuracy, control costs, and stay ahead in a rapidly evolving market.

 

Transforming Data into Actionable Intelligence

Oracle IPM Insights is designed to move finance teams beyond static reporting. It continuously monitors your EPM data, detects anomalies, and forecasts trends – all embedded within your planning and reporting workflows. This means insights aren’t just visible, they’re actionable, enabling proactive decision‑making across the enterprise.

By surfacing emerging risks and opportunities earlier, finance leaders can shift from reactive analysis to strategic guidance. The platform also reduces time spent on manual data investigation, allowing teams to focus on value‑added analysis rather than routine variance checks. Ultimately, IPM Insights helps organizations elevate forecasting accuracy, strengthen operational agility, and drive more confident decision‑making at scale.

 

Key Features of Oracle IPM Insights

  1. Anomaly Detection: Spot Issues Before They Escalate – IPM Insights identifies unusual patterns in your data, such as unexpected variances in budgets or forecasts. By catching anomalies early, finance teams can investigate root causes and correct issues before they affect performance, ensuring alignment with strategic objectives.
  2. Predictive & Prescriptive Analytics: From Forecast to Action – Beyond forecasting, IPM Insights provides guidance on corrective actions based on detected patterns. For example, if forecast accuracy begins to drift, the system can recommend refining key drivers or adjusting planning assumptions—helping teams stay ahead of potential risks.
  3. Forecast Variance & Bias Detection: Strengthening Forecast Reliability – IPM Insights continuously evaluates actuals vs. forecasted results to identify variance trends and detect systemic bias – whether forecasts are consistently optimistic, conservative, or misaligned with drivers. This helps finance teams improve forecast reliability, refine planning models, and increase confidence in future projections.
  4. Generative AI Narratives: Simplifying Complexity – IPM Insights automatically generates narrative explanations for anomalies, trends, and underlying drivers in plain language. These AI‑generated summaries make insights easy to share with stakeholders, improving understanding and reducing time spent preparing reports.

 

Integrating IPM Insights Across EPM

IPM Insights works natively across Oracle Cloud EPM solutions – Planning, Financial Consolidation and Close, Enterprise Profitability and Cost Management, Tax Reporting, and FreeForm Planning. This integration eliminates silos and ensures consistency across processes. By connecting insights across the full financial lifecycle, organizations can trace the impact of assumptions, drivers, and anomalies from planning through consolidation and final reporting. This unified view reduces reconciliation effort, improves data reliability, and accelerates the close‑to‑forecast cycle.

For finance teams, this integration delivers significant value: manual effort drops as data flows automatically across modules, enabling teams to focus on higher‑value analysis rather than time‑consuming data validation. Forecasts become more accurate thanks to a consistent, connected data foundation that minimizes discrepancies and increases trust in the numbers. Cross‑functional collaboration also improves, as FP&A, accounting, and operations all work from the same source of truth—leading to faster decisions and a more agile finance organization.

Best Practices for Optimization

Unlocking the full potential of Oracle IPM Insights requires more than activation – it demands a disciplined approach. Follow these best practices to maximize value:

  1. Define Insight Scope Strategically – Configure Insight Definitions for specific data slices aligned with business priorities to keep insights actionable.
  2. Incorporate Calendars & Event Context – Annotate insights with business events to distinguish expected fluctuations from true anomalies.
  3. Embed Insights into Everyday Workflows – Use Smart View and the Insights dashboard to make insights accessible where planners work.
  4. Use Narratives to Strengthen Commentary and Executive Reporting – Incorporate AI‑generated explanations into management decks, close packages, and forecast summaries to improve speed and consistency. This reduces time spent drafting commentary while increasing clarity and precision.
  5. Establish Governance & Ongoing Review – Create a monitoring team to fine-tune thresholds, validate models, and drive continuous improvement.

 

Future Trends in Enterprise Performance Management

  1. Driver-Based Forecasting with AutoMLx – Trends are shifting toward intelligent, driver-based forecasting. Oracle EPM leads with Advanced Predictions powered by AutoMLx, enabling multivariate models that incorporate key business drivers for greater accuracy and transparency.
  2. Conversational AI Agents for Finance – AI-driven assistants will allow finance teams to query insights in natural language and receive instant recommendations – making planning more intuitive and collaborative. This shift will not only accelerate decision‑making but also empower organizations to respond to market changes with greater agility, improving both financial accuracy and overall business performance.
  3. Self-Learning Models and Continuous Improvement – Future models will learn from user actions and outcomes, improving accuracy over time. This adaptive capability ensures businesses stay ahead in an ever-changing market.

 

Why Insights Matter

The ability to detect, predict, and act on insights is no longer optional – it’s a competitive and existential necessity. In an environment where markets shift rapidly, budgets tighten, and expectations for accuracy increase, finance teams must operate with real‑time intelligence rather than backward‑looking reports. Organizations that can rapidly translate data into decisions gain measurable advantages in agility, cost control, and strategic alignment.

Oracle IPM Insights equips finance teams with the advanced analytics, automation, and predictive capabilities needed to stay ahead of uncertainty. By delivering timely insights directly within planning, close, and reporting workflows, IPM Insights turns raw data into actionable intelligence—empowering teams to respond faster, improve forecast reliability, and drive stronger business outcomes. The result is a finance function that doesn’t just report on performance—it actively shapes it, becoming a strategic partner to the entire enterprise.

 

Ready to unlock the power of Oracle IPM Insights? Leave a comment or contact us to explore how Oracle EPM Cloud can help you anticipate change, optimize performance, and lead with confidence.

 

]]>
https://blogs.perficient.com/2026/02/20/insight-into-oracle-cloud-ipm-insights/feed/ 0 390542
Perficient Earns Databricks Brickbuilder Specialization for Healthcare & Life Sciences https://blogs.perficient.com/2026/02/18/perficient-earns-databricks-brickbuilder-specialization-for-healthcare-life-sciences/ https://blogs.perficient.com/2026/02/18/perficient-earns-databricks-brickbuilder-specialization-for-healthcare-life-sciences/#respond Wed, 18 Feb 2026 17:59:11 +0000 https://blogs.perficient.com/?p=390471

Perficient is proud to announce that we have earned the Databricks Brickbuilder Specialization for Healthcare & Life Sciences, a distinction awarded to select partners who consistently demonstrate excellence in using the Databricks Data Intelligence Platform to solve the industry’s most complex data challenges.

This specialization reflects both our strategic commitment to advancing health innovation through data and AI, and our proven track record of helping clients modernize with speed, responsibility, and measurable outcomes.

Our combined expertise in Healthcare & Life Sciences and the Databricks platform uniquely positions us to help customers achieve meaningful impact, whether improving patient outcomes or accelerating the clinical data review process. This specialization underscores the strength of our capabilities across both the platform and within this highly complex industry. – Nick Passero, Director Data and Analytics

How We Earned the Specialization

Achieving the Databricks Brickbuilder Specialization requires a deep and sustained investment in technical expertise, customer delivery, and industry innovation.

Technical Expertise: Perficient met Databricks’ stringent certification thresholds, ensuring that dozens of our data engineers, architects, and AI practitioners maintain active Pro and Associate certifications across key domains. This level of technical enablement ensures that our teams not only understand the Databricks platform but can also apply it to clinical trials, healthcare claims management, and real-world evidence, leading to AI-driven decisioning.

Delivery Excellence: Equally important, we demonstrated consistent success delivering in-production healthcare and life sciences use cases. From enhancing omnichannel member services to migrating complex Hadoop workloads to Databricks for a large Midwest payer, building a modern lakehouse on Azure for a leading children’s research hospital, and modernizing enterprise data architecture with Lakehouse and DataOps for a national payer, our client work demonstrates both scale and repeatability.

Thought Leadership: Our achievement also reflects ongoing thought leadership, another core requirement of Databricks’ specialization framework. Perficient continues to publish research-driven perspectives (Agentic AI Closed-Loop Systems for N-of-1 Treatment Optimization, and Agentic AI for RealTime Pharmacovigilance) that help executives navigate the evolving interplay of AI, regulatory compliance, clinical innovation, and operational modernization across the industry.

Why This Matters to You

Healthcare and life sciences organizations face unprecedented complexity as they seek to unify and activate data from sensitive datasets (EMR/EHR, imaging, genomics, clinical trial data). Leaders must make decisions that balance innovation with security, scale with precision, and AI-driven speed with regulatory responsibility.

The Databricks specialization matters because it signals that Perficient has both the technical foundation and the industry expertise to guide organizations through this transformation. Whether the goal is to accelerate drug discovery, reduce clinical trial timelines, personalize therapeutic interventions, or surface real-time operational insights, Databricks provides the engine and Perficient provides the strategy, implementation, and healthcare context needed to turn potential into outcomes.

A Thank You to Our Team

This accomplishment is the result of extraordinary commitment across Perficient’s Databricks team. Each certification earned, each solution architected, and each successful client outcome reflects the passion and expertise of people who believe deeply in improving healthcare through better data.

We’re excited to continue shaping the future of healthcare and life sciences with Databricks as a strategic partner.

To learn more about our Databricks practice and how we support healthcare and life sciences organizations, visit our partner page.

 

]]>
https://blogs.perficient.com/2026/02/18/perficient-earns-databricks-brickbuilder-specialization-for-healthcare-life-sciences/feed/ 0 390471
Agentforce Financial Services Use Cases: Modernizing Banking, Wealth, and Asset Management https://blogs.perficient.com/2026/02/18/agentforce-financial-services-use-cases-modernizing-banking-wealth-and-asset-management/ https://blogs.perficient.com/2026/02/18/agentforce-financial-services-use-cases-modernizing-banking-wealth-and-asset-management/#respond Wed, 18 Feb 2026 15:16:42 +0000 https://blogs.perficient.com/?p=390461

Editor’s Note: We are thrilled to feature this guest post by Tracy Julian, Financial Services Industry Lead & Architect at Perficient. With over 20 years of experience across retail banking, wealth management, and fintech, Tracy is a systems architect who specializes in turning complex data hurdles into high-velocity, future-ready AI solutions.

Executive Summary 

Financial services organizations face mounting pressure to deliver highly personalized client experiences while navigating increasingly complex regulatory requirements. At the same time, relationship managers and advisors spend a significant portion of their week searching for client information across disconnected systems. This administrative burden reduces time available for strategic client engagement and limits the ability to proactively identify cross-sell, retention, and risk management opportunities. 

Agentforce, Salesforce’s enterprise-grade agentic AI platform, addresses these challenges head-on. By automating data aggregation, surfacing real-time insights, and embedding compliance-aware intelligence directly into workflows, Agentforce helps financial services teams operate more efficiently and intelligently. 

This article explores real-world Agentforce financial services use cases and provides a practical implementation roadmap for organizations evaluating AI agent deployment. 

Key Takeaways 

  • Agentforce reduces client research time through automated, multi-source data aggregation 
  • Four proven Agentforce financial services use cases across banking, wealth, and asset management 
  • A 4–6 week implementation timeline is achievable with proper planning 
  • Built-in compliance automation aligned with SOC 2 and financial services standards 

The Challenge: Data Fragmentation in Modern Financial Services 

Financial services teams across B2B banking, wealth management, registered investment advisors (RIAs), and workplace services face a shared set of challenges that directly impact revenue, efficiency, and client satisfaction. 

  1. Information Silos Create Operational Inefficiency
  • Client data is scattered across multiple Salesforce orgs, legacy core banking systems, portfolio management platforms, and document repositories 
  • Financial advisors manage information across many different systems 
  • There is no single, unified view of client relationships, risk indicators, or cross-sell opportunities 
  2. Time-Intensive Meeting Preparation
  • Client-facing teams spend disproportionate time on administrative tasks rather than strategic interactions 
  • Relationship managers manually compile company summaries, account histories, and risk assessments before each meeting 
  • Information retrieval delays slow response times to client inquiries 
  3. Escalating Regulatory Complexity
  • Increasing regulations around data privacy (GDPR, CCPA, GLBA), personally identifiable information (PII), and record retention 
  • Manual compliance reviews create operational bottlenecks and increase the risk of human error 
  • Document scanning for sensitive data (SSNs, account numbers, tax IDs) is often reactive rather than preventive 
  4. Missed Revenue Opportunities
  • Without unified intelligence, leaders struggle to identify upsell, cross-sell, and retention risks in real time 
  • Fragmented data limits proactive account planning and relationship management 
  • Inconsistent visibility into consultant and intermediary relationships reduces partner channel effectiveness 

Real-World Example: Multi-Org Complexity 

A Perficient financial services client operates 20+ production Salesforce orgs across marketing, sales, and service. This complexity has resulted in: 

  • Significant manual effort by relationship managers searching for client information 
  • Inconsistent data interpretation across sales and service teams 
  • Compliance vulnerabilities caused by manual PII identification processes 
  • Delayed opportunity identification due to siloed account intelligence 

This scenario is common across enterprise financial services organizations—and represents one of the most compelling Agentforce financial services use cases. 

How Salesforce Agentforce Helps 

Agentforce is Salesforce’s next-generation AI platform, combining: 

  • Natural language processing (NLP) for conversational interfaces 
  • Multi-source data aggregation across Salesforce objects, external systems, and documents 
  • Workflow automation triggered by agent-driven insights and actions 
  • Compliance-aware processing with PII detection and security controls 
  • Real-time intelligence generated from both structured and unstructured data 

Unlike traditional chatbots or rule-based automation, Agentforce agents: 

  • Understand context and intent from natural language queries 
  • Access and synthesize information from multiple data sources simultaneously 
  • Generate actionable insights and recommendations—not just raw data 
  • Learn from user interactions to improve relevance over time 
  • Integrate seamlessly with existing Salesforce workflows and third-party systems 

Agentforce leverages Salesforce Einstein AI, Data 360 for unified data access, and the Hyperforce infrastructure to deliver enterprise-grade security, compliance, and trust for financial services use cases. 

Four High-Impact Agentforce Financial Services Use Cases 

The following Agentforce use cases have been developed specifically for financial services and can typically be implemented within four weeks. 

Client Intelligence Agent: Gain 360-Degree Relationship Insights 

The Client Summary Agent consolidates comprehensive client intelligence in seconds, eliminating manual data gathering. It aggregates: 

  • Company & Contact Details: Legal entity structure, key decision-makers, organizational hierarchy 
  • Financial Position: Account balances, asset allocation, liabilities, portfolio performance 
  • Relationship Health: Engagement scores, activity frequency, NPS data, retention risk indicators 
  • Opportunity Pipeline: Active deals, proposal status, estimated close dates, win probability 
  • Service History: Open and closed cases, resolution times, satisfaction ratings 
  • Interaction Timeline: Meetings, calls, emails, and all historical touchpoints 

Business Outcome
Relationship managers can prepare for meetings faster, personalize conversations, and proactively identify engagement and retention risks. Time previously spent gathering data is redirected to strategic client interactions. This represents one of the foundational Agentforce financial services use cases. 

Account Relationship Agent: Manage Complex Accounts & Client Risk 

For firms that work with consultants, brokers, or intermediaries, the Account Relationship Agent provides a unified view of partner relationships by consolidating: 

  • Partner Profile: Firm details, key contacts, AUM/AUA influenced, areas of specialization 
  • Referral History: Opportunities sourced, conversion rates, deal size, revenue attribution 
  • Engagement Metrics: Meeting cadence, co-marketing activity, webinar participation, content engagement 
  • Pipeline Analysis: Active referrals by stage, forecasted revenue, deal aging 
  • Collaboration Activity: Shared plans, joint calls, tasks, and communication history 

Business Outcome
Sales teams gain clarity into partner performance and potential, enabling better territory planning, stronger collaboration, and more strategic channel investment. 

Client Prospect Agent: Optimize Sales Intelligence & Next Best Action 

The Client Prospect Agent transforms raw data into actionable sales intelligence by analyzing: 

  • Company Intelligence: Industry position, competitive landscape, growth signals, news mentions 
  • Buying Signals: Website engagement, content consumption, event attendance, RFP activity 
  • Relationship Mapping: Existing connections, decision-makers, organizational structure 
  • Whitespace Analysis: Current services versus product catalog, cross-sell and upsell opportunities 
  • Next Best Actions: Prioritized recommendations based on engagement and firmographic data 

Business Outcome
Sales teams can prioritize accounts more effectively, uncover whitespace opportunities, and focus on actions that accelerate deal progression. This Agentforce financial services use case is most beneficial for acquisition teams. 

Document Scanning Agent: Automate PII Compliance Safeguards 

Regulatory compliance is non-negotiable in financial services. The Document Scanning Agent provides automated, pre-upload document scanning for: 

  • Social Security Numbers (SSNs): Multiple formats (XXX-XX-XXXX, XXXXXXXXX) 
  • Tax Identification Numbers (TINs/EINs): Business and individual identifiers 
  • Account Numbers: Bank, credit card, and brokerage accounts 
  • Passport Numbers: Government-issued identification 
  • Custom PII Patterns: Configurable regex for institution-specific data types 
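To make the "configurable regex" idea concrete, here is a minimal, hypothetical sketch of pattern-based PII detection in the spirit of what a pre-upload scan performs. The pattern names and the `scanDocument` helper are illustrative assumptions, not Agentforce APIs, and real scanners use far more robust patterns and validation:

```javascript
// Hypothetical, configurable PII patterns. A real Document Scanning Agent
// would use hardened patterns plus contextual validation to reduce false positives.
const PII_PATTERNS = {
  ssn: /\b\d{3}-\d{2}-\d{4}\b|\b\d{9}\b/g, // XXX-XX-XXXX or XXXXXXXXX
  ein: /\b\d{2}-\d{7}\b/g,                 // Employer Identification Number
  creditCard: /\b(?:\d[ -]?){13,16}\b/g    // loose card-number match
};

// Returns the names of every pattern found in the document text.
function scanDocument(text, patterns = PII_PATTERNS) {
  return Object.entries(patterns)
    .filter(([, regex]) => {
      regex.lastIndex = 0; // reset stateful global regex before reuse
      return regex.test(text);
    })
    .map(([name]) => name);
}
```

A pre-upload hook could then block or flag any document where `scanDocument` returns a non-empty list.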

Business Outcome
Organizations reduce human error, strengthen compliance posture, and protect sensitive client data—automatically and proactively. 

Getting Started: Next Steps for Your Organization 

If your organization is evaluating Agentforce, consider the following steps: 

  1. Assess Your Current State
  • Map data fragmentation across systems and within the Salesforce org across objects 
  • Quantify time spent on manual data gathering 
  • Identify high-impact pain points 
  • Establish baseline metrics for measuring improvement 
  2. Define Success Criteria
  • Business outcomes: Efficiency gains, revenue impact, compliance risk reduction 
  • Adoption targets: Percentage of users actively engaging with agents 
  • Technical performance: Accuracy, response time, data completeness 
  • ROI expectations: Payback period and time to value 
  3. Prioritize Use Cases
  • Identify quick-win Agentforce financial services use cases that deliver value in 30–60 days 
  • Assess team readiness and change appetite 
  • Evaluate data availability and quality 
  • Align use cases to regulatory risk and compliance priorities 
  4. Engage Expert Partners
  • Schedule a discovery workshop with Perficient 
  • Review reference architectures and live demonstrations 
  • Develop a phased implementation roadmap 
  • Establish governance, KPIs, and success metrics 

AI Agents as a Competitive Advantage in Financial Services 

The financial services industry is at an inflection point. Organizations that successfully deploy Agentforce financial services use cases to augment human expertise will gain durable competitive advantages, including: 

  • Superior client experiences through faster, more personalized, and proactive service 
  • Improved operational efficiency by shifting effort from administration to relationship management 
  • Revenue growth through earlier identification of cross-sell, upsell, and retention opportunities 
  • Increased compliance confidence with automated safeguards that reduce regulatory risk 
  • Data-driven decision-making powered by unified, real-time intelligence 

Agentforce represents Salesforce’s most significant AI advancement for financial services—combining trusted CRM data with cutting-edge agentic AI capabilities. Organizations that move quickly, but strategically, will establish lasting advantages in client relationships, operational efficiency, and market leadership. 

Meet Your Expert 


Tracy Julian
Financial Services Industry Lead & Architect, Salesforce Practice 

Tracy brings more than 20 years of financial services experience in retail banking, wealth management, capital markets, and fintech, spanning both industry and consulting roles with firms including the Big 4 across the U.S. and EMEA. 

She leads Perficient’s financial services industry efforts within the Salesforce practice, partnering with clients to define the vision and goals behind their transformation. She then uses that foundation to build smarter, future-ready, business-first solutions that scale across strategy, cloud migration, and innovation in marketing, sales, and service. 

A systems architect by trade, Tracy is known for aligning teams around a shared vision and solving complex problems with measurable impact. 

]]>
https://blogs.perficient.com/2026/02/18/agentforce-financial-services-use-cases-modernizing-banking-wealth-and-asset-management/feed/ 0 390461
An Ultimate Guide to the Toast Notification in Salesforce LWC https://blogs.perficient.com/2026/02/18/an-ultimate-guide-to-the-toast-notification-in-salesforce-lwc/ https://blogs.perficient.com/2026/02/18/an-ultimate-guide-to-the-toast-notification-in-salesforce-lwc/#respond Wed, 18 Feb 2026 07:56:51 +0000 https://blogs.perficient.com/?p=390323

Hello Trailblazers!

Take a scenario where you are creating a record in Salesforce and receive no confirmation of whether the record was created successfully, or whether an alert or warning was raised. For exactly this purpose, Salesforce provides a feature called “Toast Notifications”.

Toast notifications are an effective way to provide users with feedback about their actions in Salesforce Lightning Web Components (LWC). They appear as pop-up messages at the top of the screen and automatically fade away after a few seconds.

So in this blog post, we are going to learn everything about Toast Notifications and their types in Salesforce Lightning Web Components (LWC), along with real-world examples.

So, let’s get started…

 

In Lightning Web Components (LWC), you can display Toast Notifications using the Lightning Platform’s ShowToastEvent. Salesforce provides four types of toast notifications:

  1. Success – Indicates that the operation was successful.
    • Example: “Record has been saved successfully.”
  2. Error – Indicates that something went wrong.
    • Example: “An error occurred while saving the record.”
  3. Warning – Warns the user about a potential issue.
    • Example: “You have unsaved changes.”
  4. Info – Provides informational messages to the user.
    • Example: “Your session will expire soon.”

 


 

Example Code for a Toast Notification in LWC:

import { ShowToastEvent } from 'lightning/platformShowToastEvent';

// Inside a component method (for example, after a successful record save):
const event = new ShowToastEvent({
    title: 'Success!',
    message: 'Record has been created successfully.',
    variant: 'success' // Can be 'success', 'error', 'warning', or 'info'
});
this.dispatchEvent(event); // Displays the toast at the top of the screen

So, here is an example of the Toast Notification:

[Screenshot: a success toast notification displayed at the top of the screen]

 

So this way, you can write toast notification code and make changes according to your requirements.
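If you fire toasts from several places in a component, it can help to centralize the parameter-building. The helper below is a hypothetical utility (not part of the LWC API) that falls back to the 'info' variant when an unrecognized value is passed, so a typo never produces a broken toast:

```javascript
// Hypothetical helper that builds the parameter object for ShowToastEvent.
// The variant whitelist matches the four types Salesforce supports.
const VALID_VARIANTS = ['success', 'error', 'warning', 'info'];

function buildToastParams(title, message, variant) {
  return {
    title,
    message,
    // Unknown variants quietly fall back to the neutral 'info' style.
    variant: VALID_VARIANTS.includes(variant) ? variant : 'info'
  };
}

// Usage inside a component:
// this.dispatchEvent(new ShowToastEvent(buildToastParams('Done', 'Record saved.', 'success')));
```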

In the next part of this blog series, we will explore what a success toast notification is and demonstrate how to implement it through a practical, real-world example.

Until then, Keep Reading !!

“Consistency is the quiet architect of greatness—progress so small it’s often unnoticed, yet powerful enough to reshape your entire future.”

Related Posts:

  1. Toast Notification in Salesforce
  2. Toast Event: Lightning Design System (LDS)

You Can Also Read:

1. Introduction to the Salesforce Queues – Part 1
2. Mastering Salesforce Queues: A Step-by-Step Guide – Part 2
3. How to Assign Records to Salesforce Queue: A Complete Guide
4. An Introduction to Salesforce CPQ
5. Revolutionizing Customer Engagement: The Salesforce Einstein Chatbot

 

]]>
https://blogs.perficient.com/2026/02/18/an-ultimate-guide-to-the-toast-notification-in-salesforce-lwc/feed/ 0 390323
Building a Marketing Cloud Custom Activity Powered by MuleSoft https://blogs.perficient.com/2026/02/12/building-a-marketing-cloud-custom-activity-powered-by-mulesoft/ https://blogs.perficient.com/2026/02/12/building-a-marketing-cloud-custom-activity-powered-by-mulesoft/#comments Thu, 12 Feb 2026 17:37:13 +0000 https://blogs.perficient.com/?p=390190

The Why…

Salesforce Marketing Cloud Engagement is incredibly powerful at orchestrating customer journeys, but it was never designed to be a system of record. Too often, teams work around that limitation by copying large volumes of data from source systems into Marketing Cloud data extensions—sometimes nightly, sometimes hourly—just in case the data might be needed in a journey. This approach works, but it comes at a cost: increased data movement, synchronization challenges, latency, and ongoing maintenance that grows over time.

Custom Activities, which are surfaced in Journey Builder, open the door to a different model. Instead of forcing all relevant data into Marketing Cloud ahead of time, a journey can request exactly what it needs at the moment it needs it. When you pair a Custom Activity with MuleSoft, Marketing Cloud can tap into real-time, orchestrated data across your enterprise—without becoming another place where that data has to live.

Example 1: Weather

Consider a simple example like weather-based messaging. Rather than pre-loading weather data for every subscriber into a data extension, a Custom Activity can call an API at decision time, retrieve the current conditions for a customer’s location, and immediately branch the journey or personalize content based on the response. The data is used once, in context, and never stored unnecessarily inside Marketing Cloud.
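The branching decision in that weather example can be sketched as a small pure function. This is illustrative only: the response shape (`precipitationChance`, `tempF`) and the branch names are assumptions, standing in for whatever the execute endpoint would return to Journey Builder at decision time:

```javascript
// Maps a weather API response (shape assumed) to a journey branch name.
// In practice this logic would live in the Custom Activity's execute endpoint,
// and the weather data would never be stored in a data extension.
function chooseWeatherBranch(weather) {
  if (weather.precipitationChance >= 0.6) return 'rainy-offer';
  if (weather.tempF >= 85) return 'hot-weather-offer';
  return 'default-path';
}
```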

Example 2: Enterprise Data

The same pattern becomes even more compelling with enterprise data. Imagine a post-purchase journey that needs to know the current status of an order, a shipment, or a service case stored in a system like Data 360. Instead of replicating that operational data into Marketing Cloud—and keeping it in sync—a Custom Activity can call MuleSoft, which in turn retrieves and aggregates the data from the appropriate back-end systems and returns only what the journey needs to proceed.

Example 3: URL Shortener for SMS (Real-Time)

While Marketing Cloud Engagement does provide its own form of a URL shortener, some companies want to use Bitly. Typically, in order to use a Bitly URL, we would have to move our logic to Server-Side JavaScript (SSJS) so the API call to Bitly could be made there, and then we could use the URL in our text message. SSJS forces us to use Automation Studio, which cannot run in real time and must be scheduled. It is important to note that being able to make API calls within the flow of a Journey is very powerful and helps meet more real-time use cases. With these Custom Activities, we can ask MuleSoft to call the Bitly API, which returns the shortened URL so it can be used in the email or SMS message.
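As a sketch of the MuleSoft side of the Bitly pattern: Bitly's v4 API exposes a shorten endpoint (POST https://api-ssl.bitly.com/v4/shorten) that takes a JSON body with the long URL. The wrapper function below is hypothetical; in a real flow, the access token and the HTTP call itself would be handled inside MuleSoft, never in Marketing Cloud:

```javascript
// Builds the request a MuleSoft flow might send to Bitly's v4 shorten endpoint.
// Endpoint and long_url body field follow Bitly's v4 API; the helper itself is illustrative.
function buildShortenRequest(longUrl, accessToken) {
  return {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${accessToken}`, // token managed in MuleSoft config
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ long_url: longUrl })
  };
}
```

The shortened URL from the response would then be returned to Journey Builder as an out argument for use in the SMS message.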

This is where MuleSoft truly shines. It acts as a clean abstraction layer between Marketing Cloud and your enterprise landscape, handling authentication, transformation, orchestration, and governance. Marketing Cloud stays focused on customer engagement, while MuleSoft owns the complexity of integrating with source systems. The result is a more scalable, real-time, and maintainable architecture—one that reduces data duplication, respects system boundaries, and enables richer, more contextual customer experiences.

The How….

So how does this actually work in practice? In the next section, we’ll walk through how a Marketing Cloud Custom Activity can call a MuleSoft API in the middle of a Journey, receive a response in real time, and use that data to drive decisions or personalization. We’ll focus on the key building blocks—what lives in Marketing Cloud, what belongs in MuleSoft, and how the two communicate—so you can see how this pattern comes together without turning Marketing Cloud into yet another integration layer.

Part 1 – Hosted Files

Every Marketing Cloud Custom Activity starts with hosted files. These files provide the user interface and configuration that Journey Builder interacts with, making them the foundation of the entire solution. At a minimum, this includes five main files/folders.

  1. index.html – This is what you see in Journey Builder when you click on the Custom Activity to configure it.
  2. config.json – This holds the MuleSoft endpoints to call and the output arguments that will be used.
  3. customActivity.js – The JavaScript running behind the index.html page.
  4. postmonger.js – The Postmonger library that lets the index.html page communicate with Journey Builder.
  5. A folder called images must exist and must contain a single icon.png image.  This image is shown within Journey Builder.

Blog Ca Files

These files tell Marketing Cloud how the activity behaves, what endpoints it uses, and how it appears to users when they drag it onto a journey. While the business logic ultimately lives elsewhere, within Mulesoft in our example, hosted files are what make the Custom Activity feel native inside Journey Builder.

In this pattern, hosted files are intentionally lightweight. Their primary responsibility is to capture configuration input from the marketer—such as which API operation to call, optional parameters, or behavior flags—and pass that information along when the journey executes. They are not responsible for complex transformations, orchestration, or direct system-to-system integrations. By keeping the hosted files focused on presentation and configuration, you reduce coupling with backend systems and make the Custom Activity easier to maintain, update, and reuse across different journeys.

GitHub is a convenient place to do a simple proof of concept if you want to try this yourself.  You can easily create these four files and one folder in a repo.  If you use GitHub, you do have to use the Pages functionality to make that repo publicly reachable.  This public URL will then be used when we configure the ‘Installed App’ in Marketing Cloud Engagement later.

In production, Custom Activity config.json and UI assets should be hosted on an enterprise‑grade HTTPS platform like Azure App Service, AWS CloudFront/S3, or Heroku—not GitHub.

One thing I had to overcome is that the config.json gets cached at the Marketing Cloud server level, as talked about in this post.  So when I had to make changes to my config.json, I would create a new folder (v2 / v3) in my repository and then use that path in the Component added to my Installed Package in Journey Builder.

Part 2 – API Server – Mulesoft

This is really the beauty here.  Instead of building API calls in SSJS that are hard to debug, difficult to scale, and hard to secure, we get to hand all of that off to an enterprise API platform like MuleSoft.  It really is the best of both worlds.  There are basically two main pieces on the MuleSoft side: A) five endpoints to develop and B) security.

The Five Endpoints.

Journey Builder uses four lifecycle endpoints to manage the activity and one execute endpoint to process each contact and return outArguments used for decisioning and personalization.

The five endpoints that have to be developed in Mulesoft are…

Endpoint    Called When             Per Contact?   Returns outArguments?
/save       User saves config       ❌             ❌
/validate   User publishes          ❌             ❌
/publish    Journey goes live       ❌             ❌
/execute    Contact hits activity   ✅             ✅
/stop       Journey stops           ❌             ❌

The save, validate, publish, and stop endpoints in MuleSoft need to return a 200 status code and, in the most basic example, can return an empty JSON body of {}.

The execute endpoint should also return a 200 status code and simple JSON for any outArguments, for example:  { "status": "myStatus" }
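To make the contract concrete, here is a minimal JavaScript stand-in for the five endpoints (the real implementation lives in MuleSoft flows; this is just a sketch of the request/response shapes). The inArguments handling mirrors the array-of-objects payload Journey Builder POSTs to execute:

```javascript
// Illustrative stand-in for the five endpoints MuleSoft implements.
// Journey Builder POSTs { inArguments: [...] } to /execute per contact.
function handleJourneyRequest(path, body) {
  switch (path) {
    case '/save':
    case '/validate':
    case '/publish':
    case '/stop':
      // Lifecycle endpoints only need to acknowledge the call.
      return { status: 200, body: {} };
    case '/execute': {
      // inArguments arrives as an array of single-key objects; merge them.
      const args = Object.assign({}, ...(body?.inArguments ?? []));
      // Whatever is returned here becomes the activity's outArguments.
      return {
        status: 200,
        body: { status: args.selectedField || 'DefaultStatus' },
      };
    }
    default:
      return { status: 404, body: {} };
  }
}
```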

The Security.

The first piece of security is configured in the config.json file.  There is a useJwt key that can be either true or false for each endpoint.  If it is true, MuleSoft will receive an encoded string signed with the JWT Signing Secret that was created from the Installed Package in Marketing Cloud.  If useJwt is false, MuleSoft will just receive the plain JSON.  For production-level work we should make sure useJwt is true.
We can also use an OAuth 2.0 Bearer Token.  We want to make sure that our MuleSoft endpoints respond only to calls coming from Marketing Cloud Engagement.

Part 3 – Journey Builder – Custom Activities

Once the configuration details are set up in the API server described in Part 2, creating the custom activity and adding it to the Journey is pretty quick.
  1. Go to the ‘Installed Package’ in setup and create a new app following these steps.
    1. When you add your ‘Component’ to the Installed App selecting ‘Customer Updates’ in the ‘Category’ drop-down worked for me.
    2. My ‘Endpoint URL’ had a format like this:  https://myname.github.io/my_repo_name/v3/
      Blog Ca Package
  2. Create a new Journey
  3. Your new Custom Activity will show up in the Components panel on the left-hand side.  Since we selected ‘Customer Updates’ in step 1 above, our ‘Send to Mulesoft V3a’ Custom Activity shows in that section.   The name under the icon comes from the config.json file.  The image is the icon.png from the images folder.
    Blog Jb View
  4. Once you drag your Custom Activity onto the Journey Builder page you will be able to click on it to configure it.
  5. The user interface from the index.html will display when you click on it so you can configure your Custom Activity.  Note that this user interface could be changed to collect whatever configuration needs to be collected.
    Blog Ca Indexpage
  6. When a ‘Done’ button is clicked on the page, the JavaScript runs and saves the configuration details into Journey Builder itself.  In my example the gray and blue ‘Done’ buttons are hooked to the same JavaScript and do the same thing.

Part 4 – How to use the Custom Activity

outArguments

Now that we have our Custom Activity configured and in our journey, the integration with MuleSoft becomes a configuration detail, which is great for admins.  In the config.json file there are two places where the outArguments are placed.
The first is in the arguments section towards the top.  Here I can provide a default value for my status field, which in this case is the very intuitive “DefaultStatus”.  🙂
"arguments": {
   "execute": {
     "inArguments": [],
     "outArguments": [
       {
         "status": "DefaultStatus"
       }
     ],
     "url": "https://mymuleAPI.partofurl.usa-e1.cloudhub.io/api/marketingCloud/execute",
     "useJwt": false,
     "timeout": 60000,
     "retryCount": 3,
     "retryDelay": 3000,
     "concurrentRequests": 5
   }
 },

The second place is lower in the config.json file, in the schema section, and describes the actual data type of the output variable.  We can see the status variable is a ‘Text’ field that has access = visible and direction = out.

"schema":{
      "arguments":{
          "execute":{
              "inArguments": [],
              "outArguments":[
                  {
                      "status":{
                          "dataType":"Text",
                          "isNullable":true,
                          "access":"visible",
                          "direction":"out"
                      }
                  }
              ]
          }
      }
  }

Note in the example below that I did not use typical status values like ‘Not Started’, ‘In Progress’, and ‘Done’.  That would have made more sense. 🙂  Instead I was running five records through my journey with various versions of my last name: Luschen, Luschen2, Luschen3, Luschen4 and Luschen5.  MuleSoft received these different spellings in the JSON being passed over, parsed them out of the incoming JSON, and then injected them into the response JSON in the status field.  This is what the incoming data extension looked like.

Blog De

An important JavaScript detail turned out to be setting the isConfigured flag to true in the customActivity.js file.  This makes sure Journey Builder understands the node has been configured when the journey is ‘Validated’ before it is ‘Activated’.

activity.metaData = activity.metaData || {};
activity.metaData.isConfigured = true;

Now that we have our ‘status’ field as an output from Mulesoft via the Custom Activity, I will describe how it can be used in either a Decision Split or some AmpScript.

Decision Split

The outArguments show up under the ‘Journey Data’ portion of the configuration screen.  Once you select the ‘status’ outArgument you configure the rest of the decision split like any other one you have built before.
Blog Ca Decision Split
Blog Ca Decision Split2

AmpScript

These outArguments are also available as send context attributes so they are easy to use in any manner you want within your AmpScript for either email or SMS personalization.
%%[
SET @status = AttributeValue("status")
]%%
%%=v(@status)=%%

The Wrap-up…

As the flexibility of these Custom Activities sinks in, you'll see it enables a lot of powerful patterns.  The more data we can surface to our marketing team, the more dynamic, personalized and engaging the content will become.  While we all see more campaigns and use cases being developed on the new Agentforce Marketing, we all know that Marketing Cloud Engagement has some legs to it yet.  I hope this post has given you some ideas to make your Marketing team look like heroes as they use Journey Builder to its fullest potential!

I want to thank my Mulesoft experts Anusha Danda and Jana Pagadala for all of their help!

Please connect with me on LinkedIn for more conversations!  I am here to help make you a hero with your next Salesforce project.

Example Files…

Config.JSON

{  
  "workflowApiVersion": "1.1",
  "metaData": {
    "icon": "images/icon.png",
    "category": "customer",
    "isConfigured": true,
    "configOnDrop": false
  },
  "type": "REST",
  "lang": {
    "en-US": {
      "name": "Send to MuleSoft V3a",
      "description": "Calls MuleSoft to orchestrate downstream systems V3a."
    }
  },
  "arguments": {
    "execute": {
      "inArguments": [],
      "outArguments": [
        {
          "status": "DefaultStatus"
        }
      ],
      "url": "https://myMuleAPI.rajrd4-1.usa-e1.cloudhub.io/api/marketingCloud/execute",
      "useJwt": true,
      "timeout": 60000,
      "retryCount": 3,
      "retryDelay": 3000,
      "concurrentRequests": 5
    }
  },
  "configurationArguments": {
    "applicationExtensionKey": "MY_KEY_ANYTHING_I_WANT_MULESOFT_TEST",
    "save":    { "url": "https://myMuleAPI.rajrd4-1.usa-e1.cloudhub.io/api/marketingCloud/save",    "useJwt": true },
    "publish": { "url": "https://myMuleAPI.rajrd4-1.usa-e1.cloudhub.io/api/marketingCloud/publish", "useJwt": true },
    "validate":{ "url": "https://myMuleAPI.rajrd4-1.usa-e1.cloudhub.io/api/marketingCloud/validate","useJwt": true },
    "stop":    { "url": "https://myMuleAPI.rajrd4-1.usa-e1.cloudhub.io/api/marketingCloud/stop",    "useJwt": true }
  },
  "userInterfaces": {
    "configModal": { "height": 480, "width": 480 }
  },
  "schema":{
      "arguments":{
          "execute":{
              "inArguments": [],
              "outArguments":[
                  {
                      "status":{
                          "dataType":"Text",
                          "isNullable":true,
                          "access":"visible",
                          "direction":"out"
                      }
                  }
              ]
          }
      }
  }
}

Index.html

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8" />
  <title>Terry – JB → Mule Custom Activity</title>
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <style>
    body { font-family: system-ui, -apple-system, Segoe UI, Roboto, Arial, sans-serif; margin: 24px; }
    label { display:block; margin-top: 16px; font-weight:600; }
    input, select, button { padding: 8px; font-size: 14px; }
    button { margin-top: 20px; }
    .hint { color:#666; font-size:12px; }
  </style>
</head>
<body>
  <h2>Send to MuleSoft – Custom Activity</h2>
  <p class="hint">Configure the API URL and (optionally) bind a Journey field.</p>

  <label for="apiUrl">MuleSoft API URL</label>
  <input id="apiUrl" type="url" placeholder="https://api.example.com/journey/execute" style="width:100%" />

  <label for="fieldPicker">Bind a field from Entry Source (optional)</label>
  <select id="fieldPicker">
    <option value="">— none —</option>
  </select>

  <button id="done">Done</button>

  <!-- Postmonger must be local in your repo -->
  <script src="./postmonger.js"></script>
  <!-- Your Postmonger client logic -->
  <script src="./customActivity.js?v=2026-02-02v1"></script>
</body>
</html>

 

CustomActivity.js

/* global Postmonger */
(function () {
  'use strict';

  // Create the Postmonger session (bridge to Journey Builder)
  const connection = new Postmonger.Session();

  // Journey Builder supplies this payload when we call 'ready'
  let activity = {};
  let schema = [];
  let pendingSelectedField = null;  // holds saved token until options exist

  document.addEventListener('DOMContentLoaded', () => {
    // Listen to JB lifecycle events
    connection.on('initActivity', onInitActivity);
    connection.on('requestedTokens', onTokens);
    connection.on('requestedEndpoints', onEndpoints);
    connection.on('requestedSchema', onRequestedSchema); // common pattern in field pickers
    connection.on('clickedNext', onDone);

    // Signal readiness and request useful context
    connection.trigger('ready');
    connection.trigger('requestTokens');
    connection.trigger('requestEndpoints');

    // Optionally, ask for Entry Source schema (undocumented but widely used in the field)
    connection.trigger('requestSchema');

    // Bind UI
    document.getElementById('done').addEventListener('click', onDone);
  });

  function onInitActivity (payload) {
    activity = payload || {};
    // Re-hydrate UI if the activity is being edited
    try {
      const args = (activity.arguments?.execute?.inArguments || [])[0] || {};
      if (args.apiUrl) document.getElementById('apiUrl').value = args.apiUrl;
      if (args.selectedField) document.getElementById('fieldPicker').value = args.selectedField;
      pendingSelectedField = args.selectedField;
    } catch (e) {}
  }

  function onTokens (tokens) {
    // If you ever need REST/SOAP tokens, they arrive here
    // console.log('JB tokens:', tokens);
  }

  function onEndpoints (endpoints) {
    // REST base URL for BU, if you need it
    // console.log('JB endpoints:', endpoints);
  }

  function onRequestedSchema (payload) {
    schema = payload?.schema || [];
    const select = document.getElementById('fieldPicker');

    // Keep current value if re-opening
    const current = select.value;
    // Reset options (leave the first '— none —')
    select.length = 1;

    // Populate with Entry Source keys (e.g., {{Event.APIEvent-UUID.Email}})
    schema.forEach(col => {
      const opt = document.createElement('option');
      opt.value = `{{${col.key}}}`;
      opt.textContent = col.key.split('.').pop();
      select.appendChild(opt);
    });

    if (current) select.value = current;
    if (pendingSelectedField) select.value = pendingSelectedField;
    
  }

  function onDone () {
    const apiUrl = document.getElementById('apiUrl').value?.trim() || '';
    const selectedField = document.getElementById('fieldPicker').value || '';

    // Validate minimal config
    if (!apiUrl) {
      alert('Please provide a MuleSoft API URL.');
      return;
    }
    // alert(selectedField);

    // Build inArguments that JB will POST to /execute at run time
    const inArguments = [{
      apiUrl,            // static value from UI
      selectedField      // optional mustache ref to Journey Data
    }];

    // Mutate the activity payload we received and hand back to JB
    activity.arguments = activity.arguments || {};
    activity.arguments.execute = activity.arguments.execute || {};
    activity.arguments.execute.inArguments = inArguments;

    activity.metaData = activity.metaData || {};
    activity.metaData.isConfigured = true;

    // Tell Journey Builder to save this configuration
    connection.trigger('updateActivity', activity);
  }
})();

 

EDS Adventures – Integrating External Data and Building Custom Feature Blocks
https://blogs.perficient.com/2026/02/11/eds-adventures-integrating-external-data-and-building-custom-feature-blocks/
Wed, 11 Feb 2026 16:23:43 +0000

In Edge Delivery Services, you have good options for putting together engaging content. Adobe’s block collection has a considerable number of content shapes, providing a good base or starting point for your project. It is similar in purpose to the Sling + HTL-driven components provided by WCM Core Components, but very different in design: EDS provides a simpler process for creating authorable content, backed by an architecture built for fast delivery at the edge. EDS blocks can enable features similar to what a Sling-driven component might deliver. In this post, I’ll walk through how to build a custom block with unique capabilities and integrate third-party APIs, while keeping everything at the edge!

What We Will Build

Defining the Block

We’ll develop a block that represents a process for retrieving data and using it to directly change rendered output. We’re using a simple use case for demonstration, but this technique could be used for API data retrieval from any database, data warehouse, or repository.

The block we’re developing is for a fictional paint company’s color previewer. It allows users to preview different paint colors in a fictional coffee shop. This type of content would be useful for customers wanting to visualize how the paint colors might look in their real-life home or business.

The paint colors will be provided from an API, internally managed by our fictional paint company. The block consumes this data and uses it to render swatches. Upon click or tap of a swatch color, the walls in the image will update to render the selected color.

 

Most of this can be contained in a single GitHub repository, based on Adobe’s EDS boilerplate repo. The API will be provided as a Cloudflare worker (as I said, keeping everything at the edge).

 

It’s assumed that you already have an established AEM EDS project, with a provisioned cloud service tenant, programs, environments, and deploy pipelines. Please find details on setting up an EDS site and the Universal Editor here: https://www.aem.live/developer/ue-tutorial.

Architecture Quick Summary

We’re leveraging the pattern of authoring EDS pages in the AEM as a cloud service author tier, and publishing to EDS preview and publish tiers. All third-party API requests happen client-side.

High Level EDS and 3rd Party API View

Block Definition and Model

For an EDS project, particularly one based on Adobe’s EDS boilerplate, 3 key files are needed for defining a block and where it may be authored.

The component-definition.json file defines a block’s display name, id, resource type, and the name of its data model. For our block, we need to add the following object to the array in this file:

{
  "title": "Paint Room Preview",
  "id": "paint-room-preview",
  "plugins": {
    "xwalk": {
      "page": {
        "resourceType": "core/franklin/components/block/v1/block",
        "template": {
          "name": "Paint Room Preview",
          "model": "paint-room-preview"
        }
      }
    }
  }
}

The component-models.json file describes the block’s data model and authorable field types. For our block, we need to add the following to the array in this file:

{
  "id": "paint-room-preview",
  "fields": [
    {
      "component": "reference",
      "valueType": "url",
      "name": "baseImage",
      "label": "Paint Preview Base Image",
      "description": "The base image to recolor.",
      "multi": false
    },
    {
      "component": "reference",
      "valueType": "url",
      "name": "maskImage",
      "label": "Paint Preview Mask Image",
      "description": "Black/white mask defining which areas to recolor.",
      "multi": false
    },
    {
      "component": "reference",
      "valueType": "url",
      "name": "shadingImage",
      "label": "Paint Preview Shading Image",
      "description": "Shading image defining where to apply lights and shadows.",
      "multi": false
    }
  ]
 }

This configuration defines 3 image selection fields, allowing authors to pick one image as the base image, a layer mask version of that base image, and a shading version of that base image. This base image is changed by the color selection, with the colors applied in the specific areas defined by the mask, namely, the room’s walls. The shading image ensures the existing shadows and highlights are retained, so nothing is flattened or washed out. These 3 images are used by our block script to build a composite image based on the color selection. To the user, the paint color changes as if the walls were always the selected color.

Relating this to Sling, the component-filters.json file is akin to a responsivegrid/layout-container allowed-components policy. Our block id is “paint-room-preview”; adding it to any block’s array of components allows our block to be added to that section of a page. This is sensible for blocks designed to contain other blocks, such as sections, lists, embeds, carousels, etc. We’ll add “paint-room-preview” to the section block’s filter list:

{
  "id": "section",
  "components": [
    "text",
    "image",
    "button",
    "title",
    "hero",
    "cards",
    "columns",
    "fragment",
    "paint-room-preview"
  ]
},

Block Functionality

Ok, now for the block itself, we need to create a JavaScript and CSS file. We’ll also create a helper method in the scripts/aem.js file to abstract API calls and allow for better re-use. In the project’s blocks folder, create a new folder named paint-room-preview. Then, in this folder, create a new file called paint-room-preview.js with the following contents:

import { fetchFromApi } from '../../scripts/aem.js';

export default async function decorate(block) {
  const COLORS_URL = 'https://yourdomain.com/colorapi/colors.json';
  const PAGE_SIZE = 30;
  const VISIBLE = 5;

  function ensureMarkup() {
    let root = block.querySelector('.paint-room-preview');
    if (!root) {
      root = document.createElement('div');
      root.className = 'paint-room-preview';
      block.appendChild(root);
    }

    let canvas = root.querySelector('#room-canvas');
    if (!canvas) {
      canvas = document.createElement('canvas');
      canvas.id = 'room-canvas';
      root.appendChild(canvas);
    }

    let nav = root.querySelector('.bm-nav');
    if (!nav) {
      nav = document.createElement('div');
      nav.className = 'bm-nav';
      nav.innerHTML = `
        <button id="bm-prev">Prev</button>
        <div id="bm-colors"></div>
        <button id="bm-next">Next</button>
      `;
      root.appendChild(nav);
    }
    return root;
  }

  const root = ensureMarkup();

  function findImageFromMarkup(prop) {
    const img = block.querySelector(`img[data-aue-prop="${prop}"]`);
    return img ? img.getAttribute('src') : '';
  }

  const baseImage = (block.dataset.baseImage?.trim())
    || (root.dataset.baseImage?.trim())
    || findImageFromMarkup('baseImage') || '';

  const maskImage = (block.dataset.maskImage?.trim())
    || (root.dataset.maskImage?.trim())
    || findImageFromMarkup('maskImage') || '';

  const shadingImage = (block.dataset.shadingImage?.trim())
    || (root.dataset.shadingImage?.trim())
    || findImageFromMarkup('shadingImage') || '';

  if (!baseImage || !maskImage || !shadingImage) {
    root.innerHTML = `
      <div style="border:1px dashed #ddd;padding:12px;border-radius:6px;color:#666;">
        Paint Room Preview requires Base Image, Mask Image, and Shading Image.
      </div>`;
    return;
  }

  const canvas = root.querySelector('#room-canvas');
  const ctx = canvas.getContext('2d');
  if (!ctx) return;

  const prevBtn = root.querySelector('#bm-prev');
  const nextBtn = root.querySelector('#bm-next');
  const colorsContainer = root.querySelector('#bm-colors');

  canvas.style.width = '100%';
  colorsContainer.style.display = 'flex';
  colorsContainer.style.gap = '10px';
  colorsContainer.style.flexWrap = 'wrap';
  colorsContainer.style.justifyContent = 'center';

  function loadImage(src) {
    return new Promise((resolve, reject) => {
      const img = new Image();
      img.crossOrigin = 'anonymous';
      img.onload = () => resolve(img);
      img.onerror = () => reject(new Error(`Failed loading image ${src}`));
      img.src = src;
    });
  }

  let imgBase;
  let imgMask;
  let imgShade;
  try {
    [imgBase, imgMask, imgShade] = await Promise.all([
      loadImage(baseImage),
      loadImage(maskImage),
      loadImage(shadingImage),
    ]);

    block.querySelectorAll('img[data-aue-prop]').forEach((img) => {
      const wrap = img.closest('picture,div') || img;
      wrap.style.display = 'none';
    });
  } catch (e) {
    // eslint-disable-next-line no-console
    console.error(e);
    root.innerHTML = '<div style="color:#b00">Error loading images.</div>';
    return;
  }

  canvas.width = imgBase.width;
  canvas.height = imgBase.height;
  ctx.drawImage(imgBase, 0, 0);

  function getMaskData() {
    const temp = document.createElement('canvas');
    temp.width = canvas.width;
    temp.height = canvas.height;
    const tctx = temp.getContext('2d');
    tctx.drawImage(imgMask, 0, 0, temp.width, temp.height);
    return tctx.getImageData(0, 0, temp.width, temp.height).data;
  }
  const maskData = getMaskData();

  function getShadeData() {
    const temp = document.createElement('canvas');
    temp.width = canvas.width;
    temp.height = canvas.height;
    const tctx = temp.getContext('2d');
    tctx.drawImage(imgShade, 0, 0, temp.width, temp.height);
    return tctx.getImageData(0, 0, temp.width, temp.height).data;
  }
  const shadeData = getShadeData();

  function hexToRgb(hex) {
    const h = hex.replace('#', '');
    return {
      r: parseInt(h.substring(0, 2), 16),
      g: parseInt(h.substring(2, 4), 16),
      b: parseInt(h.substring(4, 6), 16),
    };
  }
  function blend(base, target, amt) {
    return Math.round(base * (1 - amt) + target * amt);
  }

  function applyPaintHex(hex) {
    const tgt = hexToRgb(hex.startsWith('#') ? hex : `#${hex}`);

    // Step 1: reset base
    ctx.drawImage(imgBase, 0, 0);
    const imgData = ctx.getImageData(0, 0, canvas.width, canvas.height);
    const { data } = imgData;

    // Step 2: apply flat paint using alpha mask
    for (let i = 0; i < data.length; i += 4) {
      const maskVal = maskData[i] / 255;
      if (maskVal > 0.03) {
        data[i] = blend(data[i], tgt.r, maskVal);
        data[i + 1] = blend(data[i + 1], tgt.g, maskVal);
        data[i + 2] = blend(data[i + 2], tgt.b, maskVal);
      }
    }

    // Step 3: multiply wall shading (lighting pass)
    for (let i = 0; i < data.length; i += 4) {
      const maskVal = maskData[i] / 255;
      if (maskVal > 0.03) {
        const shade = shadeData[i] / 255; // grayscale
        data[i] = Math.round(data[i] * shade);
        data[i + 1] = Math.round(data[i + 1] * shade);
        data[i + 2] = Math.round(data[i + 2] * shade);
      }
    }

    ctx.putImageData(imgData, 0, 0);
  }

  let apiPage = 1;
  let pageIndex = 0;
  let colors = [];

  async function loadApiPage(p = 1) {
    try {
      const json = await fetchFromApi(COLORS_URL, {
        page: p,
        pageSize: PAGE_SIZE,
      });
      colors = Array.isArray(json.data) ? json.data : [];
      apiPage = json.page || p;
      pageIndex = 0;
    } catch (e) {
      // eslint-disable-next-line no-console
      console.error(e);
      colors = [];
    }
  }

  function renderSwatches() {
    colorsContainer.innerHTML = '';
    const start = pageIndex * VISIBLE;
    const slice = colors.slice(start, start + VISIBLE);

    if (slice.length === 0) {
      colorsContainer.innerHTML = '<div>No colors</div>';
      return;
    }

    slice.forEach((c, idx) => {
      const hex = (c.hex || '').replace('#', '');
      const name = c.name || `Color ${idx + 1}`;
      const sw = document.createElement('button');
      sw.style.width = '48px';
      sw.style.height = '48px';
      sw.style.borderRadius = '6px';
      sw.style.border = '1px solid #ddd';
      sw.style.background = `#${hex}`;
      sw.addEventListener('click', () => applyPaintHex(hex));

      const wrap = document.createElement('div');
      wrap.style.display = 'flex';
      wrap.style.flexDirection = 'column';
      wrap.style.alignItems = 'center';
      wrap.style.fontSize = '12px';
      wrap.style.color = '#333';
      wrap.style.minWidth = '64px';
      wrap.style.gap = '4px';

      const lbl = document.createElement('div');
      lbl.textContent = name;
      lbl.style.maxWidth = '72px';
      lbl.style.textOverflow = 'ellipsis';
      lbl.style.overflow = 'hidden';

      wrap.appendChild(sw);
      wrap.appendChild(lbl);
      colorsContainer.appendChild(wrap);
    });
  }

  if (pageIndex < 1) {
    prevBtn.disabled = true;
  }

  prevBtn.addEventListener('click', async () => {
    const maxIndex = Math.floor((colors.length - 1) / VISIBLE);

    if (pageIndex > 0) {
      pageIndex -= 1;
      renderSwatches();
      if (pageIndex < 1) {
        prevBtn.disabled = true;
      }
      if (pageIndex < maxIndex) {
        nextBtn.disabled = false;
      }
      return;
    }
    if (apiPage > 1) {
      await loadApiPage(apiPage - 1);
      pageIndex = Math.floor((colors.length - 1) / VISIBLE);
      renderSwatches();
    }
  });

  nextBtn.addEventListener('click', async () => {
    const maxIndex = Math.floor((colors.length - 1) / VISIBLE);

    if (pageIndex < maxIndex) {
      pageIndex += 1;
      if (pageIndex >= 1) {
        prevBtn.disabled = false;
      }
      renderSwatches();
      if (pageIndex === (maxIndex - 1)) {
        nextBtn.disabled = true;
      }
      return;
    }
    await loadApiPage(apiPage + 1);
    if (colors.length > 0) renderSwatches();
  });

  await loadApiPage(apiPage);
  renderSwatches();
  if (colors.length > 0 && colors[0].hex) applyPaintHex(colors[0].hex);
}

This script provides a decorate function, which is used to initialize and define the HTML DOM structure of the block. Within decorate we have methods and fields unique to this block’s custom functionality.

The ensureMarkup() method guarantees that required HTML is created, namely a root container div, a canvas element for our image previews, and a navigation div for paging through color swatches and selecting colors.

Several constants are also defined to ensure the required images are available. These attempt to pull the image URI values from the block’s data attributes, ensureMarkup’s containing div, or from img elements containing a specific attribute with a value matching the image type. If any one of the base, mask, or shading images is missing, the block renders text indicating that all are required. This is like Sling/HTL default content that may be rendered if a component instance is not yet authored.

Then details of the canvas are defined based on the Canvas API, to set up our photo manipulation in a 2D context.

The images are rendered from the previously defined URIs via a loadImage() method, which asynchronously loads an image and returns a Promise that resolves with the loaded image element. The base, mask, and shading images are simultaneously loaded. The author-selected images are hidden, as the canvas will render them as a combined composite image. The canvas width and height are defined, and the base image is drawn to it.
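A loadImage() helper of the kind described above can be sketched as follows (an illustrative version, not necessarily the block’s exact code):

```javascript
// Illustrative sketch of a Promise-based loadImage() helper.
// Resolves with the loaded <img> element, rejects on load failure.
function loadImage(src) {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => resolve(img);
    img.onerror = () => reject(new Error(`Failed to load image: ${src}`));
    img.src = src;
  });
}
```

The base, mask, and shading images can then be loaded simultaneously with Promise.all([loadImage(baseUrl), loadImage(maskUrl), loadImage(shadeUrl)]).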

The getMaskData() and getShadeData() methods extract the pixel data from the mask and shade images using the Canvas API context’s getImageData() method. This returns an array of RGBA-formatted pixels for each of these images. These are drawn in off-screen canvases, and the pixel arrays are computed and cached once, then reused for every color change.
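A simplified sketch of this compute-once, cache-forever pixel extraction (the names and structure here are illustrative) might look like:

```javascript
// Illustrative sketch of cached pixel extraction via an off-screen canvas.
// The RGBA array is computed once and reused for every color change.
let maskDataCache = null;

function getMaskData(maskImg, width, height) {
  if (maskDataCache) return maskDataCache; // reuse cached pixels
  const off = document.createElement('canvas');
  off.width = width;
  off.height = height;
  const ctx = off.getContext('2d');
  ctx.drawImage(maskImg, 0, 0, width, height);
  // data is a flat Uint8ClampedArray: [r, g, b, a, r, g, b, a, ...]
  maskDataCache = ctx.getImageData(0, 0, width, height).data;
  return maskDataCache;
}
```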

The hexToRgb() method converts hex color codes to RGB color values. The blend() method performs a smooth blending between a base and target value. These are each used in the applyPaintHex() method, which is where the key functionality takes place for painting! The base image is redrawn to obtain its pixel data (again as an array of RGBA pixels), and the mask data is used to determine which parts of the base image are “paintable”.

The blend() method is called to mix the original base image pixel data with the selected paint color, within the paintable areas derived from the mask data. Pixel data from the shading array is then applied to ensure the shadows and highlights of the base image are retained, so no depth is lost.
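Illustrative sketches of hexToRgb() and blend() are shown below; the (base, target, amount) signature for blend() is an assumption for demonstration, not necessarily the block’s exact API:

```javascript
// Parses a 6-digit hex color code (with or without '#') into RGB channels.
function hexToRgb(hex) {
  const clean = hex.replace('#', '');
  return {
    r: parseInt(clean.slice(0, 2), 16),
    g: parseInt(clean.slice(2, 4), 16),
    b: parseInt(clean.slice(4, 6), 16),
  };
}

// Linear interpolation between a base and target channel value.
// amount is 0..1: 0 keeps the base pixel, 1 fully applies the paint color.
function blend(base, target, amount) {
  return Math.round(base + (target - base) * amount);
}
```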

The loadApiPage() method is used to call for available colors from a third-party API service and uses a utility method from scripts/aem.js to make the request. The renderSwatches() method renders the colors as sets of swatches that the user can page through to select a color for painting. Buttons for this pagination are set up with click event handlers.

Third-party API requests

The previous section went over a substantial amount of the details for rendering the block.

While we could have contained everything there, it’s helpful in any modern project to modularize your code for reuse when possible. With that mindset, a utility method has been added to the aem.js file in the scripts directory:

async function fetchFromApi(url, { page, pageSize, params = {} } = {}) {
  const query = new URLSearchParams();

  if (page !== undefined) query.set('page', page);
  if (pageSize !== undefined) query.set('pageSize', pageSize);

  Object.entries(params).forEach(([k, v]) => {
    if (v !== undefined && v !== null) {
      query.set(k, v);
    }
  });

  const fullUrl = query.toString()
    ? `${url}?${query.toString()}`
    : url;

  const res = await fetch(fullUrl, {
    headers: { Accept: 'application/json' },
  });

  if (!res.ok) {
    throw new Error(`fetchFromApi failed: ${res.status} ${res.statusText}`);
  }

  return res.json();
}

This fetchFromApi() method was also added to the aem.js export object so that we can call it in our blocks (like we did in the import statement of paint-room-preview.js).

This method makes paginated API requests, though the pagination is optional when calling it. This takes a provided URL, page, page size (the number of items per page), and any additional parameters. For our block, we use this to call our third-party API on page 1. The API offers 30 colors in total. We make a single request for all of them and then page between sets of 5 when the user clicks the next or previous buttons.
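The client-side paging math implied here can be sketched as follows (VISIBLE and the helper names are illustrative, not the block’s actual code):

```javascript
// Illustrative sketch of the client-side paging math: one API request
// returns all 30 colors, and the UI pages between sets of 5 (VISIBLE).
const VISIBLE = 5;

function maxPageIndex(colors) {
  return Math.floor((colors.length - 1) / VISIBLE);
}

function visibleSlice(colors, pageIndex) {
  const start = pageIndex * VISIBLE;
  return colors.slice(start, start + VISIBLE);
}
```

With 30 colors, maxPageIndex() yields 5, so pageIndex ranges 0 through 5 and the final page shows colors 25–29.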

You’ll notice in the decorate() method of our block script, we defined the API URL via:

const COLORS_URL = 'https://yourdomain.com/colorapi/colors.json';

This should be updated to match the domain and path of your API, based on your implementation. As for that API, we’ll cover it in the next section.

Third Party Colors API

For my EDS site, I’m using the bring your own CDN approach via a Cloudflare worker. Adobe documentation provides a worker script that you can use for requests to your configured EDS domain. To enable our colors API, we just need to make a few minor updates to the script.

In the handleRequests() method, we first define constants for the requested page, the page size, and the colors returned in API requests; then we define a JSON object containing the page, page size, total number of pages, and most importantly, the array of colors!

const page = parseInt(url.searchParams.get("page") || "1", 10);
const pageSize = parseInt(url.searchParams.get("pageSize") || "30", 10);

const colors = [
  { name: "White", hex: "FFFFFF" },
  { name: "Black", hex: "000000" },
  { name: "Red", hex: "FF0000" },
  { name: "Green", hex: "00FF00" },
  { name: "Blue", hex: "0000FF" },
  { name: "Cyan", hex: "00FFFF" },
  { name: "Magenta", hex: "FF00FF" },
  { name: "Yellow", hex: "FFFF00" },
  { name: "Gray", hex: "808080" },
  { name: "Orange", hex: "FFA500" },
  { name: "Purple", hex: "800080" },
  { name: "Brown", hex: "A52A2A" },
  { name: "Pink", hex: "FFC0CB" },
  { name: "Lime", hex: "32CD32" },
  { name: "Teal", hex: "008080" },
  { name: "Navy", hex: "000080" },
  { name: "Olive", hex: "808000" },
  { name: "Maroon", hex: "800000" },
  { name: "Silver", hex: "C0C0C0" },
  { name: "Gold", hex: "FFD700" },
  { name: "Coral", hex: "FF7F50" },
  { name: "Indigo", hex: "4B0082" },
  { name: "Turquoise", hex: "40E0D0" },
  { name: "Lavender", hex: "E6E6FA" },
  { name: "Beige", hex: "F5F5DC" },
  { name: "Mint", hex: "98FF98" },
  { name: "Peach", hex: "FFDAB9" },
  { name: "Sky Blue", hex: "87CEEB" },
  { name: "Chocolate", hex: "D2691E" },
  { name: "Crimson", hex: "DC143C" }
];

const total = colors.length;
const start = (page - 1) * pageSize;
const end = start + pageSize;
const pageColors = colors.slice(start, end);

const json = JSON.stringify({
  page,
  pageSize,
  total,
  totalPages: Math.ceil(total / pageSize),
  data: pageColors,
});

Lastly, above the condition checking if the path starts with /drafts/, add the following:

if (url.pathname.startsWith('/colorapi/')) {
  return new Response(json, {
    headers: {
      "Content-Type": "application/json",
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "GET, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type"
    },
  });
}

This sets up our API as path-based, supporting requests to /colorapi/colors.json. Once we deploy our worker changes, the JSON response to API requests will resemble the following:

JSON Response to API Requests

I will mention that while this works, our API does leave some things to be desired. In a true, production-ready implementation, the worker should only act as a proxy to a separate data service (with its own dedicated redundancy and fault tolerance). The colors data could be enriched with details such as product codes, applications where each color is supported (works on drywall vs wood), and split into different sets of color palettes based on the paint quality (economy, super, deluxe, etc.). There might even be a review process where certain colors are filtered out based on inventory or other factors. The primary goal of this post is to demonstrate block building and, secondly, to keep the entire implementation at the edge, not to provide a best practice API implementation.

Block Design

With the functional aspects of our block complete, we need to add some styles to make our image previewer, color options, and paging work cohesively on varying client devices. So, in the /blocks/paint-room-preview folder, create a file called paint-room-preview.css and add the following contents:

.paint-room-preview {
  max-width: 800px;
  margin: auto;
  text-align: center;
}

#room-canvas {
  width: 100%;
  border-radius: 8px;
  margin-bottom: 20px;
}

.bm-controls {
  display: flex;
  justify-content: space-between;
  margin-bottom: 20px;
}

.bm-nav {
  display: flex;
  align-items: center;
  justify-content: center;
  gap: 16px;
  margin-bottom: 20px;
}

#bm-colors {
  display: flex;
  gap: 12px;
  justify-content: center;
  flex-wrap: nowrap;
  margin: 0 10%;
}

.bm-color {
  display: flex;
  flex-direction: column;
  align-items: center;
}

.bm-swatch {
  width: 60px;
  height: 60px;
  border-radius: 6px;
  border: 1px solid #ccc;
  cursor: pointer;
  margin-bottom: 8px;
  transition: transform .2s;
}

.bm-swatch:hover {
  transform: scale(1.1);
}

@media (width <= 900px) {    
  .bm-nav {
    flex-wrap: wrap;
  }

  #bm-colors {
    order: 3;
    width: 100%;
    justify-content: center;
    margin: 12px 0 0;
    flex-wrap: wrap;
  }

  #bm-prev {
    order: 1;
  }

  #bm-next {
    order: 2;
  }
}

With that, we should merge or commit our changes. Then we can author our block on a page, via the universal editor:

Editing in the Universal Editor

If you want to test drive this using the coffee shop example above, please find the base image, mask, and shading image at the links below. Upload these images to AEM Assets to select them in your block’s authorable fields.

https://blogs.perficient.com/files/coffee-shop-shading.png

https://blogs.perficient.com/files/coffee-shop-mask-1.png

https://blogs.perficient.com/files/coffee-shop.png

With all 3 images authored, you can publish the page and see your changes in action.

Closing Thoughts

As you can see, EDS blocks can be as specific as you need them to be. All the block and utility code is contained in JavaScript and CSS. Authoring fields are easily enabled in component-models.json, and the block is enabled for use on pages via component-filters.json. Using just browser APIs, event handlers, and DOM selectors, we built a compelling experience for our fictional paint company, and with ES6+ modular code we built a serverless API to provide simple color options. Best of all, everything is delivered at the edge, for an optimally fast application. There are many possibilities for block customization, and the speed and flexibility of Edge Delivery Services make it well worth considering for your project.

Enhancing Fluent UI DetailsList with Custom Sorting, Filtering, Lazy Loading and Filter Chips
https://blogs.perficient.com/2026/02/04/enhancing-fluent-ui-detailslist-with-custom-sorting-filtering-lazy-loading-and-filter-chips/
Wed, 04 Feb 2026 07:48:24 +0000

Fluent UI DetailsList custom sorting and filtering can transform how structured data is displayed. While the default DetailsList component is powerful, it doesn’t include built‑in features like advanced sorting, flexible filtering, lazy loading, or selection‑driven filter chips. In this blog, we’ll show you how to extend Fluent UI DetailsList with these enhancements, making it more dynamic, scalable, and user‑friendly.

We’ll also introduce simple, reusable hooks that allow you to implement your own filtering and sorting logic, which will be perfect for scenarios where the default behavior doesn’t quite fit your needs. By the end, you’ll have a flexible, feature-rich Fluent UI DetailsList setup with sorting and filtering that can handle complex data interactions with ease.

Here’s what our wrapper brings to the table:

  • Context‑aware column menus that enable sorting beyond simple A↔Z ordering
  • Filter interfaces designed for each data type (e.g., freeform text, choice lists, numeric ranges, or time values)
  • Selection chips that display active filters and allow quick deselection with a single click
  • Lazy loading with infinite scroll, seamlessly integrated with your API or pagination pipeline
  • One orchestrating component that ties all these features together, eliminating repetitive boilerplate

Core Architecture

The wrapper includes:

  • Column Definitions: To control how each column sorts/filters
  • State & Refs: To manage final items, full dataset, and UI flags
  • Default Logic: Overridable via hooks – onSort, onFilter
  • Selection: Powered by Fluent UI Selection API
  • Lazy Loading: Using IntersectionObserver
  • Filter Chips: Reflect selected rows

Following are the steps to achieve these features:

Step 1: Define Column Metadata

Each column in the DetailsList must explicitly describe its data type, sort behavior, and filtering behavior. This metadata helps the wrapper render the correct UI elements such as combo boxes, number inputs, or time pickers.

Each column needs metadata describing:

  • Field type
  • Sort behavior
  • Filter behavior
  • UI options (choice lists, icons, etc.)

export interface IDetailsListColumnDefinition {
  fieldName: string;
  displayName: string;
  columnType?: DetailsListColumnType; // Text, Date, Time, etc.
  sortDetails?: { fieldType: SortFilterType };
  filterDetails?: {
    fieldType: SortFilterType;
    filterOptions?: IComboBoxOption[];
    appliedFilters?: any[];
  };
}

Following is an example:

const columns = [{
  fieldName: 'status',
  displayName: 'Status',
  columnType: DetailsListColumnType.Text,
  sortDetails: {
    fieldType: SortFilterType.Choice
  },
  filterDetails: {
    fieldType: SortFilterType.Choice,
    filterOptions: [{
      key: 'Active',
      text: 'Active'
    },
    {
      key: 'Inactive',
      text: 'Inactive'
    }]
  }
}];

Step 2: Implement Type-Aware Fluent UI DetailsList Custom Sorting

The sorting mechanism dynamically switches based on the column’s data type. Time fields are converted to minutes to ensure consistent sorting, while text and number fields use their native values. It supports the following:

  • Supports Text, Number, NumberRange, Date, and Time (custom handling for time via minute conversion).
  • Sort direction is controlled from the column’s context menu.
  • Works with default sorting or lets you inject custom sorting via onSort.
  • Default sorting uses lodash orderBy unless onSort is provided

Sample code for its implementation can be written as follows:

switch (sortColumnType) {
  case SortFilterType.Time:
    sortedItems = orderBy(sortedItems, [(item) => getTimeForField(item, column.key)], column.isSortedDescending ? ['desc'] : ['asc']);
    break;
  default:
    sortedItems = orderBy(sortedItems, column.fieldName, column.isSortedDescending ? 'desc' : 'asc');
}
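The getTimeForField() helper referenced above is not shown in the snippet; a plain-JavaScript sketch of the minute-conversion idea (assuming the field holds an "HH:mm" string) could be:

```javascript
// Hypothetical sketch of the minute conversion behind getTimeForField().
// "9:05" (545) sorts before "10:00" (600), unlike plain string comparison.
function getTimeForField(item, fieldKey) {
  const value = item[fieldKey];
  if (!value) return -1; // sort missing values first
  const [hours, minutes] = value.split(':').map(Number);
  return hours * 60 + minutes; // minutes since midnight
}
```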

Step 3: Implement Fluent UI DetailsList Custom Filtering (Text/Choice/Range/Time)

Filtering inputs change automatically based on column type. Text and choice filters use combo boxes, while numeric fields use range inputs. Time filters extract and compare HH:mm formatted values.

Text & Choice Filters

Implemented using Fluent UI ComboBox as follows:

<ComboBox
  allowFreeform={!isChoiceField}
  multiSelect={true}
  options={comboboxOptions}
  onChange={(e, option, index, value) =>
    _handleFilterDropdownChange(e, column, option, index, value)
  }
/>

Number Range Filter

Implemented as two input boxes, min and max, for defining the number range.

  • Min/Max chips are normalized in order [min, max].
  • Only applied if present; absence of either acts as open‑ended range.
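The open-ended range behavior can be expressed as a small predicate (this is an illustrative sketch; the wrapper’s internal helper may differ):

```javascript
// Illustrative predicate for the open-ended number range filter:
// a missing min or max leaves that side of the range open.
function inNumberRange(value, min, max) {
  if (min !== undefined && value < min) return false;
  if (max !== undefined && value > max) return false;
  return true;
}
```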

Time Filter

For time filtering, the date part is ignored and only the time part is considered.

  • Times (HH:mm) are converted to minutes since midnight to sort reliably regardless of display format.
  • Filtering uses date-fns format() for display and matching.

Step 4: Build the Filtering Pipeline

This step handles the filtering logic: capturing user-selected values, updating filter state, re-filtering all items, and finally applying the active sort order. If custom filter logic is provided, it overrides the defaults. It works as follows:

  1. User changes filter
  2. Update column.filterDetails.appliedFilters
  3. Call onFilter (if provided)
  4. Otherwise run default filter pipeline as follows:

allItems → apply filter(s) → apply current sort → update UI

Following are some helper functions that can be created for handling filter/sort logic:

  • _filterItems
  • _applyDefaultFilter
  • _applyDefaultSort
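The default pipeline above can be sketched in plain JavaScript (the column shape here is simplified for illustration; the real wrapper reads filterDetails and sortDetails from the column definitions):

```javascript
// Illustrative sketch of the default pipeline:
// allItems → apply filter(s) → apply current sort → update UI.
function applyPipeline(allItems, filterColumns, sortColumn) {
  let items = allItems.filter((item) =>
    filterColumns.every((col) => {
      const applied = col.appliedFilters || [];
      if (applied.length === 0) return true; // column has no active filter
      return applied.includes(item[col.fieldName]);
    })
  );
  if (sortColumn) {
    items = [...items].sort((a, b) =>
      String(a[sortColumn.fieldName]).localeCompare(String(b[sortColumn.fieldName]))
    );
    if (sortColumn.isSortedDescending) items.reverse();
  }
  return items; // feed this to the DetailsList's items state
}
```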

Step 5: Display Filter Chips

When selection is enabled, each selected row appears as a dismissible chip above the grid. Removing the chip automatically deselects the row, ensuring tight synchronization between UI and data.

<FilterChip key={filterValue.key} filterValue={filterValue} onRemove={_handleChipRemove} />

Note: This is a custom subcomponent used to handle filter chips. Internally, it displays selected values in chip form, and we can control its values and behavior using the onRemove and filterValue props.

Chip removal:

  • Unselects row programmatically
  • Updates the selection object

Step 6: Implementing Lazy Loading (IntersectionObserver)

The component uses IntersectionObserver to detect when the user reaches the end of the list. Once triggered, it calls the lazy loading callback to fetch the next batch of items from the server or state.

  • An additional row at the bottom triggers onLazyLoadTriggered() as it enters the viewport.
  • Displays a spinner while loading; attaches the observer when more data is available.

A sentinel div at the bottom triggers loading:

observer.current = new IntersectionObserver(async entries => {
  const entry = entries[0];
  if (entry.isIntersecting) {
    observer.current?.unobserve(lazyLoadRef.current!);
    await lazyLoadDetails.onLazyLoadTriggered();
  }
});

Props controlling behavior:

lazyLoadDetails?: {
  enableLazyLoad: boolean;
  onLazyLoadTriggered: () => void;
  isLoading: boolean;
  moreItems: boolean;
};

Step 7: Sticky Headers

Sticky headers keep the column titles visible as the user scrolls through large datasets, improving readability and usability. Following is the code, where the maxHeight property determines the scrollable container height:

const stickyHeaderStyle = {
  root: {
    maxHeight: stickyHeaderDetails?.maxHeight ?? 450
  },
  headerWrapper: {
    position: 'sticky',
    top: 0,
    zIndex: 1
  }
};

Step 8: Putting It All Together — Minimal Example for Fluent UI DetailsList custom filtering and sorting

Following is an example of calling our customized DetailsList component:

<CustomDetailsList
  columnDefinitions={columns}
  items={data}
  allItems={data}
  checkboxVisible={CheckboxVisibility.always}
  initialSort={{ fieldName: "name", direction: SortDirection.Asc }}
  filterChipDetails={{
    filterChipKeyColumnName: "key",
    filterChipColumnName: "name",
  }}
  stickyHeaderDetails={{ enableStickyHeader: true, maxHeight: 520 }}
  lazyLoadDetails={{
    enableLazyLoad: true,
    isLoading: false,
    moreItems: true,
    onLazyLoadTriggered: async () => {
      // load more
    },
  }}
/>;

Accessibility & UX Notes

  • Keyboard: Enter key applies text/number inputs instantly; menu remains open so users can stack filters.
  • Clear filter: Context menu shows “Clear filter” action only when a filter exists; there’s also a “Clear Filters (n)” button above the grid that resets all columns at once.
  • Selection cap: maxSelectionCount helps prevent accidental bulk selections and provides immediate visual feedback so users can clearly see the limit in action.

Performance Guidelines

  • Virtualization: For very large datasets, you can enable virtualization and validate both menu positioning and performance. For the current example, onShouldVirtualize={() => false} is used to maintain a predictable menu experience.
  • Server‑side filtering/sorting: If your dataset is huge, pass onSort/onFilter and do the heavy lifting server‑side, then feed the component the updated page through items.
  • Lazy loading: Use moreItems to hide the sentinel when the server reports the last page; set isLoading to true to show the spinner row.

Conclusion

Finally, we have created a fully customized Fluent UI DetailsList with custom filtering and sorting that condenses real‑world list interactions into one drop‑in component. CustomDetailsList provides a production-ready, extensible, developer-friendly data grid wrapper with the following enhanced features:

  • Clean context menus for type‑aware sort & filter
  • Offers selection chips for quick, visual interaction and control
  • Supports lazy loading that integrates seamlessly with your API
  • Allows you to keep headers sticky to maintain clarity in long lists
  • Delivers a ready‑to‑use design while allowing full customization when needed

GitHub repository

Please refer to the GitHub repository below for the full code. A sample has been provided within to illustrate its usage:

https://github.com/pk-tech-dev/customdetailslist
Seven Perficient Colleagues Honored as Sitecore MVPs
https://blogs.perficient.com/2026/02/03/seven-perficient-colleagues-honored-as-sitecore-mvps/
Tue, 03 Feb 2026 23:01:17 +0000

We are thrilled to announce that Sitecore has recognized seven standout Perficient team members as Sitecore Most Valuable Professionals, spanning all three MVP categories – strategy, technology, and ambassador.

2025 Sitecore Mvp Logo

“Our team continues to elevate what’s possible and set a new standard of excellence for Perficient’s Sitecore practice,” said Vice President Mark Ursino. “Earning seven Sitecore MVP titles reflects the passion, collaboration, and forward‑thinking mindset that define our work. We’re proud of this momentum and excited for the milestones still ahead.”

Meet Our 2026 Sitecore MVPs

Mark Ursino, 16x MVP, 2026 Ambassador MVP

As the Sitecore ecosystem continues to evolve, one thing has stayed the same year after year: the amazing community that shows up, shares, and supports one another. The Sitecore MVP program really captures that spirit, and 2026 feels like a big moment as the program grows into a world where AI is front and center. So much has changed over the years. Products have changed, technologies have changed, architectures have changed. In some ways, what used to be old feels new again. But the community has never stopped showing up. I’m genuinely excited about where Sitecore is heading in this new AI era and how confidently they are shaping the future. It feels like an energizing time to be part of this MVP community, and I can’t wait to help build what comes next.

Stephen Tynes, 14x MVP, 2026 Ambassador MVP

I’m incredibly excited to be named a Sitecore MVP for the 14th time. Sitecore continues to push the boundaries of what’s possible for modern marketers. Sitecore’s relentless focus on helping teams do more with less through smarter, faster, and more connected experiences perfectly matches the challenges marketers face right now. Even better, that innovation aligns seamlessly with Perficient’s AI-first strategy, where we’re turning intelligent platforms into real, measurable outcomes.

Joshua Hover, 5x MVP, 2026 Ambassador MVP

The Sitecore roadmap and the pace of AI innovation make the year ahead especially exciting. I’m looking forward to mentoring amazing mentees, growing the MVP community together, and partnering with Sitecore to highlight how the platform continues to evolve and lead.

 

Megan Mueller Jensen, 5x MVP, 2026 Strategy MVP

Being a part of the Sitecore MVP over the last five years has truly made me better at my work: it pushes me to stay curious, connected, and deeply informed about digital experience trends, ensuring I bring a more holistic, future‑oriented perspective to every solution I design and deliver. Most importantly, this community has given me the opportunity to build relationships with professionals from all over the world, from all walks of life. It’s also driven me to advocate for women, people of color, members of the LGBTQIA+ community, and people with disabilities entering the marketing technology field—creating pathways many of us once had to carve out alone. Looking ahead to 2026, I’m energized by the acceleration of AI-driven experiences, composable architectures becoming the norm, and the growing community of strategists with diverse lived experiences shaping the future of our digital world. Being part of this elite group of MVPs means I feel a responsibility to lead an ethical AI evolution, champion new voices, and continue making the industry more inclusive than it’s ever been.

Tiffany Laster, 3x MVP, 2026 Strategy MVP

I’m incredibly excited and genuinely grateful to be part of such a special MVP community. This community inspires me every day with its creativity, its problem‑solving spirit, and the way everyone shows up for one another to push innovation forward. With AI opening up an entirely new world of possibilities, I can’t wait to help shape how organizations embrace Sitecore products to build unforgettable digital experiences for their users and the teams behind them. Being surrounded by people who are passionate, supportive, and always striving to make things better makes this journey even more meaningful, and I’m thrilled to grow and contribute alongside this amazing community.

Eric Sanner, 3x MVP, 2026 Technology MVP

I’m excited, proud, and honored to be named a Sitecore Technology MVP for the third year in a row! Along with my fellow Perficient colleagues and friends around the world, the Sitecore Community is a truly global community. MVPs are at the forefront of the changes in technology, learning and sharing knowledge as the world moves to incorporate AI at every level. But even as the technology changes, it is important to remember the human aspect of technology and the value of human connection. I’m excited to continue growing and learning with my fellow MVPs in 2026!

Joey Running, 1x MVP, 2026 Strategy MVP

Having been part of the Sitecore community for many years, being recognized with my first Strategy MVP award is both an honor and a meaningful milestone. This community represents far more than a group of platform experts. It is a collective of passionate leaders committed to elevating each other’s capabilities and advancing best practices that drive exceptional digital experiences. I’m excited to deepen my involvement as an MVP and to help guide and support fellow strategists who are pursuing the same goals and growth I focused on throughout the past year.

Sitecore MVP Program

The Sitecore MVP program recognizes the most active Sitecore individuals from around the globe who share their knowledge with various Sitecore partners and customers. These MVPs represent talent from numerous countries around the world.  

Individuals selected for this elite honor experience several benefits including recognition within the Sitecore community, early access to product releases and resources, an exclusive invite to MVP discussion forums, and more. 

Learn More About Our Sitecore Practice

We are a Sitecore Platinum Partner with specializations in XM Cloud, CDP, Order Cloud, Content Hub, and XP. With our long-standing Sitecore partnership, we look forward to continuing to elevate our Sitecore practice and work with brands to best utilize the platform.  

Congrats again to all seven of our MVPs. To learn more about Perficient’s Sitecore solutions, visit our partner page, and follow us on LinkedIn. 

On Demand Webinar: Building an Agentic Ready CRM from POC to Outcomes
https://blogs.perficient.com/2026/01/30/on-demand-webinar-building-an-agentic-ready-crm-from-poc-to-outcomes/
Fri, 30 Jan 2026 16:35:29 +0000

AI is now foundational to how businesses operate. The fastest path from experimentation to measurable value starts with executive alignment, clean data, and a CRM backbone designed for agents.
Across every executive conversation we have today, the theme is the same: how do we accelerate AI adoption, realize its value, and manage risk with trust at the center? Buyers are moving from proofs of concept to measurable outcomes, and leaders feel pressure on two fronts: move faster, and show impact. In our webinar featuring Forrester as a guest alongside Salesforce, the advice was consistent. Start now, start small, and build a foundation that connects AI initiatives to clear business outcomes.

The Market Moment 

AI is no longer a trend. It is becoming foundational to front office operations, from sales to service to marketing. Forrester’s perspective is that three of the top five CRM operations priorities are now AI related, because AI helps create personal, high value experiences while lifting productivity and decision making in the flow of work. Organizations see gains in cycle time, content relevance, brand reputation, and customer outcomes when AI is coupled with trusted data.   

Yet many teams still struggle to connect AI technology strategy to business strategy. The common blockers are familiar. Data privacy and security concerns, uncertainty about trusting AI outputs, gaps in AI governance, and limited access to the right skills inside the organization. Without a roadmap tied to outcomes, efforts stall and confidence fades.   

“If you haven’t started, you need to start now. Start small with one targeted project, learn, continuously improve, and bring your people along.” 
— Kate Leggett, Vice President, Principal Analyst, Forrester Guest  

Why Programs Stall

We hear three consistent challenges from leaders across industries.  

  • Technical debt and legacy CRM. Old systems and fragmented processes slow experimentation and scale.   
  • Siloed and low-quality data. Insights are trapped in scattered sources. The right context is missing, and trust suffers.   
  • Process habits. Teams try to automate yesterday’s steps instead of asking whether AI can redesign the work entirely.   

There is also a measurement gap. Only about half of organizations have benchmarks to assess AI performance, fewer than half have KPIs to decide whether to keep a feature, and roughly a third can link AI directly to profit and loss. That missing operational discipline is a core reason many AI projects disappoint.   

Build the Backbone for Agents 

Perficient’s point of view is simple. Do not just modernize CRM. Create an agentic ready CRM backbone that unifies Salesforce CRM, Data 360, and Agentforce to fuel agentic experiences across the enterprise. Put clean, accurate, actionable data at the center, and make iterative releases your default operating rhythm.   

Salesforce’s customer zero experience shows what happens when you do. Sales velocity improved by about 36 percent with agents preparing briefings and eliminating repetitive work. Win rates lifted by roughly 11 percent as agents surfaced context and long-term memory at the right moment. Service agents now handle about 85 percent of inbound inquiries, which enabled a meaningful annualized cost takeout that was redeployed into higher value roles. Those gains start with a backbone designed for agents and the right trust controls in place.   

“Be an agent boss. Set a big vision, pull it in tight to start, then scale as you learn.” 
— Kaylin Voss, Executive Vice President, Agentforce and Data Cloud, Salesforce  

Executive Alignment, Measured Weekly 

Successful programs are not departmental mandates. They are board and CEO level initiatives that bring business and IT together, including your CISO. Establish top to bottom governance, agree on a weekly readout cadence, and develop a simple scorecard that connects input metrics, like coverage and cycle time, to outcome metrics, like win rate, average deal size, cost reallocation, and CSAT. If the executive communication cadence slips, momentum will not follow. Keep the drumbeat and iterate.   

Start Now, Start Small 

Choose one targeted use case that matters for revenue or experience. Land the agent where people already work, such as Slack. Give it trusted data and clear context by role and function. Put humans in the loop and coach the agent with rapid releases. This creates the flywheel for the next use case and prevents agent sprawl by keeping a learning architecture underneath your experiments.   

And do not go it alone. Most organizations rely on vendors and partners to supply best practices, methodologies, and the hands-on support required to move from pilot to production with trust.   

About this Series 

This post is the first in our three-part series on moving from POC to outcomes, then measuring what matters, then scaling with people at the center. Subscribe to get the next article in your inbox.  

]]>
https://blogs.perficient.com/2026/01/30/on-demand-webinar-building-an-agentic-ready-crm-from-poc-to-outcomes/feed/ 0 390051
Just what exactly is Visual Builder Studio anyway? https://blogs.perficient.com/2026/01/29/just-what-exactly-is-visual-builder-studio-anyway/ https://blogs.perficient.com/2026/01/29/just-what-exactly-is-visual-builder-studio-anyway/#respond Thu, 29 Jan 2026 15:40:45 +0000 https://blogs.perficient.com/?p=389750

If you’re in the world of Oracle Cloud, you are most likely busy planning your big switch to Redwood. While it’s easy to get excited about a new look and a plethora of AI features, I want to take some time to talk about a tool that’s new (at least to me) that comes along with Redwood. Functional users will come to know VB Studio as the new method for delivering page customizations, but I’ve learned it’s much more.

VB Studio has been around since 2020, but I only started learning about it recently. At its core, VB Studio is Oracle’s extension platform. It gives users a safe way to customize by building around their systems instead of inside them. Since changes to the core code are not allowed, upgrades are much less problematic and time-consuming. Let’s look at how users of different expertise might use VB Studio.

Oracle Cloud Application Developers

I wouldn’t call myself a developer, but this is the area I fit into. Moving forward, I will not be using Page Composer or HCM Experience Design Studio…and I’m pretty happy about that. Every client I work with wants customization, so having a one-stop shop with Redwood is a game-changer after years of juggling tools.

Sandboxes are gone. VB Studio uses Git repositories with branches to track and log every change. Branches let multiple people work on different features without conflict, and teams review and merge changes into the main branch in a controlled process.

And what about when these changes are ready for production? By setting up a pipeline from your development environment to your production environment, these changes can be pushed straight into production. This is huge for me! It reduces the time needed to implement new Oracle modules, and it helps when updating or changing existing systems as well. I’ve spent countless hours on video calls instructing system administrators on how to perform requested changes in their production environment because their policy did not allow me to have access. Now, I can make these changes in a development instance and push them to production. The sys admin can then view these changes and approve or reject them for production. Simple!


Low-Code Developers


Customizations to existing features are great, but what about building entirely new functionality and embedding it right into your system?  VB Studio simplifies building applications, letting low-code developers move quickly without getting bogged down in traditional coding. With VB Studio’s visual designer, developers can drag and drop components, arrange them the way they want, and preview changes instantly. This is exciting for me because I feel like it is accessible for someone who does very little coding. Of course, for those who need more flexibility, you can still add custom logic using familiar web technologies like JavaScript and HTML (also accessible with the help of AI). Once your app is ready, deployment is easy. This approach means quicker turnaround, less complexity, and applications that fit your business needs perfectly.


Experienced Programmers

Okay, now we’re getting way out of my league here, so I’ll be brief. If you really want to get your hands dirty by modifying the code of an application created by others, you can do that. If you prefer building a completely custom application using the web programming language of your choice, you can also do that. Oracle offers users a wide range of tools and stays flexible in how they use them. Organizations need tailored systems, and Oracle keeps evolving to make that possible.


https://www.oracle.com/application-development/visual-builder-studio/

]]>
https://blogs.perficient.com/2026/01/29/just-what-exactly-is-visual-builder-studio-anyway/feed/ 0 389750
Moving to CJA? Sunset Adobe Analytics Without Causing Chaos https://blogs.perficient.com/2026/01/27/moving-to-cja-sunset-adobe-analytics-without-causing-chaos/ https://blogs.perficient.com/2026/01/27/moving-to-cja-sunset-adobe-analytics-without-causing-chaos/#comments Tue, 27 Jan 2026 13:51:10 +0000 https://blogs.perficient.com/?p=389876

Adobe Experience Platform (AEP) and Customer Journey Analytics (CJA) continue to emerge as the preferred solutions for organizations seeking a unified, 360-degree view of customer behavior. For organizations requiring HIPAA compliance, AEP and CJA are a necessity. Many organizations are now discussing whether they should retool or retire their legacy Adobe Analytics implementations. The transition from Adobe Analytics to CJA is far more complex than simply disabling an old tool. Teams must carefully plan, perform detailed analysis, and develop a structured approach to ensure that reporting continuity, data integrity, and downstream dependencies remain intact.

Adobe Analytics remains a strong platform for organizations focused exclusively on web and mobile app measurement; however, enterprises that are prioritizing cross-channel data activation, real-time profiles, and detailed journey analysis should embrace AEP as the future. Of course, you won’t be maintaining two platforms after building out CJA, so you must think about how to move on from Adobe Analytics.

Decommissioning Options and Key Considerations

You can approach decommissioning Adobe Analytics in several ways. Your options include: 1) disabling the extension; 2) adding an s.abort at the top of the AppMeasurement custom‑code block to prevent data from being sent to Adobe Analytics; 3) deleting all legacy rules; or 4) discarding Adobe Analytics entirely and creating a new Launch property for CJA. Although multiple paths exist, the best approach almost always involves preserving your data‑collection methods and keeping the historical Adobe Analytics data. You have likely collected that data for years, and you want it to remain meaningful after migration. Instead of wiping everything out, you can update Launch by removing rules you no longer need or by eliminating references to Adobe Analytics.

Recognizing the challenges involved in going through the data to make the right decisions during this process, I have developed a specialized tool – Analytics Decommissioner (AD) — designed to support organizations as they decommission Adobe Analytics and transition fully to AEP and CJA. The tool programmatically evaluates Adobe Platform Launch implementations using several Adobe API endpoints, enabling teams to quickly identify dependencies, references, and potential risks associated with disabling Adobe Analytics components.

Why Decommissioning Requires More Than a Simple Shutdown

One of the most significant obstacles in decommissioning Adobe Analytics is identifying where legacy tracking still exists and where removing Adobe Analytics could potentially break the website or cause errors. Over the years, many organizations accumulate layers of custom code, extensions, and tracking logic that reference Adobe Analytics variables—often in places that are not immediately obvious. These references may include s. object calls, hard‑coded AppMeasurement logic, or conditional rules created over the course of several years. Without a systematic way to surface dependencies, teams risk breaking critical data flows that feed CJA or AEP datasets.

Missing or outdated documentation makes the problem even harder. Many organizations fail to maintain complete or current solution design references (SDRs), especially for older implementations. As a result, teams rely on tribal knowledge, attempts to recall discussions from years ago, or manual inspection of the collected data to understand how the system behaves. This approach moves slowly, introduces errors, and cannot support large-scale environments. When documentation lacks clarity, teams struggle to identify which rules, data elements, or custom scripts still matter and which they can safely remove. Now imagine repeating this process for every one of your Launch properties.

This is where Perficient and the AD tool provide significant value.
The AD tool programmatically scans Launch properties and uncovers dependencies that teams may have forgotten or never documented. A manual analysis might easily overlook these dependencies. AD also pinpoints where custom code still references Adobe Analytics variables, highlights rules that have been modified or disabled since deployment, and surfaces AppMeasurement usage that could inadvertently feed into CJA or AEP data ingestion. This level of visibility is essential for ensuring that the decommissioning process does not disrupt data collection or reporting.

How Analytics Decommissioner (AD) Works

The tool begins by scanning all Launch properties across your organization and asking the user to select one. This is necessary because decommissioning must be performed on each property individually, the same way Adobe Analytics collects data: one Launch property at a time. Once a property is selected, the tool retrieves all production-level data elements, rules, and rule components, including their revision histories, and ignores rules and data-element revisions that developers disabled or never published to production. The tool then performs a comprehensive search for AppMeasurement references and Adobe Analytics-specific code patterns. These findings show teams exactly where legacy tracking persists, what needs to be updated or modified, and which items can be safely removed. If no dependencies exist, AD can disable the rules and create a development library for testing. When AD cannot confirm whether a dependency exists, it reports the rule names and components where potential issues may exist and relies on development experts to make the call. The user always makes the final decisions.
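The AD tool itself is proprietary, but the core idea of scanning rule components for Analytics references can be sketched in a few lines of Python. Everything here is an assumption for illustration: the pattern list is far from exhaustive, and the `name`/`source` fields stand in for whatever shape your Launch (Reactor) API client returns after fetching rule components.

```python
import re

# Patterns that commonly indicate Adobe Analytics / AppMeasurement usage
# in Launch custom code (illustrative, not exhaustive).
APPMEASUREMENT_PATTERNS = [
    r"\bs\.t\s*\(",          # page-view beacon
    r"\bs\.tl\s*\(",         # link-tracking beacon
    r"\bs\.(eVar|prop)\d+",  # conversion / traffic variables
    r"\bs\.events\b",
    r"\bs_gi\s*\(",          # AppMeasurement instance lookup
    r"\bAppMeasurement\b",
]

def find_analytics_references(rule_components: list[dict]) -> list[dict]:
    """Return the components whose custom code matches any Analytics pattern.

    Each component dict is assumed to carry a 'name' and a 'source' string
    holding its custom code; these field names are hypothetical, modeled on
    what a Reactor API client might assemble."""
    hits = []
    for comp in rule_components:
        source = comp.get("source", "")
        matched = [p for p in APPMEASUREMENT_PATTERNS if re.search(p, source)]
        if matched:
            hits.append({"name": comp.get("name"), "patterns": matched})
    return hits
```

A real scan would also need to walk data elements and extension settings and to respect revision history, as the article describes; this sketch only shows the pattern-matching step.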

This tool is especially valuable for large or complex implementations. In one recent engagement, a team used it to scan nearly 100 Launch properties, some of which included more than 300 data elements and 125 active rules. Reviewing this level of complexity manually would have taken weeks, with the risk that critical dependencies would still be missed. Programmatic scanning ensures accuracy, completeness, and efficiency, allowing teams to move forward with confidence.

A Key Component of a Recommended Decommissioning Approach

The AD tool and a comprehensive review are essential parts of a broader, recommended decommissioning framework. A structured approach typically includes:

  • Inventory and Assessment – Identifying all Adobe Analytics dependencies across Launch, custom code, and environments.
  • Mapping to AEP/CJA – Ensuring all required data is flowing into the appropriate schemas and datasets.
  • Gap Analysis – Determining where additional configuration or migration work needs to be done.
  • Remediation and Migration – Updating Launch rules, removing legacy code, and addressing undocumented dependencies.
  • Validation and QA – Confirming that reporting remains accurate in CJA after removal of Launch rules and data elements created for Adobe Analytics.
  • Sunset and Monitoring – Disabling AppMeasurement, removing Adobe Analytics extensions, and monitoring for errors.

Conclusion

Decommissioning Adobe Analytics is a strategic milestone in modernizing the digital data ecosystem. Using the right tools and having the right processes are essential. The Analytics Decommissioner tool allows organizations to confidently transition to AEP and CJA. When teams execute it properly, this approach to migration preserves data quality, reduces operational costs, and strengthens governance. By using the APIs and letting the AD tool handle the heavy lifting, teams ensure that they don’t overlook dependencies, enabling a smooth, low-risk transition with robust customer experience analytics.

]]>
https://blogs.perficient.com/2026/01/27/moving-to-cja-sunset-adobe-analytics-without-causing-chaos/feed/ 2 389876
Build, Govern, Measure: Agentforce Done Right https://blogs.perficient.com/2026/01/26/build-govern-measure-agentforce-done-right/ https://blogs.perficient.com/2026/01/26/build-govern-measure-agentforce-done-right/#respond Mon, 26 Jan 2026 18:59:22 +0000 https://blogs.perficient.com/?p=389923

Part 1 of our Salesforce Outcomes Playbook made the case for measurable value and orchestrated workflows. In this next post, we move from strategy to execution and show how to put Agentforce to work on a real business KPI.

Perficient is recognized in Forrester’s Salesforce Consulting Services Landscape, Q4 2025 for our North America focus and industry depth in Financial Services, Healthcare, and Manufacturing. We bring proven capabilities across Agentforce, Data 360 (Data Cloud), and Industry Clouds to help clients turn trusted data and well designed workflows into outcomes you can verify.

Forrester asked each provider included in the Landscape to select the top business scenarios for which clients choose them, and from those responses identified the extended business scenarios that highlight differentiation among providers. Perficient is shown in the report for selecting Agentforce, Data 360 (Data Cloud), and Industry Clouds as top reasons clients work with us among those extended business scenarios. These proven capabilities help clients achieve measurable outcomes from their Salesforce investments.

Here, we walk through a practical operating model to launch one production agent, govern by design, and measure lift with real users. The goal is confidence without complexity: a visible improvement in a specific KPI and a repeatable pattern you can scale as results compound.

What Success Looks Like

  • Build: A visible lift in one KPI, such as reduced time to resolution in Service or improved conversion in Sales.
  • Govern: Role‑based access with data minimization, accuracy checks, and audit trails in place.
  • Measure: Observability that traces agent decisions and reports performance, adoption, and error rates.
  • Scale: A prioritized backlog and a scale plan that extends the win without unnecessary build.

The Operating Model: Build, Govern, Measure

1) Build one agent for one KPI

Choose a single use case with a business‑visible metric. Ship a working slice and measure against an agreed baseline. Examples:

  • Agent‑assisted case triage that reduces average handle time in Service
  • Quote‑to‑order agent in Agentforce Revenue Management (formerly Revenue Cloud) that shrinks cycle time and errors
  • Renewal‑risk agent that flags at‑risk accounts and improves retention
  • Field service parts availability agent that improves first‑time fix rate

Ground the agent in trustworthy data. Unify records, events, and identities so decisions are consistent and auditable. Use Data 360 foundations to give agents clean context across teams and channels.

2) Govern by Design

Put guardrails in at the start. Define who can access what, how accuracy is checked, and where audit trails are stored.

  • Role‑based access and data minimization
  • Accuracy checks and human‑in‑the‑loop for high‑impact actions
  • Prompt and policy versioning with change tracking
  • Audit trails that capture inputs, decisions, and outcomes
  • Backout controls with pause and rollback procedures

Governance belongs inside your delivery lifecycle, not as an afterthought.

3) Measure and iterate

Use observability to trace decisions, monitor performance, and tune safely.

  • Baseline the KPI before launch and track lift after launch
  • Monitor adoption, satisfaction, and error rates
  • Identify drift, hallucination, or policy violations quickly
  • Iterate prompts, policies, and integrations based on data
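As a small illustration of the first bullet, here is a hedged Python sketch for computing KPI lift against a pre-launch baseline. The function and its semantics are an example of the measurement idea, not part of Agentforce or any Salesforce API.

```python
def kpi_lift(baseline: float, current: float, lower_is_better: bool = False) -> float:
    """Percentage lift of a KPI versus its pre-launch baseline.

    For metrics where smaller is better (e.g. average handle time or
    time to resolution), a drop counts as positive lift."""
    if baseline == 0:
        raise ValueError("baseline must be non-zero")
    change = (current - baseline) / baseline * 100
    return -change if lower_is_better else change
```

For example, a win rate moving from 20 to 22.2 points reports an 11 percent lift, while an average handle time falling from 10 minutes to 8 reports a 20 percent lift because lower is better for that metric.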

Expand capabilities only once the first KPI moves. This keeps momentum high and risk low, and it aligns investment with tangible results.

Why This Matters

Most teams already believe in AI. The question is how to make it work here, safely and repeatably. Salesforce continues to expand what you can do with AI, data, and integration. When foundations are solid, those capabilities turn into outcomes you can measure. Agentforce gives you practical building blocks for trusted AI at scale. You get observability to understand how agents perform, governance controls to protect data and accuracy, and low code configuration so business and IT can move together faster.

“Enterprises often underestimate the need for structured enablement, adoption planning, and sustained evolution….” – The Salesforce Consulting Services Landscape, Q4 2025

Partners help translate powerful platform features into everyday outcomes. That is how you reduce risk and accelerate value.

Orchestrate The Workflow, Not Just the Feature

Real value shows up when workflows span systems. Map the end‑to‑end process across Salesforce and adjacent platforms. Eliminate the handoffs that slow customers down. Use reference architectures and integration patterns so the process is portable and resilient. Agentforce is most effective when agents can act across the flow rather than bolt onto a single step.

Ready to translate strategy into a working Agentforce use case that moves a KPI?

Book an Agentforce workshop. We will help you choose one KPI, define data sources, set guardrails and observability, and stand up a working slice you can scale.

Forrester does not endorse any company, product, brand, or service included in its research publications and does not advise any person to select the products or services of any company or brand based on the ratings included in such publications. Information is based on the best available resources. Opinions reflect judgment at the time and are subject to change. For more information, read about Forrester’s objectivity here.

]]>
https://blogs.perficient.com/2026/01/26/build-govern-measure-agentforce-done-right/feed/ 0 389923