Technology Partners Articles / Blogs / Perficient

Just what exactly is Visual Builder Studio anyway?

If you’re in the world of Oracle Cloud, you are most likely busy planning your big switch to Redwood. While it’s easy to get excited about a new look and a plethora of AI features, I want to take some time to talk about a tool that’s new (at least to me) that comes along with Redwood. Functional users will come to know VB Studio as the new method for delivering page customizations, but I’ve learned it’s much more.

VB Studio has been around since 2020, but I only started learning about it recently. At its core, VB Studio is Oracle’s extension platform. It gives users a safe way to customize by building around their systems instead of inside them. Because changes to the core code are not allowed, upgrades are much less problematic and time-consuming. Let’s look at how users of different expertise might use VB Studio.

Oracle Cloud Application Developers

I wouldn’t call myself a developer, but this is the area I fit into. Moving forward, I will not be using Page Composer or HCM Experience Design Studio…and I’m pretty happy about that. Every client I work with wants customization, so having a one-stop shop with Redwood is a game-changer after years of juggling tools.

Sandboxes are gone. VB Studio uses Git repositories with branches to track and log every change. Branches let multiple people work on different features without conflict, and teams review and merge changes into the main branch in a controlled process.

And what about when these changes are ready for production? By setting up a pipeline from your development environment to your production environment, these changes can be pushed straight into production. This is huge for me! It reduces the time needed to implement new Oracle modules and helps when updating or changing existing systems. I’ve spent countless hours on video calls instructing system administrators on how to perform requested changes in their production environment because their policy did not allow me to have access. Now, I can make these changes in a development instance and push them to production. The sys admin can then view these changes and approve or reject them for production. Simple!


Low-Code Developers

 

Customizations to existing features are great, but what about building entirely new functionality and embedding it right into your system?  VB Studio simplifies building applications, letting low-code developers move quickly without getting bogged down in traditional coding. With VB Studio’s visual designer, developers can drag and drop components, arrange them the way they want, and preview changes instantly. This is exciting for me because I feel like it is accessible for someone who does very little coding. Of course, for those who need more flexibility, you can still add custom logic using familiar web technologies like JavaScript and HTML (also accessible with the help of AI). Once your app is ready, deployment is easy. This approach means quicker turnaround, less complexity, and applications that fit your business needs perfectly.

 

Experienced Programmers

Okay, now we’re getting way out of my league here, so I’ll be brief. If you really want to get your hands dirty by modifying the code of an application created by others, you can do that. If you prefer building a completely custom application using the web programming language of your choice, you can also do that. Oracle offers users a wide range of tools and stays flexible in how they use them. Organizations need tailored systems, and Oracle keeps evolving to make that possible.

 

https://www.oracle.com/application-development/visual-builder-studio/

Moving to CJA? Sunset Adobe Analytics Without Causing Chaos

Adobe Experience Platform (AEP) and Customer Journey Analytics (CJA) continue to emerge as the preferred solutions for organizations seeking a unified, 360-degree view of customer behavior. For organizations requiring HIPAA compliance, AEP and CJA are a necessity. Many organizations are now discussing whether they should retool or retire their legacy Adobe Analytics implementations. The transition from Adobe Analytics to CJA is far more complex than simply disabling an old tool. Teams must carefully plan, perform detailed analysis, and develop a structured approach to ensure that reporting continuity, data integrity, and downstream dependencies remain intact.

Adobe Analytics remains a strong platform for organizations focused exclusively on web and mobile app measurement; however, enterprises that are prioritizing cross-channel data activation, real-time profiles, and detailed journey analysis should embrace AEP as the future. Of course, you won’t be maintaining two platforms after building out CJA, so you must think about how to move on from Adobe Analytics.

Decommissioning Options and Key Considerations

You can approach decommissioning Adobe Analytics in several ways. Your options include: 1) disabling the extension; 2) adding an s.abort at the top of the AppMeasurement custom‑code block to prevent data from being sent to Adobe Analytics; 3) deleting all legacy rules; or 4) discarding Adobe Analytics entirely and creating a new Launch property for CJA. Although multiple paths exist, the best approach almost always involves preserving your data‑collection methods and keeping the historical Adobe Analytics data. You have likely collected that data for years, and you want it to remain meaningful after migration. Instead of wiping everything out, you can update Launch by removing rules you no longer need or by eliminating references to Adobe Analytics.
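As a rough sketch of option 2, the abort can be wired into the AppMeasurement custom code block of the Launch Adobe Analytics extension. This is illustrative only and relies on standard AppMeasurement behavior (s.usePlugins, s.doPlugins, and s.abort); validate it against your own implementation before deploying:

// Illustrative only: cancel every Adobe Analytics beacon from the custom code block
s.usePlugins = true;
s.doPlugins = function (s) {
  s.abort = true; // stops the image request before it is sent
};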

Recognizing the challenges involved in going through the data to make the right decisions during this process, I have developed a specialized tool – Analytics Decommissioner (AD) — designed to support organizations as they decommission Adobe Analytics and transition fully to AEP and CJA. The tool programmatically evaluates Adobe Platform Launch implementations using several Adobe API endpoints, enabling teams to quickly identify dependencies, references, and potential risks associated with disabling Adobe Analytics components.

Why Decommissioning Requires More Than a Simple Shutdown

One of the most significant obstacles in decommissioning Adobe Analytics is identifying where legacy tracking still exists and where removing Adobe Analytics could potentially break the website or cause errors. Over the years, many organizations accumulate layers of custom code, extensions, and tracking logic that reference Adobe Analytics variables—often in places that are not immediately obvious. These references may include s. object calls, hard‑coded AppMeasurement logic, or conditional rules created over the course of several years. Without a systematic way to surface dependencies, teams risk breaking critical data flows that feed CJA or AEP datasets.

Missing or outdated documentation makes the problem even harder. Many organizations fail to maintain complete or current solution design references (SDRs), especially for older implementations. As a result, teams rely on tribal knowledge, attempts to recall decisions made years ago, or manual inspection of the collected data to work out how the system gathers it. This approach moves slowly, introduces errors, and cannot support large-scale environments. When documentation lacks clarity, teams struggle to identify which rules, data elements, or custom scripts still matter and which they can safely remove. Now imagine repeating this process for every one of your Launch properties.

This is where Perficient and the AD tool provide significant value.
The AD tool programmatically scans Launch properties and uncovers dependencies that teams may have forgotten or never documented. A manual analysis might easily overlook these dependencies. AD also pinpoints where custom code still references Adobe Analytics variables, highlights rules that have been modified or disabled since deployment, and surfaces AppMeasurement usage that could inadvertently feed into CJA or AEP data ingestion. This level of visibility is essential for ensuring that the decommissioning process does not disrupt data collection or reporting.

How Analytics Decommissioner (AD) Works

The tool begins by scanning all Launch properties across your organization and asking the user to select a property. This is necessary because the decommissioning process must be performed on each property individually, mirroring how data collection is configured for Adobe Analytics: one Launch property at a time. Once a property is selected, the tool retrieves all production-level data elements, rules, and rule components, including their revision histories. The tool ignores rules and data element revisions that developers disabled or never published (placed in production). The tool then performs a comprehensive search for AppMeasurement references and Adobe Analytics-specific code patterns. These findings show teams exactly where legacy tracking persists, what needs to be updated or modified, and which items can be safely removed. If no dependencies exist, AD can disable the rules and create a development library for testing. When AD cannot confirm whether a dependency exists, it reports the rule names and components where potential issues exist and defers to development experts to decide whether a dependency is real. The user always makes the final decisions.

This tool is especially valuable for large or complex implementations. In one recent engagement, a team used it to scan nearly 100 Launch properties. Some of those properties included more than 300 data elements and 125 active rules. Reviewing that level of complexity manually would have taken weeks, and critical dependencies could still have been missed. Programmatic scanning ensures accuracy, completeness, and efficiency, allowing teams to move forward with confidence.

A Key Component of a Recommended Decommissioning Approach

The AD tool and a comprehensive review are essential parts of a broader, recommended decommissioning framework. A structured approach typically includes:

  • Inventory and Assessment – Identifying all Adobe Analytics dependencies across Launch, custom code, and environments.
  • Mapping to AEP/CJA – Ensuring all required data is flowing into the appropriate schemas and datasets.
  • Gap Analysis – Determining where additional configuration or migration work needs to be done.
  • Remediation and Migration – Updating Launch rules, removing legacy code, and addressing undocumented dependencies.
  • Validation and QA – Confirming that reporting remains accurate in CJA after removal of Launch rules and data elements created for Adobe Analytics.
  • Sunset and Monitoring – Disabling AppMeasurement, removing Adobe Analytics extensions, and monitoring for errors.

Conclusion

Decommissioning Adobe Analytics is a strategic milestone in modernizing the digital data ecosystem. Using the right tools and having the right processes are essential.  The Analytics Decommissioner tool allows organizations to confidently transition to AEP and CJA. This approach to migration preserves data quality, reduces operational costs, and strengthens governance when teams execute it properly. By using the APIs and allowing the AD tool to handle the heavy lifting, teams ensure that they don’t overlook any dependencies.  This will enable a smooth and risk‑free transition with robust customer experience analytics.

Build, Govern, Measure: Agentforce Done Right

Part 1 of our Salesforce Outcomes Playbook made the case for measurable value and orchestrated workflows. In this next post, we move from strategy to execution and show how to put Agentforce to work on a real business KPI.

Perficient is recognized in Forrester’s Salesforce Consulting Services Landscape, Q4 2025 for our North America focus and industry depth in Financial Services, Healthcare, and Manufacturing. We bring proven capabilities across Agentforce, Data 360 (Data Cloud), and Industry Clouds to help clients turn trusted data and well designed workflows into outcomes you can verify.

Forrester asked each provider included in the Landscape to select the top business scenarios for which clients choose them, and from there determined the extended business scenarios that highlight differentiation among the providers. Perficient is shown in the report as having selected Agentforce, Data 360 (Data Cloud), and Industry Clouds as top reasons clients work with us out of those extended business scenarios. Our proven capabilities across Agentforce, Data 360 (Data Cloud), and Industry Clouds help clients achieve measurable outcomes from their Salesforce investments.

Here, we walk through a practical operating model to launch one production agent, govern by design, and measure lift with real users. The goal is confidence without complexity: a visible improvement in a specific KPI and a repeatable pattern you can scale as results compound.

What Success Looks Like

  • Build: A visible lift in one KPI, such as reduced time to resolution in Service or improved conversion in Sales.
  • Govern: Role‑based access with data minimization, accuracy checks, and audit trails in place.
  • Measure: Observability that traces agent decisions and reports performance, adoption, and error rates.
  • Scale: A prioritized backlog and a scale plan that extends the win without unnecessary build.

The Operating Model: Build, Govern, Measure

1) Build one agent for one KPI

Choose a single use case with a business‑visible metric. Ship a working slice and measure against an agreed baseline. Examples:

  • Agent‑assisted case triage that reduces average handle time in Service
  • Quote‑to‑order agent in Agentforce Revenue Management (formerly Revenue Cloud) that shrinks cycle time and errors
  • Renewal‑risk agent that flags at‑risk accounts and improves retention
  • Field service parts availability agent that improves first‑time fix rate

Ground the agent in trustworthy data. Unify records, events, and identities so decisions are consistent and auditable. Use Data 360 foundations to give agents clean context across teams and channels.

2) Govern by Design

Put guardrails in at the start. Define who can access what, how accuracy is checked, and where audit trails are stored.

  • Role‑based access and data minimization
  • Accuracy checks and human‑in‑the‑loop for high‑impact actions
  • Prompt and policy versioning with change tracking
  • Audit trails that capture inputs, decisions, and outcomes
  • Backout controls with pause and rollback procedures

Governance belongs inside your delivery lifecycle, not as an afterthought.

3) Measure and iterate

Use observability to trace decisions, monitor performance, and tune safely.

  • Baseline the KPI before launch and track lift after launch
  • Monitor adoption, satisfaction, and error rates
  • Identify drift, hallucination, or policy violations quickly
  • Iterate prompts, policies, and integrations based on data

Expand capabilities only once the first KPI moves. This keeps momentum high, risk low, and aligns investment to tangible results.

Why This Matters

Most teams already believe in AI. The question is how to make it work here, safely and repeatably. Salesforce continues to expand what you can do with AI, data, and integration. When foundations are solid, those capabilities turn into outcomes you can measure. Agentforce gives you practical building blocks for trusted AI at scale. You get observability to understand how agents perform, governance controls to protect data and accuracy, and low code configuration so business and IT can move together faster.

“Enterprises often underestimate the need for structured enablement, adoption planning, and sustained evolution….” – The Salesforce Consulting Services Landscape, Q4 2025

Partners help translate powerful platform features into everyday outcomes. That is how you reduce risk and accelerate value.

Orchestrate The Workflow, Not Just the Feature

Real value shows up when workflows span systems. Map the end‑to‑end process across Salesforce and adjacent platforms. Eliminate the handoffs that slow customers down. Use reference architectures and integration patterns so the process is portable and resilient. Agentforce is most effective when agents can act across the flow rather than bolt onto a single step.

Ready to translate strategy into a working Agentforce use case that moves a KPI?

Book an Agentforce workshop. We will help you choose one KPI, define data sources, set guardrails and observability, and stand up a working slice you can scale.

Forrester does not endorse any company, product, brand, or service included in its research publications and does not advise any person to select the products or services of any company or brand based on the ratings included in such publications. Information is based on the best available resources. Opinions reflect judgment at the time and are subject to change. For more information, read about Forrester’s objectivity here.

Perficient included in IDC ServiceScape U.S. Midmarket Salesforce Implementation Services 2025–2026

Perficient is proud to be included in the IDC ServiceScape: U.S. Midmarket Salesforce Implementation Services 2025–2026 (Doc# US54222726, January 2026). Led by Jason Bremner, Research Vice President, IT Consulting and Systems Integration Services at IDC, this IDC ServiceScape provides buyers with a structured view of Salesforce services capabilities across the industry.

Why we believe this matters for Salesforce leaders

Organizations are asking for measurable outcomes on Salesforce, not bigger projects. The questions have shifted:

  • How do we modernize without disrupting what works?
  • How do we orchestrate workflows across Salesforce and adjacent platforms for end-to-end impact?
  • How do we adopt AI with confidence so accuracy, access, and auditability are protected?
  • How do we fund what works based on KPI movement rather than effort?

How we help on Salesforce

  • Sales Cloud and Revenue Cloud  – Opportunity to quote, quote to order, renewals, and pricing accuracy
  • Service Cloud and Field Service – Case triage, knowledge curation, parts availability, and first time fix
  • Data Cloud – Unified customer profiles, identity resolution, and event driven context
  • Agentforce – One agent, one KPI patterns with governance and observability by design
  • Integration – Reusable API patterns for portable, resilient end to end workflows
  • Org consolidation and tech debt cleanup – License alignment, reduction of unsupported customizations, native first design

“Our clients ask for clarity, speed, and confidence. We align to a single KPI, orchestrate the workflow, and build in governance so value is visible and repeatable.”
Hunter Austin, Managing Director, Perficient Salesforce Practice

Getting started

  • Explore Perficient’s Salesforce services – Perficient is a trusted Salesforce partner helping enterprises lead AI-powered transformation. We specialize in CRM, data, and personalization—using real-time intelligence to deliver relevant experiences with Agentforce, Data 360, and Agentforce Marketing.

 

Build a Custom Accordion Component in SPFx Using React – SharePoint

When building modern SharePoint Framework (SPFx) solutions, reusable UI components play a crucial role in keeping your code clean, scalable, and maintainable. In particular, interactive components help improve the user experience without cluttering the interface.

Among these components, the Accordion is a commonly used UI element. It allows users to expand and collapse sections, making it easier to display large amounts of information in a compact and organized layout. In this blog, we’ll walk through how to create a custom accordion component in SPFx using React.


Create the Accordion Wrapper Component

To begin with, we’ll create a wrapper component that acts as a container for multiple accordion items. At a high level, this component’s responsibility is intentionally simple: it renders child accordion items while keeping styling and layout consistent across the entire accordion. This approach allows individual accordion items to remain focused on their own behavior, while the wrapper handles structure and reusability.

Accordion.tsx

import * as React from 'react';
import styles from './Accordion.module.scss';
import classNames from 'classnames';
import { IAccordionItemProps } from './subcomponents/AccordionItem';

import { ReactElement } from 'react';

export interface IAccordionProps {
  children?:
    | ReactElement<IAccordionItemProps>
    | ReactElement<IAccordionItemProps>[];
  className?: string;
}


const Accordion: React.FunctionComponent<
  React.PropsWithChildren<IAccordionProps>
> = (props) => {
  const { children, className } = props;
  return (
    <div className={classNames(styles.accordionSubcomponent, className)}>
      {children}
    </div>
  );
};

export default Accordion;

Styling with SCSS Modules

Next, let’s focus on styling. SPFx supports SCSS modules, which are ideal for avoiding global CSS conflicts and keeping styles scoped to individual components. Below is the styling for the accordion and its items.

Accordion.module.scss

.accordionSubcomponent {
    margin-bottom: 12px;
    .accordionTitleRow {
        display: flex;
        flex-direction: row;
        align-items: center;
        padding: 5px;
        font-size: 18px;
        font-weight: 600;
        cursor: pointer;
        -webkit-touch-callout: none;
        -webkit-user-select: none;
        -khtml-user-select: none;
        -moz-user-select: none;
        -ms-user-select: none;
        user-select: none;
        border-bottom: 1px solid;
        border-color: "[theme: neutralQuaternaryAlt]";
        background: "[theme: neutralLighter]";
    }
    .accordionTitleRow:hover {
        opacity: .8;
    }
    .accordionIconCol {
        padding: 0px 5px;
    }
    .accordionHeaderCol {
        display: inline-block;
        width: 100%;
    }
    .iconExpandCollapse {
        margin-top: -4px;
        font-weight: 600;
        vertical-align: middle;
    }
    .accordionContent {
        margin-left: 12px;
        display: grid;
        grid-template-rows: 0fr;
        overflow: hidden;
        transition: grid-template-rows 200ms;
        &.expanded {
          grid-template-rows: 1fr;
        }
        .expandableContent {
          min-height: 0;
        }
    }
}

Styling Highlights

  • Grid‑based animation for expand/collapse
  • SharePoint theme tokens
  • Hover effects for better UX

Creating Accordion Item Component

Each expandable section is managed by AccordionItem.tsx.

import * as React from 'react';
import styles from '../Accordion.module.scss';
import classNames from 'classnames';
import { Icon, Stack } from '@fluentui/react';
import { useState } from 'react';


export interface IAccordionItemProps {
  iconCollapsed?: string;
  iconExpanded?: string;
  headerText?: string;
  headerClassName?: string;
  bodyClassName?: string;
  isExpandedByDefault?: boolean;
}
const AccordionItem: React.FunctionComponent<React.PropsWithChildren<IAccordionItemProps>> = (props: React.PropsWithChildren<IAccordionItemProps>) => {
  const {
    iconCollapsed,
    iconExpanded,
    headerText,
    headerClassName,
    bodyClassName,
    isExpandedByDefault,
    children
  } = props;
  const [isExpanded, setIsExpanded] = useState<boolean>(!!isExpandedByDefault);
  const _toggleAccordion = (): void => {
    setIsExpanded((prevIsExpanded) => !prevIsExpanded);
  }
  return (
    <Stack>
    <div className={styles.accordionTitleRow} onClick={_toggleAccordion}>
        <div className={styles.accordionIconCol}>
            <Icon
                iconName={isExpanded ? iconExpanded : iconCollapsed}
                className={styles.iconExpandCollapse}
            />
        </div>
        <div className={classNames(styles.accordionHeaderCol, headerClassName)}>
            {headerText}
        </div>
    </div>
    <div className={classNames(styles.accordionContent, bodyClassName, {[styles.expanded]: isExpanded})}>
      <div className={styles.expandableContent}>
        {children}
      </div>
    </div>
    </Stack>
  )
}
AccordionItem.defaultProps = {
  iconExpanded: 'ChevronDown',
  iconCollapsed: 'ChevronUp'
};
export default AccordionItem;

Example Usage in SPFx Web Part
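Before dropping the components into a web part, they need to be imported. The relative paths below are illustrative assumptions based on the folder layout used in this post; adjust them to match your project structure.

import Accordion from './components/Accordion/Accordion';
import AccordionItem from './components/Accordion/subcomponents/AccordionItem';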

<Accordion>
  <AccordionItem headerText="What is SPFx?">
    <p>SPFx is a development model for SharePoint customizations.</p>

  </AccordionItem>

  <AccordionItem
    headerText="Why use custom controls?"
    isExpandedByDefault={true}
  >
    <p>Custom controls improve reusability and UI consistency.</p>
  </AccordionItem>
</Accordion>


Conclusion

By building a custom accordion component in SPFx using React, you gain:

  • Full control over UI behavior
  • Lightweight and reusable code
  • Native SharePoint theming

This pattern is perfect for:

  • FAQ sections
  • Configuration panels
  • Dashboard summaries
OmniStudio Expression Set Action – A Beginner-Friendly Guide

OmniStudio Expression Set Action is a powerful feature in Salesforce Industries. It lets you perform calculations and make rule-based decisions inside guided processes such as OmniScripts and Integration Procedures. Instead of writing rules in many places, you can define your business rules once in an Expression Set and use them wherever you need them. This improves consistency, reduces errors, and simplifies maintenance.

What Is an Expression Set Action?

An Expression Set Action acts as a bridge between:

  • OmniScripts / Integration Procedures, and
  • Expression Sets, which are part of the Business Rules Engine (BRE)

In simple terms:

  • Your OmniScript or Integration Procedure sends inputs (like OrderValue or DeliveryType).
  • The Expression Set processes this data using calculations, conditions, or lookups.
  • The result is returned as a structured JSON output, which you can display or use for further logic.

What Can Expression Sets Do?

Expression Sets are Designed to Handle:

  • Mathematical calculations
  • Conditional logic (if/else situations)
  • Lookups using decision matrices
  • Data transformations

Common Real‑World Use Cases

  • Calculating shipping or delivery charges.
  • Determining customer eligibility.
  • Applying discounts or fees.
  • Computing taxes or surcharges

Because Expression Sets work with JSON, they are lightweight, fast, and ideal for complex rule processing.

Creating an Expression Set – Step by Step

Step 1: Navigate to Expression Sets

  1. Go to Salesforce Setup
  2. Search for Expression Sets (under OmniStudio / Industries features)
  3. Click New

 Step 2: Basic Setup

  • Name: Example – ShippingChargesExp
  • Usage Type: Select Default
  • Save the record

This automatically creates Version 1 of the Expression Set.

Building Logic Using Expression Set Builder

After saving, open the Expression Set Builder, which provides a visual canvas for designing logic.

Step 3: Define Variables (Resource Manager)

Variables represent the data your Expression Set uses and produces.

Example Variables:

  • DeliveryType – Input (e.g., Standard or Express)
  • OrderValue – Input (order amount)
  • ExpectedDeliveryCharges – Intermediate result
  • TotalCharges – Final output

Each Variable Should Have:

  • A clear name
  • Data type (number, text, boolean, etc.)
  • Input or Output configuration

Step 4: Use Decision Matrices (Optional but Powerful)

If your charges depend on predefined rules (for example, deliveryType), you can use a Decision Matrix.

  1. Drag the Lookup Table element onto the canvas
  2. Associate it with an existing Decision Matrix, such as DeliveryCharges
  3. Use inputs like DeliveryType to return ExpectedDeliveryCharges

This keeps your logic external and easy to update without modifying the code.

Step 5: Add Calculations

To perform arithmetic operations:

  1. Drag the Calculation element from the Elements panel
  2. Define a formula such as:
    TotalCharges = ExpectedDeliveryCharges + OrderValue

This element performs the actual math and stores the result in a variable.

Step 6: Sequence and Test

  • Arrange elements on the canvas in logical order
  • Use the Simulate option to test with a sample JSON input:
    {
      "DeliveryType": "Standard",
      "OrderValue": 1000
    }
Verify that the output JSON returns the expected TotalCharges.
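For example, assuming the DeliveryCharges decision matrix returns 50 for Standard delivery (an illustrative value, not one defined in this post), the simulated output would look something like this:

    {
      "ExpectedDeliveryCharges": 50,
      "TotalCharges": 1050
    }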

Step 7: Activate the Expression Set

Before using it:

  • Set Active Start Date
  • Define Rank (for rule priority)
  • Select Output Variables
  • Click Activate

Your Expression Set is now ready for use.


 

Using Expression Set Action in an OmniScript

OmniScripts are user-facing guided flows, and Expression Set Actions allow logic to run automatically in the background.

Step 1: Prepare Inputs

In the OmniScript:

  • Create fields such as DeliveryType  and OrderValue
  • Capture values from user input or previous steps

Step 2: Add Expression Set Action

  • Open OmniScript Designer
  • Drag Expression Set Action between steps
  • Select your Expression Set (ShippingChargesExp)

Step 3: Configure Input Mapping

Map inputs using JSON paths, for example:

  • %Step:CustomerDetails:DeliveryType%
  • %Step:CustomerDetails:OrderValue%

Step 4: Use Output Values

In the next step:

  • Use Set Values or Display Text elements
  • Reference returned outputs like TotalCharges

Step 5: Test

Preview the OmniScript with different inputs to ensure calculations work correctly.

Using Expression Set Action in an Integration Procedure

Integration Procedures handle server-side processing and are ideal for performance-heavy logic.

Step 1: Create Integration Procedure

  1. Go to Integration Procedures
  2. Click New
  3. Add an Expression Set Action from the Actions palette

Step 2: Configure the Action

  • Select the Expression Set
  • Map inputs such as DeliveryType and OrderValue

Step 3: Return Outputs

  • Add a Response Action
  • Include output variables
  • Save and execute to validate results

Step 4: Call from OmniScript

Use an Integration Procedure Action inside OmniScript to invoke this logic.

This approach improves scalability and keeps OmniScripts lightweight.

Key Learning Resources

If you’re new to OmniStudio, these resources are highly recommended:

Part 1: Mobile AI 2026: Why On-Device Intelligence is the New Standard

Subtitle: From Critical Medical Hardware to the Apple Ecosystem, the future of mobile intelligence is local, instant, and unified.

We are standing at a hardware tipping point. For the last decade, “AI” on mobile effectively meant one thing: sending data to the cloud and waiting for an answer. For chatbots especially, adding AI to an app meant integrating a slow, spinning loading indicator while data traveled to a server, waited in a queue, and eventually returned text. Users are tired of waiting. They are overwhelmed by generic bots that feel disconnected from the app they are actually using.

But as we move toward 2026, the script is flipping. Phone manufacturers are shipping devices with neural engines (NPUs) so powerful they rival the desktop GPUs of just a few years ago. This shift isn’t just about faster chatbots or smoother animations; it is reshaping critical industries like healthcare and unifying the mobile ecosystem under a single dominant model family: Google Gemini.

The Hardware Revolution: The “Brain” in Your Pocket

The defining trend of the 2025-2026 cycle is the explosion of Hardware Acceleration. Modern mobile processors—whether it’s the latest Snapdragons powering Android flagships or the A-series chips in iPhones—are no longer just Central Processing Units (CPUs). They are dedicated AI powerhouses capable of “always-on” generative tasks.

This hardware leap means we can now run massive models (like Gemini Nano) directly on the device. The benefits are immediate and transformative:

  • Zero Latency: No network round-trips. The intelligence feels instantaneous.
  • Total Privacy: Sensitive data never leaves the phone’s secure enclave.
  • Offline Reliability: Intelligence works in elevators, basements, and airplanes.

The Critical Use Case: Android in Healthcare

Nowhere is this shift more vital than in the rapidly expanding world of Medical Devices. Android has quietly become the operating system of choice for specialized medical hardware, from handheld ultrasound scanners to patient vitals monitors.

Why is the edge critical here? Because medical environments are unforgiving. A doctor in a rural clinic or a paramedic in a speeding ambulance cannot rely on spotty 5G connections to process a patient’s vitals or analyze an X-ray.

  • Privacy Compliance: Processing sensitive patient data (like facial analysis for pain detection) strictly on-device removes complex regulatory cloud compliance hurdles. The data stays with the patient.
  • Reliability: An Android-based diagnostic tool must work instantly, 100% of the time, regardless of Wi-Fi status.
  • Adoption: We are seeing a massive surge in smart, connected medical tools that rely on commodity Android hardware to deliver hospital-grade diagnostics at a fraction of the cost.

The “One AI” Future: Gemini on iOS & Android

Perhaps the most compelling reason to bet on Gemini is the upcoming unification of the mobile AI landscape. Reports indicate that Apple is partnering with Google to integrate Gemini models into iOS 18 and macOS Sequoia for complex reasoning tasks and summaries, a rollout expected to mature by Spring 2026.

While Apple will handle basic tasks with its own on-device models, it is leaning on Gemini’s superior reasoning for the “heavy lifting.” This creates a unique opportunity for developers:

  • Unified Intelligence: Learning to engineer prompts and integrations for Gemini means you are effectively targeting the entire mobile market—both the Android medical devices and the premium iPhone user base.
  • Cross-Platform Consistency: A feature built on Gemini’s logic will behave consistently whether it’s running on a Samsung Galaxy Tab in a hospital or an iPhone 17 in a consumer’s hand.
  • Future-Proofing: With these updates expected shortly, building expertise in Gemini now puts us ahead of the curve when the feature goes mainstream across billions of Apple devices.

In Part 2, we will leave the strategy behind and dive into the code to see how we are already building this future today on iOS and Android.

For Architects, by Architects: See Allie Vaughan and Anu Pandey at Architect Dreamin’ 2026

If you believe great Salesforce architecture is built through collaboration, curiosity, and real world problem solving, Architect Dreamin’ is your kind of event.

Taking place January 21–22, 2026 in Scottsdale, Arizona, Architect Dreamin’ brings together solution architects, system architects, senior consultants, and senior developers for two days of deep technical exploration and peer-led learning. And this year, two of Perficient’s own experts, Allie Vaughan, Technical Director AI & Data 360, and Anu Pandey, Technical Director AI & Data 360, are helping lead the conversation.

What Is Architect Dreamin’?

Architect Dreamin’ is designed for professionals who already architect, design, and deliver at scale. It is created by architects for architects, with no sales pitches and no basics, just meaningful exchange among peers who have been there.

What you will experience:

  • Real world insights and practical solutions tailored specifically for architects
  • A collaborative setting that fosters sharing and learning among industry leaders
  • A program dedicated to maximizing your Salesforce capabilities
  • Expanded workshops, immersive design challenges, and interactive solutioning sessions
  • Community activities that recharge and reconnect, because strong architecture is built on strong relationships

Architect Dreamin’ unites the global Salesforce architect community to share, learn, and grow together. Every discussion, diagram, and debate helps shape the future of scalable, sustainable Salesforce solutions.

Spotlight Session: Unifying the Un‑unifiable in Data 360

When customers span multiple accounts and roles, traditional unification models fall short. This session tackles that complexity head‑on, focusing on how architects can design flexible, high‑performing models that support advanced segmentation and activation.

Unifying the Un‑unifiable: Exploring Custom Unification Strategies in Data 360 invites participants to collaboratively explore patterns, tradeoffs, and practical approaches grounded in real architecture challenges.

In this session, the presenters will:

  • Frame a real scenario with multi account, multi role customer relationships
  • Facilitate open discussion and live whiteboarding to explore custom unification strategies
  • Dive into identity resolution and Data Model Object design within Data 360
  • Consider impacts on segmentation, activation, and cross system alignment
  • Produce two visual modeling diagrams you can take back to your teams, one representing the source Salesforce org and one depicting the proposed Data 360 solution with identity resolution

This session is not about a single right answer. It is about surfacing practical patterns and shared insights that architects can apply in their own environments.

In their own words:

“Data 360 gives us incredible power, but the real challenge is modeling relationships the way customers actually exist in the world to create solutions that can scale and are optimized for a consumption-based product.”
— Allie Vaughan, Technical Director, AI and Data360, Perficient

“What I love about this topic is that there is not a single right answer. Every organization carries hidden assumptions in its data model, and uncovering those together is where the real learning happens. This session is about slowing down, thinking deeply, and designing unification that truly serves the business.”
— Anu Pandey, Technical Director, AI and Data360, Perficient

Spaces are limited, so be sure to sign up today.

Ready to Join the Architect Dreamin’ Community?

Architect Dreamin’ is where experienced Salesforce professionals come together to challenge assumptions, sharpen their craft, and learn from peers who have been there.

Event details:

  • Dates: Wednesday, January 21 – Thursday, January 22, 2026
  • Location: Scottsdale, Arizona
  • Format: 20+ architect‑led sessions across five tracks, hands‑on workshops, collaborative experiences, and community‑driven activities

🎟 Seats are limited.
Secure yours and join the architects who are building the future of Salesforce, together.

 

Upgrading from Gulp to Heft in SPFx | SharePoint

With the release of SPFx v1.22, Microsoft introduced Heft as the new build engine, replacing Gulp. This change brings better performance, modern tooling, and a more standardized approach to building SPFx solutions. In this blog, we’ll explore what this means for developers and how to upgrade.

What is Gulp in SPFx?

In SharePoint Framework (SPFx), Gulp is a JavaScript-based task runner that was traditionally used to automate build and development tasks.

What Gulp Did in SPFx

Historically, the SharePoint Framework (SPFx) relied on Gulp as its primary task runner, responsible for orchestrating the entire build pipeline. Gulp executed a series of scripted tasks, defined inside gulpfile.js and in the various SPFx build rig packages, to automate important development and packaging workflows. These tasks included:

  • Automates repetitive tasks such as:
    • Compiling TypeScript to JavaScript.
    • Bundling multiple files into optimized packages.
    • Minifying code for better performance.
    • Packaging the solution into a “.sppkg” file for deployment.
  • Runs development servers for testing (gulp serve).
  • Watches for changes and rebuilds automatically during development

Because these tasks depended on ad‑hoc JavaScript streams and SPFx‑specific build rig wrappers, the pipeline could become complex and difficult to extend consistently across projects.

The following are the common commands included in gulp:

  • gulp serve – local workbench/dev server
  • gulp build – build the solution
  • gulp bundle – produce deployable bundles
  • gulp package-solution – create the .sppkg for the App Catalog

What is Heft?

In SharePoint Framework (SPFx), Heft is the new build engine introduced by Microsoft, starting with SPFx v1.22. It replaces the older Gulp-based build system.

Heft has replaced Gulp to support modern architecture, improve performance, ensure consistency and standardization, and provide greater extensibility.

Comparison between Heft and Gulp:

Area | Gulp (Legacy) | Heft (SPFx v1.22+)
Core model | Task runner with custom JS/streams (gulpfile.js) | Config-driven orchestrator with plugins/rigs
Extensibility | Write custom tasks per project | Use Heft plugins or small “patch” files; standardized rigs
Performance | Sequential tasks; no native caching | Incremental builds, caching, unified TypeScript pass
Config surface | Often scattered across gulpfile.js and build rig packages | Centralized JSON/JS configs (heft.json, Webpack patch/customize hooks)
Scale | Harder to keep consistent across many repos | Designed to scale consistently (Rush Stack)

Installation Steps for Heft

  • To work with the upgraded version, you need to install Node v22.
  • Run the command npm install @rushstack/heft --global

Removing Gulp from an SPFx Project and Adding Heft (Clean Steps)

  • To work with the upgraded version, install Node v22.
  • Remove your current node_modules and package-lock.json, and run npm install again
  • NOTE: deleting node_modules can take a very long time if you don’t skip the recycle bin.
    • Open PowerShell
    • Navigate to your Project folder
    • Run command Remove-Item -Recurse -Force node_modules
    • Run command Remove-Item -Force package-lock.json
  • Open the solution in VS Code
  • In terminal run command npm cache clean --force
  • Then run npm install
  • Run the command npm install @rushstack/heft --global

After that, everything should work, and you will be using the latest version of SPFx with Heft. However, going forward, there are some commands to be aware of.

Day‑to‑day Commands on Heft

  • heft clean → cleans build artifacts (eq. gulp clean)
  • heft build → compiles & bundles (eq. gulp build/bundle) (Note: prod settings are driven by config rather than --ship flags.)
  • heft start → dev server (eq. gulp serve)
  • heft package-solution → creates .sppkg (dev build)
  • heft package-solution --production → .sppkg for production (eq. gulp package-solution --ship)
  • heft trust-dev-cert → trusts the local dev certificate used by the dev server (handy if debugging fails due to HTTPS cert issues)
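If you want to keep invoking builds through familiar npm scripts, one option is to wrap the Heft commands in package.json. This is only a minimal sketch, not an official SPFx template; the script names are arbitrary:

{
  "scripts": {
    "clean": "heft clean",
    "serve": "heft start",
    "build": "heft build",
    "package-solution": "heft package-solution --production"
  }
}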

Conclusion

Upgrading from Gulp to Heft in SPFx projects marks a significant step toward modernizing the build pipeline. Heft uses a standard, configuration-based approach that improves performance, keeps projects consistent, and can be extended for future needs. By adopting Heft, developers align with Microsoft’s latest architecture, reduce maintenance overhead, and gain a more scalable and reliable development experience.

Building Custom Search Vertical in SharePoint Online for List Items with Adaptive Cards

This blog explains the process of building a custom search vertical in SharePoint Online that targets a specific list using a dedicated content type. It covers indexing important columns and mapping them to managed properties for search. Afterward, a result type is configured with Adaptive Cards JSON to display metadata like title, category, author, and published date in a clear, modern format. Then we add a new vertical on the hub site, giving users a focused tab for Article results. The end result is a streamlined search experience that highlights curated content with consistent metadata and an engaging presentation.

For example, we will start with the assumption that a custom content type is already in place. This content type includes the following columns:

  • Article Category – internal name article_category
  • Article Topic – internal name article_topic

We’ll also assume that a SharePoint list has been created which uses this content type, with the ContentTypeID: 0x0101009189AB5D4FBA4A9C9BFD5F3F9F6C3B

With the content type and list ready, the next steps focus on configuring search so these items can be surfaced effectively in a dedicated vertical.

Index Columns in the List

Indexing columns optimizes frequently queried metadata, such as category or topic, for faster search. This improves performance and makes it easier to filter and refine results in a custom vertical.

  • Go to List Settings → Indexed Columns.
  • Ensure article_category and article_topic are indexed for faster search queries.

Create Managed Properties

First, check which RefinableString managed properties are available in your environment. After you identify them, configure them as shown below:

Refinable string | Field name | Alias name | Crawled property
RefinableString101 | article_topic | ArticleTopic | ows_article_topic
RefinableString102 | article_category | ArticleCategory | ows_article_category
RefinableString103 | article_link | ArticleLink | ows_article_link

Tip: Creating an alias name for a managed property makes it easier to read and reference. This step is optional — you can also use the default RefinableString name directly.

To configure these fields, follow the steps below:

  • Go to the Microsoft Search Admin Center → Search schema.
  • Go to Search Schema → Crawled Properties
  • Look for the field (e.g., article_topic or article_category) and find its crawled property (starts with ows_)
  • Click on property → Add mapping
  • Popup will open → Look for unused RefinableString properties (e.g., RefinableString101, RefinableString102) → click “Ok” button
  • Click “Save”
  • Likewise, create managed properties for all the required columns.

Once mapped, these managed properties become searchable, queryable, and refinable, which means they can be used in search filters, result types, and verticals.
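As a quick illustration, a mapped property can then be referenced directly in a KQL query. The category value below is only a placeholder for whatever categories your list actually contains:

ContentTypeId:0x0101009189AB5D4FBA4A9C9BFD5F3F9F6C3B* AND RefinableString102:"News"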

Creating a Custom Search Vertical

This lets you add a dedicated tab that filters results to specific content, improving findability and user experience. It ensures users quickly access targeted items like lists, libraries, or content types without sifting through all search results. In this example, we will set the filter for a specific articles list.

Follow the steps given below to create and configure a custom search vertical from the admin center:

  • In “Verticals” tab, add a new value as per following configuration:
    • Name = “Articles”
    • Content source = SharePoint and OneDrive
    • KQL query = This is the actual filter that restricts search results to items from the specific list. In our example, we will set it as: ContentTypeId:0x0101009189AB5D4FBA4A9C9BFD5F3F9F6C3B*
    • Filters: Filters are an optional setting that allows users to narrow search results based on specific criteria. In our example, we can add a filter by category. To add a "Category" filter on the search page, follow the steps below:
      • Click on add filter
      • Select “RefinableString102” (This is a refinable string managed property for “article_category” column as setup in above steps)
      • Name = “Category” or other desired string to display on search


Creating a Result Type

Creating a new result type in the Microsoft Search Admin Center lets you define how specific content (like items from a list or a content type) is displayed in search results. In this example, we set matching rules and use an Adaptive Card template to make results easier to scan and more engaging.

Following are the steps to create a new result type in the admin center.

  • Go to admin center, https://admin.cloud.microsoft
  • Settings → Search & intelligence
  • In “Customizations”, go to “Result types”
  • Add new result types with the following configurations:
    • Name = "ArticlesResults" (Note: Specify any name you want to display in the search vertical)
    • Content source = SharePoint and OneDrive
    • Rules
      • Type of content = SharePoint list item
      • ContentTypeId starts with 0x0101009189AB5D4FBA4A9C9BFD5F3F9F6C3B (Note: the content type ID created in the steps above)
      • Layout = Paste the JSON string for the Adaptive Card used to display each search result. The following JSON renders the result:
        {
           "type": "AdaptiveCard",
          "version": "1.3",
          "body": [
            {
              "type": "ColumnSet",
              "columns": [
                {
                  "type": "Column",
                  "width": "auto",
                  "items": [
                    {
                    "type": "Image",
                    "url": <url of image/thumbnail to be displayed for each displayed item>,
                    "altText": "Thumbnail image",
                    "horizontalAlignment": "Center",
                    "size": "Small"
                    }
                  ],
                  "horizontalAlignment": "Center"
                },
                {
                  "type": "Column",
                  "width": 10,
                  "items": [
                    {
                      "type": "TextBlock",
                      "text": "[${ArticleTopic}](${first(split(ArticleLink, ','))})",
                      "weight": "Bolder",
                      "color": "Accent",
                      "size": "Medium",
                      "maxLines": 3
                    },
                    {
                      "type": "TextBlock",
                      "text": "**Category:** ${ArticleCategory}",
                      "spacing": "Small",
                      "maxLines": 3
                    }
                  ],
                  "spacing": "Medium"
                }
              ]
            }
          ],
          "$schema": "http://adaptivecards.io/schemas/adaptive-card.json"
        }


When you set up everything properly, the final output will look like this:

[Screenshot: final search results]

Conclusion

In summary, we created a dedicated search vertical in SharePoint Online for list items with Adaptive Cards, which changes how users experience search. Important metadata becomes clearly visible when you index key columns, map them to managed properties, and design a tailored result type. The Adaptive Card adds a modern presentation layer that is easier to scan and more visually appealing. Publishing the vertical then gives users a dedicated tab for this curated content, making it easier to find and improving the overall user experience.

From Legacy to Modern: Migrating WCF to Web API with the Help of AI

Introduction

The modernization of legacy applications has always been a costly process: understanding old code, uncovering hidden dependencies, translating communication models (for example, from SOAP to REST), and ensuring that nothing breaks in production. This is where artificial intelligence changes the game.

AI does not replace the architect or the developer, but it speeds up the heaviest steps in a migration: it helps read and summarize large codebases, proposes equivalent designs in the new technology, generates drafts of controllers, DTOs, and tests, and even suggests architectural improvements that take advantage of the change. Instead of spending hours on mechanical tasks, the team can focus on what really matters: the business rules and the quality of the new solution.

In this post, we’ll look at that impact applied to a concrete case: migrating a WCF service written in C# to an ASP.NET Core Web API, using a real public repository as a starting point and relying on AI throughout the entire process.

Sample project: a real WCF service to be migrated

For this article, we’ll use the public project jecamayo/t-facturo.net as a real-world example: a .NET application that exposes SOAP services based on WCF to manage advisors and branches, using NHibernate for data access. This kind of solution perfectly represents the scenario of many legacy applications currently running in production, and it will serve as our basis to show how artificial intelligence can speed up and improve their migration to a modern architecture with ASP.NET Core Web API.
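To picture the starting point, here is a minimal sketch of the kind of SOAP contract such a solution typically exposes. The interface, operation, and DTO names below are illustrative, not the actual code from the repository:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical legacy WCF contract for advisor management.
// Names are illustrative; the real t-facturo.net contracts may differ.
[ServiceContract]
public interface IAdvisorService
{
    [OperationContract]
    List<AdvisorDto> GetAdvisors();

    [OperationContract]
    AdvisorDto GetAdvisorById(int id);

    [OperationContract]
    void CreateAdvisor(AdvisorDto advisor);
}

// Data contract serialized in the SOAP envelope.
[DataContract]
public class AdvisorDto
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Name { get; set; }
}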

Key Steps to Migrate from Legacy WCF to a Modern Web API

Migrating a legacy application is not just about “moving code” from one technology to another: it involves understanding the business context, the existing architecture, and designing a modern solution that will be sustainable over time. To structure that process—and to clearly show where artificial intelligence brings the most value—it’s useful to break the migration down into a few key steps like the ones we’ll look at next.

  1. Define the goals and scope of the migration
    Clarify what you want to achieve with the modernization (for example, moving to .NET 8, exposing REST, improving performance or security) and which parts of the system are in or out of the project, in order to avoid surprises and rework.
  2. Analyze the current architecture and design the target architecture
    Understand how the solution is built today (layers, projects, WCF, NHibernate, database) and, with that snapshot, define the target architecture in ASP.NET Core Web API (layers, patterns, technologies) that will replace the legacy system.
  3. Identify dependencies, models, DTOs, and business rules
    Locate external libraries, frameworks, and critical components; inventory domain entities and DTOs; and extract the business rules present in the code to ensure they are properly preserved in the new implementation.
  4. Design the testing strategy and migration plan
    Decide how you will verify that the new API behaves the same (unit tests, integration tests, comparison of WCF vs Web API responses) and define whether the migration will be gradual or a “big bang”, including phases and milestones.
  5. Implement the new Web API, validate it, and retire the legacy WCF
    Build the Web API following the target architecture, migrate the logic and data access, run the test plan to validate behavior, deploy the new solution and, once its stability has been confirmed, decommission the legacy WCF service.

How to Use AI Prompts During a Migration

Artificial intelligence becomes truly useful in a migration when we know what to ask of it and how to ask it. It’s not just about “asking for code,” but about leveraging it in different phases: understanding the legacy system, designing the target architecture, generating repetitive parts, proposing tests, and helping document the change. To do this, we can classify prompts into a few simple categories (analysis, design, code generation, testing, and documentation) and use them as a practical guide throughout the entire migration process.

Analysis and Understanding Prompts

These focus on having the AI read the legacy code and help you understand it faster: what a WCF service does, what responsibilities a class has, how projects are related, or which entities and DTOs exist. They are ideal for obtaining “mental maps” of the system without having to review every file by hand.

Usage examples:

  • Summarize what a project or a WCF service does.
  • Explain what responsibilities a class or layer has.
  • Identify domain models, DTOs, or design patterns.

Design and Architecture Prompts

These are used to ask the AI for target architecture proposals in the new technology: how to translate WCF contracts into REST endpoints, what layering structure to follow in ASP.NET Core, or which patterns to apply to better separate domain, application, and infrastructure. They do not replace the architect’s judgment, but they offer good starting points and alternatives.

Usage examples:

  • Propose how to translate a WCF contract into REST endpoints.
  • Suggest a project structure following Clean Architecture.
  • Compare technological alternatives (keeping NHibernate vs migrating to EF Core).
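For example, asking for a Clean Architecture style layout for the new solution might produce a proposal along these lines (project names are purely illustrative):

src/
  TFacturo.Api/             -> ASP.NET Core Web API: controllers, filters, middleware
  TFacturo.Application/     -> use cases, DTOs, validation
  TFacturo.Domain/          -> entities and business rules
  TFacturo.Infrastructure/  -> data access (NHibernate today, possibly EF Core later)
tests/
  TFacturo.Tests/           -> unit and integration tests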

Code Generation and Refactoring Prompts

These are aimed at producing or transforming specific code: generating Web API controllers from WCF interfaces, creating DTOs and mappings, or refactoring large classes into smaller, more testable services. They speed up the creation of boilerplate and make it easier to apply good design practices.

Usage examples:

  • Create a Web API controller from a WCF interface.
  • Generate DTOs and mappings between entities and response models.
  • Refactor a class with too many responsibilities into cleaner services/repositories.
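As a rough illustration, a prompt like "create a Web API controller from this WCF interface" could produce a draft along these lines for the contract sketched earlier. The controller, route, and repository abstraction are hypothetical and only show the shape of the output:

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

// Assumed abstraction over the data layer (NHibernate today, possibly EF Core later).
public interface IAdvisorRepository
{
    IEnumerable<AdvisorDto> GetAll();
    AdvisorDto GetById(int id);
    void Add(AdvisorDto advisor);
}

// Hypothetical REST equivalent of the IAdvisorService WCF contract.
[ApiController]
[Route("api/advisors")]
public class AdvisorsController : ControllerBase
{
    private readonly IAdvisorRepository _repository;

    public AdvisorsController(IAdvisorRepository repository)
    {
        _repository = repository;
    }

    // GET api/advisors -> replaces GetAdvisors()
    [HttpGet]
    public ActionResult<IEnumerable<AdvisorDto>> GetAll() => Ok(_repository.GetAll());

    // GET api/advisors/{id} -> replaces GetAdvisorById(int id)
    [HttpGet("{id:int}")]
    public ActionResult<AdvisorDto> GetById(int id)
    {
        var advisor = _repository.GetById(id);
        if (advisor is null)
        {
            return NotFound();
        }
        return Ok(advisor);
    }

    // POST api/advisors -> replaces CreateAdvisor(AdvisorDto advisor)
    [HttpPost]
    public IActionResult Create(AdvisorDto advisor)
    {
        _repository.Add(advisor);
        return CreatedAtAction(nameof(GetById), new { id = advisor.Id }, advisor);
    }
}

A draft like this is only a starting point: the team still has to review routes, status codes, validation, and error handling against the behavior of the original SOAP operations.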

Testing and Validation Prompts

Their goal is to help ensure that the migration does not break existing behavior. They can be used to generate unit and integration tests, define representative test cases, or suggest ways to compare responses between the original WCF service and the new Web API.

Usage examples:

  • Generate unit or integration tests for specific endpoints.
  • Propose test scenarios for a business rule.
  • Suggest strategies to compare responses between WCF and Web API.
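For instance, a prompt asking for unit tests could yield a draft like the following, which exercises the hypothetical controller sketched above with an in-memory fake instead of the real NHibernate-backed repository:

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using Xunit;

public class AdvisorsControllerTests
{
    // Simple in-memory fake so the test does not touch the database.
    private sealed class FakeAdvisorRepository : IAdvisorRepository
    {
        private readonly Dictionary<int, AdvisorDto> _data = new()
        {
            [1] = new AdvisorDto { Id = 1, Name = "Jane Doe" }
        };

        public IEnumerable<AdvisorDto> GetAll() => _data.Values;
        public AdvisorDto GetById(int id) => _data.TryGetValue(id, out var advisor) ? advisor : null;
        public void Add(AdvisorDto advisor) => _data[advisor.Id] = advisor;
    }

    [Fact]
    public void GetById_ReturnsOk_WhenAdvisorExists()
    {
        var controller = new AdvisorsController(new FakeAdvisorRepository());

        var result = controller.GetById(1);

        // The behavior preserved from the legacy service: an existing id returns its data.
        var ok = Assert.IsType<OkObjectResult>(result.Result);
        var advisor = Assert.IsType<AdvisorDto>(ok.Value);
        Assert.Equal("Jane Doe", advisor.Name);
    }

    [Fact]
    public void GetById_ReturnsNotFound_WhenAdvisorDoesNotExist()
    {
        var controller = new AdvisorsController(new FakeAdvisorRepository());

        var result = controller.GetById(999);

        Assert.IsType<NotFoundResult>(result.Result);
    }
}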

Documentation and Communication Prompts

They help explain the before and after of the migration: documenting REST endpoints, generating technical summaries for the team, creating tables that show the equivalence between WCF operations and Web API endpoints, or writing design notes for future evolutions. They simplify communication with developers and non-technical stakeholders.

Usage examples:

  • Write documentation for the new API based on the controllers.
  • Generate technical summaries for the team or stakeholders.
  • Create equivalence tables between WCF operations and REST endpoints.
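For instance, asking for an equivalence table based on the sketches above might yield something like this (operation and route names are illustrative):

WCF operation               HTTP verb   REST endpoint
GetAdvisors()               GET         /api/advisors
GetAdvisorById(int id)      GET         /api/advisors/{id}
CreateAdvisor(AdvisorDto)   POST        /api/advisors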

To avoid making this article too long and to be able to go deeper into each stage of the migration, we'll leave the definition of specific prompts, with real examples applied to the t-facturo.net project, for an upcoming post. In that next article, we'll go through, step by step, what to ask the AI in each phase (analysis, design, code generation, testing, and documentation) and how those prompts directly impact the quality, speed, and risk of a WCF-to-Web-API migration.

Conclusions

The experience of migrating a legacy application with the help of AI shows that its main value is not just in “writing code,” but in reducing the intellectual friction of the process: understanding old systems, visualizing possible architectures, and automating repetitive tasks. Instead of spending hours reading WCF contracts, service classes, and DAOs, AI can summarize, classify, and propose migration paths, allowing the architect and the team to focus their time on key design decisions and business rules.

At the same time, AI speeds up the creation of the new solution: it generates skeletons for Web API controllers, DTOs, mappings, and tests, acting as an assistant that produces drafts for the team to iterate on and improve. However, human judgment remains essential to validate each proposal, adapt the architecture to the organization’s real context, and ensure that the new application not only “works,” but is maintainable, secure, and aligned with business goals.

Understanding Pages, Blocks, and Rendering in Optimizely CMS https://blogs.perficient.com/2026/01/05/understanding-pages-blocks-and-rendering-in-optimizely-cms/ https://blogs.perficient.com/2026/01/05/understanding-pages-blocks-and-rendering-in-optimizely-cms/#respond Tue, 06 Jan 2026 05:14:28 +0000 https://blogs.perficient.com/?p=389152

Before diving into key concepts, ensure that Optimizely CMS is set up on your local machine. You can refer to the detailed setup guide here: "Optimizely CMS Set Up: A Developer's Guide".

Once Optimizely CMS is successfully configured, the following core concepts will help you understand how to work with pages, blocks, and rendering in your project.

Understanding Page Types and Block Types

Page Types

  • Represent full pages in the CMS (e.g., Home Page, Article Page).
  • Defined as C# classes inheriting from PageData.
  • Each public property becomes a field editors can fill in the CMS.

Block Types

  • Represent reusable content components (e.g., Teasers, Banners).
  • Defined as C# classes inheriting from BlockData.
  • Can be embedded in pages or reused across multiple pages.

In the Optimizely CMS project (created via dotnet new epi-alloy-mvc), add the files as shown below.

Type          Folder             Example
Page Type     /Models/Pages/     StandardPage.cs
Block Type    /Models/Blocks/    TeaserBlock.cs

Example: Basic Page Type

[Screenshot: StandardPage page type definition]

MainBody: Rich text field using XhtmlString for formatted content.

MainContentArea: Content area (ContentArea) used for adding blocks to the page.
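The original post shows this code as a screenshot; a minimal sketch of such a page type might look like the following (the GUID and display names are placeholders, not values from the actual project):

using System.ComponentModel.DataAnnotations;
using EPiServer.Core;
using EPiServer.DataAnnotations;

// Minimal page type sketch; the GUID is a placeholder and must be unique in a real project.
[ContentType(
    DisplayName = "Standard Page",
    GUID = "00000000-0000-0000-0000-000000000001",
    Description = "A basic page with a rich text body and a content area.")]
public class StandardPage : PageData
{
    // Rich text field for formatted content.
    [Display(Name = "Main Body", Order = 10)]
    public virtual XhtmlString MainBody { get; set; }

    // Drop zone where editors can add blocks.
    [Display(Name = "Main Content Area", Order = 20)]
    public virtual ContentArea MainContentArea { get; set; }
}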

After compiling, the CMS UI will reflect this new page type, and editors can start creating pages.

Creating a Page

  • Navigate to the page tree.
  • Right-click a parent page → New Page.
  • Select your new page type (e.g., “Standard Page”).
  • Fill in the fields like Main Body and Main Content Area.

Working with Blocks

Blocks are reusable content chunks. Here's how to define a simple teaser block:

[Screenshot: TeaserBlock block type definition]
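The code again appears as a screenshot in the original post; a comparable block type could be sketched like this (the GUID and property names are illustrative):

using System.ComponentModel.DataAnnotations;
using EPiServer.Core;
using EPiServer.DataAnnotations;
using EPiServer.Web;

// Minimal block type sketch; the GUID is a placeholder.
[ContentType(
    DisplayName = "Teaser Block",
    GUID = "00000000-0000-0000-0000-000000000002",
    Description = "A reusable teaser with a heading and an image.")]
public class TeaserBlock : BlockData
{
    [Display(Name = "Heading", Order = 10)]
    public virtual string Heading { get; set; }

    // Restricts the editor UI to picking an image asset.
    [UIHint(UIHint.Image)]
    [Display(Name = "Image", Order = 20)]
    public virtual ContentReference Image { get; set; }
}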

You can now drag and drop this block into any compatible page area.

Creating a Block

  • Open the Assets Panel.
  • Click New Block → Choose “Teaser Block”.
  • Fill in the Heading and Image.
  • Drag and drop the block into a content area on a page.

Routing and Rendering Views

Create Razor View for StandardPage

  1. Ensure the page type class exists at /Models/Pages/StandardPage.cs (created earlier).
  2. Create a controller in /Controllers/StandardPageController.cs (a sketch follows after this list).
    [Screenshot: StandardPageController class]
  3. Create the Razor view at /Views/StandardPage/Index.cshtml.
    [Screenshot: Index.cshtml view]
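As a sketch of step 2, a conventional controller for this page type might look like the following; Optimizely routes requests for StandardPage instances to it by naming convention:

using EPiServer.Web.Mvc;
using Microsoft.AspNetCore.Mvc;

public class StandardPageController : PageController<StandardPage>
{
    // The current page instance is bound automatically and passed to the view as the model.
    public IActionResult Index(StandardPage currentPage)
    {
        return View(currentPage);
    }
}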

To add blocks to pages, add a content area property to the page type and render it in the Razor view.

    1. In /Models/Pages/StandardPage.cs, add a property
      [Screenshot: the MainContentArea property added to StandardPage]
      This creates a drop zone in the CMS UI where editors can add blocks.
    2. In /Views/StandardPage/Index.cshtml, update the view
      [Screenshot: the updated Index.cshtml view]
      This renders whatever blocks the editor drops into MainContentArea.
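Putting those two steps together, a minimal Index.cshtml could look like the sketch below, assuming the EPiServer HTML helpers are imported (for example via _ViewImports.cshtml, as in the Alloy template):

@model StandardPage

<h1>@Model.Name</h1>

@* Renders the rich text field *@
@Html.PropertyFor(m => m.MainBody)

@* Renders whatever blocks the editor drops into MainContentArea *@
@Html.PropertyFor(m => m.MainContentArea)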

Once compiled and the site is running:

  • Open a page of type Standard Page
  • Find a field called Main Content Area
  • From the Assets Panel, drag blocks (like Teaser Block) into this area
  • Publish the page and view it in the browser.

Admin & Edit mode

Access the CMS back office:

  • Navigate to /episerver
  • Log in with the seeded admin account, or set one up using ASP.NET Core Identity

Preview & Debugging

Use Visual Studio’s debugger, breakpoints, and logs. CMS also offers content preview modes and version management tools.

Deployment Basics

You can deploy Optimizely CMS using:

  • Azure App Services
  • IIS on Windows Server
  • Docker (with a configured image)

Use dotnet publish for build output:

[Screenshot: dotnet publish command]
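The command itself is shown as a screenshot in the original post; a typical invocation might look like this (the configuration and output folder are just examples):

dotnet publish -c Release -o ./publish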

Content Editing Experience

Optimizely offers:

  • Inline editing
  • Drag-and-drop interface
  • Preview modes by device and visitor group

Content editors can switch languages, schedule publishing, and experiment with layout—all from the CMS UI.

Conclusion

Optimizely CMS offers a structured and extensible framework for building content‑driven websites. By understanding core concepts such as Page Types, Block Types, and view rendering, developers can quickly create scalable and reusable components. Combined with its intuitive editing tools, Optimizely enables teams to deliver and manage content efficiently, making it a strong foundation for modern digital experiences.
